Hacker News | alexgotoi's comments

At any HR conference you go to, there are two overused words: AI and Skills.

As of this week, this also applies to Hacker News.


> this isn’t talent, but practice

This. Totally agree. Seniority is based on the volume of practice someone has. Period.


There is no denying practice is needed, but... I'd been doing this (reducing ambiguity and simplifying complex problems) since before my first job, in free software communities, yet I still wasn't anywhere close to "senior" when I joined my first job at a demanding SW organization at 22.

There was simply a lot I did not know, but I had the talent to do this part well (sure, one can argue that I had "practice" doing this with any problem since I was ~10 years old, but calling that "senior" would be... over the top: I think it rather qualifies as "talent").

It took me a couple of years of learning good software engineering from my wonderful and smart senior colleagues and through my own failures and successes for me to become a Tech Lead too.


Disagree, it's not _just_ practice. You can do something for 10,000 hours but never actively try to improve. Does that mean you're now more senior because you had more volume of practice?

E.g., let's say someone spends 10k hours doing just 'addition and subtraction' problems on two-digit numbers. Are they now better at maths than someone who spent 0.1k hours on a variety of problems?

To grow as a software engineer, you need to have volume + have this be outside of your comfort zone + actively try to improve/challenge yourself.

Apart from this, I do agree it's not 'innate talent' that drives someone to become a senior engineer, and I think anyone with the right attitude / mindset can do so.


“Some people say they have 20 years' experience, when in reality, they have 1 year's experience repeated 20 times.”

- Stephen Covey


Being senior is clearly about having certain abilities or skills, and it has absolutely nothing to do with how long it took you to acquire them.

> * The fundamental challenge in AI for the next 20 years is avoiding extinction.

This reminded me of the movie Don’t Look Up, where they basically gambled with humanity’s extinction.


LLMs still need to bring clear added value to enterprise and corporate work; otherwise, they remain a geek’s toy.

Big media agencies that claim to use AI rely on strong creative teams who fine-tune prompts and spend weeks doing so. Even then, they don’t fully trust AI to slice long videos into shorter clips for social media.

Heavy administrative functions like HR or Finance still don’t get approval to expose any of their data to LLMs.

What I’m trying to say is that we are still in the early stages of LLM development, and as promising as this looks, it’s still far from delivering the real value that is often claimed.


I think their non-deterministic nature is what makes them difficult to adopt. It’s hard to train somebody the old way of “if you see this, do this” because when you call the LLM twice you’ll most likely get different results.
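A toy sketch of why two identical calls can disagree: most LLM APIs sample the next token from a probability distribution rather than always taking the most likely one. The distribution and function names below are invented for illustration; real APIs expose this knob as a temperature/sampling setting.

```python
import random

# Invented next-token distribution for illustration only.
DIST = {"approve": 0.5, "escalate": 0.3, "reject": 0.2}

def sample_token(dist, temperature=1.0, rng=random):
    if temperature == 0:
        # Greedy decoding: always pick the most likely token -> deterministic.
        return max(dist, key=dist.get)
    # Re-weight probabilities by temperature, then sample -> non-deterministic.
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return rng.choices(list(dist), weights=weights, k=1)[0]

# Two greedy calls always agree; two sampled calls may not.
assert sample_token(DIST, temperature=0) == sample_token(DIST, temperature=0)
```

So "call it twice, get two answers" isn't a bug in the usual sense; it's the default decoding strategy, which is exactly what makes "if you see this, do this" training material hard to write.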

It took a long time to computerize businesses and it might take some time to adopt/adapt to LLMs.


Oh no, AI-generated code will make me a bad programmer? Thank God I’ve been hand-crafting my 500-line regex monstrosities for 20 years—clearly the gold standard. Next you’ll tell me copy-pasting Stack Overflow turns me into a cargo cultist. Wake up: bad programmers have existed since punch cards; AI just speeds up the Darwinian cull. Use it to boilerplate the boring bits, then actually grok what it spits out. Or keep Luddite-posting while the rest of us ship.

The scary part isn’t “AI in the brain”, it’s who owns the update server.

We’re not talking cyberpunk super‑intelligence, we’re talking ad‑tech and enterprise SaaS burrowing into your reward system: telemetry as a medical feature, dark patterns as “nudges”, mandatory compliance firmware for work.

At that point it’s not an implant, it’s MDM for your thoughts.


Love the “Click to keep avoiding work” - so true!

This is the most brutal cut of all

The noprocast feature should have an option to insult the user for returning here.

Nah, look at the Y logo :)

In 2026 we’re still in 1998, not 2000. The AI bubble has room to inflate, the chips and “platform” stocks will keep printing new ATHs, and everyone will tell themselves this time it’s different because “it’s infrastructure”.

What actually lands inside big companies will be much less cinematic: narrow, localized automations in finance, HR, ops, compliance. A cluster of weird little agents that reconcile reports, nudge workflows, and glue together legacy systems. Not “AI runs the business end to end”, just a slow creep of point solutions that quietly become indispensable while the slideware keeps talking about transformation.


The coolest thing here, technically, is that this is one of the first public projects treating time as a first‑class axis in training, not just a footnote in the dataset description.

Instead of “an LLM with a 1913 vibe”, they’re effectively doing staged pretraining: big corpus up to 1900, then small incremental slices up to each cutoff year so you can literally diff how the weights – and therefore the model’s answers – drift as new decades of text get added. That makes it possible to ask very concrete questions like “what changes once you feed it 1900–1913 vs 1913–1929?” and see how specific ideas permeate the embedding space over time, instead of just hand‑waving about “training data bias”.
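A toy sketch of that staged-pretraining shape: the "model" below is just a word-frequency table, but the loop has the same structure the comment describes (one base corpus up to 1900, then incremental slices per cutoff year, with a checkpoint after each slice so successive checkpoints can be diffed). All corpus text here is invented for illustration.

```python
from collections import Counter

# Invented per-cutoff text slices standing in for decades of corpus data.
slices = {
    1900: "steam rail telegraph empire",
    1913: "aeroplane wireless motor car",
    1929: "radio cinema market crash",
}

model = Counter()          # stand-in for the model's "weights"
checkpoints = {}
for year in sorted(slices):
    model.update(slices[year].split())   # one incremental pretraining stage
    checkpoints[year] = Counter(model)   # snapshot at this cutoff year

# Diff two cutoffs: which "concepts" entered between 1900 and 1913?
new_words = set(checkpoints[1913]) - set(checkpoints[1900])
```

The point of the structure is the checkpoints: because each cutoff year gets its own snapshot, you can diff any two of them and ask exactly the "what changed between 1900–1913 and 1913–1929?" question, rather than comparing a single final model against vibes.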


This is exactly the kind of boring, unsexy feature that actually builds trust. It’s the opposite of the usual “surprise, here’s an AI sidebar you didn’t ask for and can’t fully disable” pattern. If they want people to try this stuff, the path is pretty simple: ship a browser that treats AI like any other power feature. Off by default, clearly explained, reversible, and preferably shippable as an extension. You can always market your way into more usage; you can’t market your way back into credibility once you blow it.

It is well known, as a result of the expert reports in US v. Google, that software users generally do not change defaults.

Whereas providing an option or a setting that the user must locate and change doesn't really mean much. Few users will ever see it, let alone decide to change it.

For example, why pay 22 billion to be "the default" if users can just change the default setting?


Mozilla is certainly paddling upstream. Of all of the AI-integrated apps and sites that I'm subjected to, I can think of exactly two where it wasn't obnoxious and a pain in the neck to disable.

Kagi. Zed. That's it, that's the list.


Apple's Preview is my favourite. It uses AI to allow you to copy text from images. And that's it.

This is my go-to example of “AI features that are actually useful to me”: ubiquitous OCR, and ubiquitous semantic search in photos.

Not a chat bot. Not an “ask ai” button, just those things.


That's not "AI" in the sense of LLMs, which is what the recent trend in AI complaints is about.

> Kagi

I've been toying with that on and off for ages. I'm finally a paid-up user, due to the fact that their guesswork engine (or makey-upy machine, or your preferred name) can be easily turned off, and stays off until requested otherwise.


My problem here is this: products are designed with a vision. If you design with 2-3 visions it won’t be that good; if you design with one vision (AI), then the non-AI version of the product will be an afterthought. This tells me the non-AI version of it will suffer (IMHO).

> if you design with one vision (AI), then the non-AI version of the product will be an afterthought

That’s like saying if a car manufacturer adds a "Sport Mode", the steering wheel and brakes suddenly become an afterthought.

Being AI-available means we'll welcome more Firefox users who would otherwise choose a different browser. Being AI-optional means we won't alienate the anti-AI crowd. Why not embrace both?


I don't agree. I think opinionated design products are much worse in general.

It's really great when your opinions align with the designer's. If they don't, you're out of luck, stuck with something that isn't really for you.

This is why I love software that gives as much choice as possible. Like KDE, for example. Because I have a pretty strong vision myself and I expect my tools to conform to that, not the other way around.


> This is exactly the kind of boring, unsexy feature that actually builds trust.

Though not so much trust as an option to enable AI features would build.


The trust is built by not enabling this by default, and by not burying the "kill switch" somewhere in settings that non-power users will never find.

Worse yet, burying it in settings where there's a big disclaimer that they can be (and often are) reset when the browser updates.

Currently the disable switch is right next to the AI chatbot settings. It’s pretty in-your-face.

I've been really confused as to what all the hubbub is about. I think I saw the sidebar for about 4 seconds on each of my installs before I hid it forever. I tried to re-enable it to see what people were complaining about, but couldn't find it within 10 seconds so gave up.

AFAIK, you can't enable them without resetting things in about:config. So it's a "big red button", and that's a good thing.

Keyword: currently

They said they'll create a Bigger Red Switch (TM), so this interim solution is better than nothing, and it's going to get better.

If they're breaking their usual silence to talk about it on Mastodon via an employee/developer, they'd better keep their word, because they're on a razor's edge there, and they'll be cut pretty badly if they slip on this one.


saying "trying to slow down, I promise" doesn't magically make your blatant advert not spam

edit: the original post ended with words to the tune of "Totally unrelated, but I run [insert newsletter here]... "


Edited and removed.

Why? Why kowtow to people who don't care about your wellbeing or long term success?

> It’s the opposite of the usual “surprise, here’s an AI sidebar you didn’t ask for and can’t fully disable” pattern.

They literally shipped an AI sidebar nobody asked for.


I find it a nice feature.
