
Just looked - Microsoft Authenticator doesn't appear to work. I might be able to get off of it but it will take some prep. My banks are supported so that's good.


Why would you use Microsoft Authenticator when there are hundreds of other apps that manage OTPs?

Use aegis https://f-droid.org/packages/com.beemdevelopment.aegis/


Because many admins are horrible and disable TOTP for "security".

My uni does it and I've had to use the only alternative option, a cell call, and rigged Tasker to automatically answer and play the needed tone so I don't need to carry it with me.


Good question. That was for my MS account/licenses and some Azure stuff. I use Google Authenticator for most things.

Thanks for the link, I'll take a look. I might just move it to a secondary device first.


Microsoft Authenticator should work on GOS. I can only find a single person saying it doesn't, but there are plenty of reasons it might not work for them (VPN, too-strict exploit protection settings). And there are multiple people mentioning it working fine.


Microsoft Authenticator works on my GrapheneOS (I have the Play Services, not sure if it matters).


> if you could exclude all of the R&D and training costs

LLMs have a short shelf life. They don't know anything past the day they're trained. It's possible to feed or fine-tune them with a bit of updated data, but their world knowledge and views are firmly stuck in the past. It's not just news - they'll also trip up on new syntax introduced in the latest version of a programming language.

They could save on R&D but I expect training costs will be recurring regardless of advancements in capability.


Recently llama.cpp made a few common parameters the default (-ngl 999, -fa on), so it got simpler: --model, --context-size, and --jinja generally do it to start.

We end up fiddling with other parameters because they provide better performance for a particular setup, so it's well worth it. One example is the recent --n-cpu-moe switch, which offloads experts to the CPU while filling all available VRAM and can give a 50% boost on models like gpt-oss-120b.
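
For illustration (not my exact setup), a launch might look roughly like this - the model filename, context size and --n-cpu-moe value below are placeholders to tune for your hardware, and -c/--ctx-size is the context-size flag:

    # minimal: the -ngl and flash-attention defaults now cover the common case
    llama-server --model gpt-oss-120b.gguf --ctx-size 16384 --jinja

    # same, but keep the expert weights of the first 30 layers on the CPU
    # so the rest of the model can fill available VRAM
    llama-server --model gpt-oss-120b.gguf --ctx-size 16384 --jinja --n-cpu-moe 30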

After tasting this, not using it is a no-go. Meanwhile on Ollama there's an open issue asking for this: https://github.com/ollama/ollama/issues/11772

Finally, llama-swap separately provides the auto-loading/unloading feature for multiple models.


Do you really need an H200 for this? Seems like something a consumer GPU could do. Smaller models might be ideal [0] as they don't require extensive world knowledge and are much more cost-efficient/faster.

Why can't you build this today?

[0]: https://arxiv.org/pdf/2506.02153 Small Language Models are the Future of Agentic AI (Nvidia)


Related discussion on Anubis: https://news.ycombinator.com/item?id=43427679


I use Readdle Documents to sync PDF folders with my server PC via FTP. Free version supports PDF highlighting & simple annotations, basic file management, and automatically syncs back everything.


Assuming it's correct, I think this answer explains it well: https://stackoverflow.com/a/31996121/283879

Basically, yes, it may write snapshots per file (never per commit) locally, but there is a separate routine that transparently repacks the whole thing with deltas.
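
You can watch this happen with a couple of commands (just a sketch; git normally decides on its own when to gc, so run it manually to force the repack):

    git count-objects -v   # loose objects: full per-file snapshots
    git gc                 # repacks them (roughly git repack -a -d plus pruning)
    git count-objects -v   # loose count drops, in-pack count rises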


There was talk about introducing type syntax as valid but ignored in the JS language, making TS valid JS.
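
Roughly, the idea (see the proposal linked at the end of this comment) is that annotations like these would parse as valid JS and simply be ignored at runtime - this snippet is just an illustration, not the proposal's exact grammar:

    // Plain TypeScript today; under the proposal a JS engine would parse it
    // and skip over the ": number" annotations instead of throwing a SyntaxError.
    function add(a: number, b: number): number {
      return a + b;
    }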

It would take forever to become mainstream, but if node and major browsers started to support this tomorrow, then along with ESM modules we could drop TS compilation and bundling entirely during development, safely publish npm packages as TS (even bundled TS), and simplify tooling for monorepos, IDEs, etc.

Unfortunately that wouldn't solve dealing with templates like JSX/TSX or future language syntax/features.

https://devblogs.microsoft.com/typescript/a-proposal-for-typ...


Right, yeah, that solution would be many years out and doesn't work for JSX. As opposed to compiling to JS/JSDoc, which could be done today and should solve our problems with stepping into npm package code without dead ends.


They're Turing-complete and modular, so it's not really about what they can or cannot do.

Testability, tooling and the open-source ecosystem are either bad or non-existent. Writing PL/SQL is the worst environment I've worked in. That database sent emails, processed CSVs, scheduled jobs, etc., yet there was still a web app to maintain next to it.

They're OK for certain things like essential triggers or performance-sensitive functions, but I would never deliberately put app logic in there. Major red flag.


Yep. Releasing, testing, debugging, etc. are all more difficult in stored procs than in a “regular” language. Stored procs have other downsides:

  - Often unique to that DB, so it locks you in
  - Scaling that code is now tied to scaling your DB tier
  - Tooling is often very inadequate
  - Versioning and backwards compatibility of code can be a challenge


Some of those concerns apply to any database. Your query could slow down if the database picks a bad plan, so you could say you'll never trust the db to scale. That's separate from scaling the stored proc - just using the db can run into a scaling issue.


No, what I mean is that your code's scaling is now directly tied to how your DB scales. Your SP code can be impacting the rest of your DB, and vice versa. I have seen large SP-based systems (hundreds of thousands or even millions of lines of SP) require Oracle boxes to be scaled up at enormous cost.

Not because of slow queries, but just the cost of executing the stored procs themselves.


> Testability, tooling and the open-source ecosystem are either bad or non-existent

If you're properly testing the code in your application that exercises persistence, that means your test harness runs a real database like the one you run in production, and thus you can also write the database-logic tests using your own application's testing facilities.

Of the things you listed, "the database sends e-mail" is the only one where I'd think you'd have to change the code at all, and have the database go through a mockable middle-man so that it becomes testable; but everything else can be comfortably tested from a test suite that is able to talk to a real database.
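
As a sketch of what that can look like (assumptions on my part: Node with the pg client, a test database reachable via DATABASE_URL, and a hypothetical stored function normalize_email to exercise):

    // hypothetical example - exercises a stored function against a real test DB
    import { test } from "node:test";
    import assert from "node:assert/strict";
    import { Client } from "pg";

    test("normalize_email trims and lowercases", async () => {
      const client = new Client({ connectionString: process.env.DATABASE_URL });
      await client.connect();
      try {
        const { rows } = await client.query(
          "SELECT normalize_email($1) AS email",
          ["  Foo@Example.COM "]
        );
        assert.equal(rows[0].email, "foo@example.com");
      } finally {
        await client.end();
      }
    });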


> That database sent emails, processed CSVs, scheduled jobs, etc.

That's really a bad, very bad use of SPs. They should only deal with and care about the data, not interact with any external systems.


Just upgraded to Windows 11 (for its HDR features) and weather is now part of the "Widgets" bombarding me with ads and poor news sources. I genuinely tried to customize my "feed" but it's all junk (no reputable sources whatsoever) and I didn't find a way to remove them.

So the only sane course of action was to disable widgets altogether while I still can. And now I don't have the weather anymore.


You can run the Weather app stand-alone & pin it to your Taskbar or Start Menu:

    Windows Key > weather

