cestith's comments | Hacker News

TLS doesn’t mask the IP of the server. The updater probably isn’t using DNS over HTTPS. If I can determine that a user’s updater just hit the update check server, I can start impersonating the update server.

That takes it out of one-day-away territory, but it does mean an attacker only needs the malicious HTTP server up, and therefore detectable, during the actual attack window.

Then, of course, if you’re also acting as their DNS server, you can send them to the wrong update check server in the first place. I wonder if the updater validates the certificate.
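Here’s a minimal sketch in Python of the check that matters, with updates.example.com and the /manifest path standing in as hypothetical stand-ins for whatever the real updater uses. If the client verifies the certificate chain and hostname like this, controlling DNS or routing isn’t enough; the attacker would also need a certificate valid for that name:

    # Minimal sketch of a TLS-validating update check, assuming a
    # hypothetical host updates.example.com and path /manifest.
    import socket
    import ssl

    UPDATE_HOST = "updates.example.com"  # hypothetical update check server

    def fetch_update_manifest() -> bytes:
        # The default context verifies the chain against the system trust
        # store and checks the certificate against UPDATE_HOST.
        ctx = ssl.create_default_context()
        with socket.create_connection((UPDATE_HOST, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=UPDATE_HOST) as tls:
                tls.sendall(
                    b"GET /manifest HTTP/1.1\r\n"
                    b"Host: " + UPDATE_HOST.encode() + b"\r\n"
                    b"Connection: close\r\n\r\n"
                )
                chunks = []
                while data := tls.recv(4096):
                    chunks.append(data)
        return b"".join(chunks)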


I usually just say kilobyte when speaking, and say “binary kilobyte” or “decimal kilobyte” if it’s not clear from context. I still use the IEC symbols when I mean binary and the SI symbols when I mean decimal (usually, though I sometimes forget). The extra ‘i’ doesn’t cost that much.
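For anyone who hasn’t internalized it, a quick sketch of the difference that extra ‘i’ marks (decimal units step by 1000, binary by 1024):

    # SI (decimal) vs. IEC (binary) units for the same byte count.
    size = 1_500_000  # bytes

    print(f"{size / 1000:.1f} kB")   # decimal kilobytes: 1500.0 kB
    print(f"{size / 1024:.1f} KiB")  # binary kibibytes:  1464.8 KiB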

The dnf, apt, or pacman tools could point to a repo where the packages have paid activation.

Companies can already do that. This is how Red Hat works in its entirety.

This has nothing to do with the base distribution.


I don’t remember ever having to activate a piece of Red Hat code after downloading it. I do remember paying a subscription to get authenticated access to particular repos. It’s been a while, though.

I’d like to know if I’m the subject of any Interpol notices. That is, unless it’s a Black Notice and it’s correct; then I couldn’t really care. Even that one, though, if my name’s attached and it’s wrong, that seems really bad.

https://en.wikipedia.org/wiki/Interpol_notice


I think people tend to expect a range of scruples. The standards for how someone is targeted are often far laxer than those for who is targeted and for what reasons.


The special hardware was actually just a DSP at the ISP end. The big difference was that before 56k modems, we had multiple analog lines coming into the ISP. For 56k we had to upgrade to digital service (DS1 or ISDN PRI) and break the 64k digital channels out to separate DSPs.

The economical way to do that was integrated RAS systems like the Livingston Portmaster, Cisco 5x00 series, or Ascend Max. Those would take the aggregated digital line, break out the channels, hold multiple DSPs on multiple boards, and have an Ethernet (or sometimes another DS1 or DS3 for more direct uplink), with all those parts communicating inside the same chassis. In theory, though, you could break out the line in one piece of hardware and then have a bunch of firmware modems.
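For a sense of scale, the standard T-carrier numbers (nothing specific to any one ISP’s setup):

    # Channel math for the digital lines above; standard figures.
    DS0_KBPS = 64

    ds1_channels = 24        # a DS1/T1 carries 24 DS0s (US ISDN PRI: 23B+D)
    ds3_channels = 28 * 24   # a DS3 multiplexes 28 DS1s

    print(f"DS1: {ds1_channels} channels, {ds1_channels * DS0_KBPS} kbps")
    print(f"DS3: {ds3_channels} channels, {ds3_channels * DS0_KBPS / 1000:.1f} Mbps")
    # Each 64k channel feeds one modem DSP in the RAS chassis.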


Technically an LLM is a tool for extracting candidate responses to plain-text requests. Since (textual) programming languages are languages, they can create passable candidate responses to queries about those. Certain LLMs such as Copilot and Claude have had their training focused a bit more towards programming tasks, but saying that LLMs as a class are for coding assistance is a little narrowly stated.

It might be handy to feed the responses from an LLM through a computational reasoning engine to grade a few of them.
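As a rough sketch of that idea, with llm_candidates() as a hypothetical stand-in for any LLM API that returns several candidate answers, a symbolic math engine like SymPy can serve as the grader:

    # Grade LLM candidate answers with a symbolic engine (SymPy).
    import sympy as sp

    def llm_candidates(prompt: str) -> list[str]:
        # Hypothetical stand-in; a real version would call an LLM API.
        return ["x = 2", "x = -3", "x = 3"]

    def grade(eq: sp.Eq, candidates: list[str]) -> list[tuple[str, bool]]:
        x = sp.symbols("x")
        graded = []
        for cand in candidates:
            value = sp.sympify(cand.split("=")[1])
            # A candidate passes if it actually satisfies the equation.
            residual = sp.simplify(eq.lhs.subs(x, value) - eq.rhs)
            graded.append((cand, residual == 0))
        return graded

    x = sp.symbols("x")
    eq = sp.Eq(x**2 - x - 6, 0)  # roots: 3 and -2
    print(grade(eq, llm_candidates("solve x^2 - x - 6 = 0")))
    # [('x = 2', False), ('x = -3', False), ('x = 3', True)]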


Case in point: Wells Fargo foreclosure fraud. Case in point: Wells Fargo opening new accounts in customers’ names without direction from, approval by, or notification to said customers.

The primary incentive of a bank is making money, not customer satisfaction, security, or most other things. Other priorities sometimes suffer in the race for profit, at times including regulatory compliance and legality.


In particular, a place I used to work had a plugin for threaded comments in Jira. The specific one we were using slowed things down noticeably even with the DB on the same server, but not by enough to outweigh the improvement in overall usefulness.

Then we tried to make our Jira more reliable by splitting the DB out into a separate clustered DB system in the same data center. The added latency of going through a couple of switches to another system really added up across those extra 1600 or so DB calls per page load.

We ended up doing an emergency reversion to an on-host DB. Later, we figured out what was causing that many queries.
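The back-of-the-envelope math shows why (the round-trip times here are illustrative assumptions, not measurements):

    # Cost of ~1600 sequential DB calls per page load at assumed RTTs.
    CALLS_PER_PAGE = 1600

    local_rtt_ms = 0.1   # same-host socket (assumed)
    remote_rtt_ms = 0.6  # a couple of switch hops away (assumed)

    print(f"on-host: {CALLS_PER_PAGE * local_rtt_ms / 1000:.2f} s per page load")
    print(f"remote:  {CALLS_PER_PAGE * remote_rtt_ms / 1000:.2f} s per page load")
    # on-host: 0.16 s, remote: 0.96 s -- nearly a second added per page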


You're referring to the on-prem Jira. That might suck, sure. My experience has been purely using Jira Cloud and Confluence Cloud, both of which I've found to be snappy and responsive.


Amusingly, exactly the opposite experience here. That said, our on-prem setup is Jira and Confluence integrated with the DB on the same machine, and Apache in front doing additional caching. I imagine, like so many things, it depends on how you set it up...


If you read my previous comment, I said it was largely the specific poorly performing plugin that caused the performance issues with the database queries. I never complained about the overall speed of on-prem Jira. That was the assertion of the person who’s only ever used the cloud version.


My last company switched several teams to Jira Cloud. My current company started with Cloud when we moved over from other tools.

Cloud does not give you the flexibility of your own plugins, your own redundancy design, or your own server upgrades. On top of that, the performance is pretty variable and is far worse than a self-hosted Jira on fast hardware.

It’s interesting to me that a lack of the experience needed to make a comparison somehow qualifies you to criticize the experience I actually have.


You can’t run most Unix/Linux apps without porting.

https://github.com/ReturnInfinity/BareMetal-Examples/blob/ma...

