
Why would you be running sudo in production? A production environment should usually be set up properly with explicit roles and normal access control.

Sudo is kind of a UX tool for user sessions where the user fundamentally can do things that require admin/root privileges, but they don't trust themselves not to fat-finger things, so we add some friction. That friction is not really a security layer; it's a UX layer against fat-fingering.

I know there is more to sudo if you really go deep on it, but the above is what 99+% of users are doing with it. If you're using sudo as a sort of framework for building setuid-like tooling, then this does not apply to you.


> A production environment should usually be set up properly with explicit roles and normal access control.

… and sudo is a common tool for doing exactly that, so you can say, for example, that members of this group can restart a specific service or trigger a task as a service user, without otherwise giving them root.

Yes, there are many other ways to accomplish that goal but it seems odd to criticize a tool being used for its original purpose.


PSA for anyone reading this, you should probably use polkit instead of sudo if you just want to grant systemd-related permissions, like restarting a service, to an unprivileged user.

It's roughly the same complexity (one drop-in file) to implement.
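As a sketch of what that drop-in looks like (the group and unit names here are made up for illustration), a polkit rules file is a small JavaScript snippet:

```javascript
// /etc/polkit-1/rules.d/50-restart-myapp.rules
// Hypothetical names: group "deployers", unit "myapp.service".
// Members of "deployers" may restart that one unit, and nothing else.
polkit.addRule(function (action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        action.lookup("unit") == "myapp.service" &&
        action.lookup("verb") == "restart" &&
        subject.isInGroup("deployers")) {
        return polkit.Result.YES;
    }
});
```

A member of the group can then run `systemctl restart myapp.service` directly, with no sudo involved.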


I’d broaden that slightly to say you should try to have as few mechanisms for elevating privileges as possible: if you have tooling around sudo, dzdo, etc. for PAM, auditing, and the like, I wouldn’t lightly add a third tool until you were confident you had parity on that side.


Privilege escalation (superuser capabilities) and RBAC ought to be viewed differently, IMO.

There's a place for true superusers, such as auditing, where no stone should be too heavy. But mostly for securing systems, we want RBAC, and sudo is abused as a pile-driver where only a mallet was needed. Polkit is more of a proper policy toolkit.


That’s a valid choice. I’m just saying that you should ideally pick one tool for that class of work. For example, if you support one tool for Mac and Linux users, that’s probably worth more than supporting two similar tools, even if one of them is better.


What's the benefit?


You can acquire permissions on demand, scoped more tightly.


> Why would you be running sudo in production? A production environment should usually be set up properly with explicit roles and normal access control.

And doing cross-role actions may be part of that production environment.

You could configure an ACME client to run as a service account that talks to an ACME server (like Let's Encrypt), writes the challenge files in /var/www, and then the resulting new certificate in /etc/certs. But you still need to restart (or at least reload) the web/IMAP/SMTP server to pick up the updated certs.

But do you want the ACME client to run as the same service user as the web server? You can add sudo so that the ACME service account can tell the web service account/web server to do a reload.
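As a sketch (the account and unit names here are hypothetical), that sudo grant can be a single sudoers drop-in:

```shell
# /etc/sudoers.d/acme-reload -- hypothetical names throughout.
# Let the "acme" service account reload nginx, and nothing else.
acme ALL=(root) NOPASSWD: /usr/bin/systemctl reload nginx.service
```

Edit it with `visudo -f /etc/sudoers.d/acme-reload` so a syntax error can't lock you out.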


In your example certbot is given permission to write to /var/www/.well-known/acme-challenge and to write certs somewhere. Your web server also has permission to read those files.

There is no need for the acme client and web server to run as the same user. For reloads the certbot user can be given permission to just invoke the reload command / signal directly. There does not need to be sudo in between them.


Almost everyone is running sudo in production.


The fact that this is a reply to the content in the parent just demonstrates the complete lack of social skills or empathy many in this community are known for.


Auditing.


bro i just want to apt install gimp :(


Can you share how the scripts work? That seems to be the most interesting part, but is omitted from the article. The only technical details are UART + an opto-coupler.

> Both devices run custom scripts designed to handle data transmission reliably rather than quickly. This approach limits throughput, but reliability is paramount for critical monitoring, where losing data is unacceptable. The scripts are finely tuned to ensure that every log entry is transmitted securely without risk of cross-contamination between networks.


Yep, they are pretty simple. On one end you have a python script that listens for syslog messages; when it gets an interesting one it converts it into a binary string and sends it out over GPIO14.

This goes through an opto-coupler.

On the other end there is a python script listening on GPIO16; it takes a string of binary data, decodes it, checks it's valid, then creates a tagged syslog message. Syslog is configured to forward everything on to a central location for folks to monitor.

Hope that makes sense.
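A minimal sketch of the encode/decode half of such a bridge (the GPIO and syslog plumbing is omitted, and the framing/checksum details here are my own assumptions, not the author's actual protocol):

```python
from typing import Optional

def encode(msg: str) -> str:
    """Frame a log line as a bit string with a 1-byte XOR checksum."""
    data = msg.encode("utf-8")
    checksum = 0
    for b in data:
        checksum ^= b
    return "".join(f"{b:08b}" for b in data + bytes([checksum]))

def decode(bits: str) -> Optional[str]:
    """Reverse of encode(); returns None if the frame is malformed."""
    if not bits or len(bits) % 8 != 0:
        return None
    payload = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    data, checksum = payload[:-1], payload[-1]
    calc = 0
    for b in data:
        calc ^= b
    if calc != checksum:
        return None
    return data.decode("utf-8", errors="replace")
```

The sender would clock each bit out on GPIO14 at a fixed rate; the receiver samples GPIO16 and reassembles the bit string before handing it to `decode`.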


I'm not sure which dimensions you're talking about, but in terms of bed size the F-150 has been very consistent over the years (though I think Crew Cabs, which have always existed, have become more popular). The Ranger still cannot fit a full-sized sheet of plywood flat in the bed.

Quick research: the new Ranger's bed size has only increased 0.9" (width) relative to the 1990 version. Bed length seems to be the same.


Everything except the bed size has grown enormously on modern consumer trucks. Nowadays truck beds look proportionally tiny compared to trucks from 20-30 years ago when the bed made up a much larger percent of the vehicle.

Ford knows their market. Most F-150 buyers aren't looking for a functional truck, they want a comfortable commuter car that looks like a cool truck.


You are looking at the wrong thing, look at overall vehicle dimensions.

1995 Ford Ranger Extended Cab - 3200+ lbs - 198" long - 69" wide - 6' bed

2023 Ford Ranger Super Cab (the last year they had a 2 door) - 4100+ lbs - 210" long - 73" wide - 6' bed

1000 lbs heavier, a foot longer, a few extra inches wide, with the same size bed.

https://www.edmunds.com/ford/ranger/2023/supercab/features-s... https://www.edmunds.com/ford/ranger/1995/extended-cab/st-754...


The 1998 Ranger was the right size: a 6 ft bed while not being monstrously sized.

The new rangers have the height of the old F150 which makes their beds look just weird.


The standard size f150 bed can't fit a standard 4'x8' sheet of plywood


It never could. An 8 foot bed has always been optional.


Can confirm. I have a 2020 F150 with a standard box. No way am I fitting a sheet of plywood flat.


I was considering getting a Rivian and decided that in fact I would probably not allow the 24 year old dude at my local construction supply co to use a skid steer to drop a load of gravel into the bed of my $75k+ electric vehicle.

So instead I got a used Ford F150 (gas) and when the skid steer guy drops gravel into the bed I feel fine.


There is a lot to be said for that perspective. I wonder if any PMs have considered making the bed of the truck a FRU (field-replaceable unit) that you can swap out at home.


The bed of more traditional pickups like the F-150 can be swapped out in a couple of hours by one or two dudes with a lift and an impact wrench. Heck, you can buy blank F-250s without a bed at all.


Hell, a few hours with that impact wrench and you can also lift the cab right off the frame, too.


A modular open spec for attaching beds to trucks might be useful.

What are some possible attachments?

4-6.5' Truck Bed, Trailer, Camper, Mobile Workshop / Trade Rig, Car hauler, Bed with rack and storage and 270° awning

What all needs to be connected?

Mechanical attachment, 4WD/AWD/RWD axle and differential, CAN bus, backup cam, lights

Public link: Open Truck Bed Standard Proposal https://gemini.google.com/share/1e70ae398d26 :

"Kinetic-Link" (K-Link) open spec:

> The proposed Active-AWD Trade Platform utilizes a Through-the-Road (TTR) Hybrid architecture to decouple the mechanical drivetrain while maintaining synchronized propulsion via a Vehicle Control Unit (VCU). By integrating high-topology Axial Flux or Radial-Axial (RAX) in-wheel motors, the system achieves exceptional torque density within the limited packaging of a trailer wheel well. The control strategy relies on Zero-Force Emulation, utilizing a bi-directional load cell at the hitch to modulate torque output via a PID loop, ensuring the module remains neutrally buoyant to the tow vehicle during steady-state cruising. In low-traction environments, the system transitions to Virtual AWD, employing Torque Vectoring to mitigate sway and Regenerative Braking to prevent jackknifing, effectively acting as an intelligent e-Axle retrofit. This configuration leverages 400V/800V DC architecture for rapid energy discharge and V2L (Vehicle-to-Load) site power, solving the unsprung weight damping challenges through advanced suspension geometry while eliminating the parasitic drag of traditional passive towing.

A modular truck bed could have Through-the-road TTR AWD (given a better VCU) and e.g. hub motors or an axle motor.


Anything can be field replaceable if your field has enough tooling. (j/k)


There's always a chance the new Scout will fit that model. I'm not getting my hopes up though. It seems every company that releases an EV truck says they'll sell it for $30-40k and then suddenly it's $80k+.


Pro-tip: Rivians have a class V tow capability. Rent a dump trailer and let it take the damage. Both your wallet and back will thank you.


Why do you do this?


Because I prefer monospaced pixel fonts, though the underlying engine requires a TTF (and now OTF) font, which is a vector format, in order to render.


The only disappointing aspect of the Iocaine maze is that it is not a literal maze. There should be a narrow, treacherous path through the interconnected web of content that lets you finally escape after many false starts.


I guess you had a bad experience, but this hasn’t been an issue for me using it for many years now.


Lol is this meant to be an ironic comment?


If this is a concern, pass your UUIDv7 ID through a block cipher in ECB mode (equivalently, CBC with a 0 IV on a single block): 128-bit UUID, 128-bit AES block. It's an easy, near-zero-overhead way to scramble and unscramble IDs as they go in/out of your application.

There is no need to put the privacy-preserving ID in a database index when you can calculate the mapping on the fly.
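To illustrate the shape of the idea without pulling in an AES dependency, here's a toy keyed 128-bit permutation built as a 4-round Feistel network over HMAC-SHA256. This is a stand-in only; in practice you'd use AES on the single 16-byte block, and this toy construction makes no serious security claim:

```python
import hashlib
import hmac

MASK64 = 2**64 - 1

def _round(key: bytes, i: int, half: int) -> int:
    """Round function: 64 pseudo-random bits from key, round index, and input."""
    mac = hmac.new(key, bytes([i]) + half.to_bytes(8, "big"), hashlib.sha256)
    return int.from_bytes(mac.digest()[:8], "big")

def scramble(key: bytes, uid: int) -> int:
    """Bijectively map one 128-bit ID to another (Feistel forward pass)."""
    left, right = uid >> 64, uid & MASK64
    for i in range(4):
        left, right = right, left ^ _round(key, i, right)
    return (left << 64) | right

def unscramble(key: bytes, uid: int) -> int:
    """Invert scramble() by running the rounds in reverse."""
    left, right = uid >> 64, uid & MASK64
    for i in reversed(range(4)):
        left, right = right ^ _round(key, i, left), left
    return (left << 64) | right
```

The internal UUIDv7 stays in your indexes; only `scramble(key, id)` ever leaves the application.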


This is, strictly speaking, an improvement, but not by much. You can't change the cipher key because your downstream users are already relying on the old-key-scrambled IDs, and you lose all the benefits of scrambling as soon as the key is leaked. You could tag your IDs with a "key version" to change the key for newly generated IDs, but then that "key version" itself constitutes an information leak of sorts.


Why do you need forward secrecy?


I edited that out of my post, as I'm not sure it's the correct term to use, but the problem remains. If the key leaks, then all IDs scrambled with that key can be de-scrambled, and you're back to square one.


Then that's just worse and more complicated than storing a 64-bit bigint + a 128-bit UUIDv4. Your salt (AES block) is larger than a bigint. Unless you're talking about a fixed key for the AES (is that a thing?), but then that's peppering, which is security through obscurity.


Uhh... What? You just use AES with a fixed key and IV in block mode.

You put in 128 bits, you get out 128 bits. The encryption is strong, so the clients won't be able to infer anything from it, and your backend can still get all the advantages of sequential IDs.

You also can future-proof yourself by reserving a few bits from the UUID for the version number (using cycle-walking).


I still feel like calling something like uuid.v4() is easier and less cognitively complex.


There are advantages to monotonically increasing UUIDs: they work better with B-trees and relational databases.


I just meant having UUIDv7 internally, and UUIDv4 externally if date leakage is a concern (both on the same object).

UUIDv7 still works great in distributed systems and has algorithmic advantages as you have mentioned.


This package also seems to just have a misbehaving github action that is in a loop.


Hmm yeah, I decided that one counts because the new packages have (slightly) different content, although it might be the case that the changes are junk/pointless anyway.


This is a lot of fuss when you can get a batch update to stay within a few minutes of latency. You only have this problem if you are very insistent on both (1) very near real-time, and (2) Iceberg. And you can't go down this path if you require transactional queries.

I think most people who need very near real-time queries also tend to need them to be transactional. The use case where you can accept inconsistent reads but something will break if you're 3 minutes out of date is very rare.


What do you mean “transactional”? Do you mean that a reader never sees a state in Iceberg that is not consistent with a state that could have been seen in a successful transaction in Postgres? If so, that seems fairly straightforward: have the CDC process read from Postgres in a transaction and write to Iceberg in a transaction. (And never do anything in the Postgres transaction that could cause it to fail to commit.)

But the 3 minute thing seems somewhat immaterial to me. If I have a table with one billion rows, and I do an every-three-minute batch job that needs to sync an average of one modified row to Iceberg, that job still needs to write the correct deletion record to Iceberg. If there’s no index, then either the job writes a delete-by-key or the job needs to scan 1B Iceberg rows. Sure, that’s doable in 3 minutes, but it’s far from free.


> This is a lot of fuss when you can get a batch update to stay within a few minutes of latency.

Replying again to add: cost. Just because you can do a batch update every few minutes by doing a full scan of the primary key column of your Iceberg table and joining against your list of modified or deleted primary keys does not mean you should. That table scan costs actual money if the Iceberg table is hosted somewhere like AWS or uses a provider like Databricks, and running a full column scan every three minutes could be quite pricey.

