ahh, it might have been spellcheck then. I turned off all that stuff. In the heat of the moment, maybe I was a bit too angry to do proper root cause analysis :P
Is this about commit signing? Git supports SSH keys for that, and afaik all of the mentioned forges do too (you upload the public key in the settings).
git configuration:
gpg.format = ssh
user.signingkey = /path/to/key.pub
If you need local verification of commit signatures, you also need gpg.ssh.allowedSignersFile to list the known keys (including yours). ssh-add can remember credentials, and security keys are supported too.
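A full setup might look roughly like this (the key path and signers file are placeholders, adjust to your own files):

gpg.format = ssh
user.signingkey = ~/.ssh/id_ed25519.pub
commit.gpgsign = true
gpg.ssh.allowedSignersFile = ~/.ssh/allowed_signers

The allowed_signers file has one entry per line, e.g. "you@example.com ssh-ed25519 AAAA...", and git log --show-signature then verifies commits against it.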
I’m wondering, wouldn’t a default-deny inbound firewall still need hole punching with IPv6? You wouldn’t need STUN to find your global address, but if you use varying ports you’d need to communicate the port first, and you’d also need to time the simultaneous open. So a coordinating party is still needed somewhere. Getting rid of TURN relays (if you’re affected by symmetric NATs) is of course a huge plus.
No, you'd have something like UPnP open a port on the firewall, I imagine. It depends on the setup, which can now be much more flexible, since the firewall can run on the machine itself. You also have the benefit that multiple machines can listen on the same port, so you don't need a proxy any more.
You should use unique local addresses (ULAs, fc00::/7), not link-local addresses (fe80::/10), for this. Choose a random prefix and advertise it in your network (you can use a website like https://www.unique-local-ipv6.com if you want).
This prevents clashing subnets when using a VPN, which sometimes happens with IPv4.
In C# the closest analogue to a C++ destructor would probably be a `using` block. You’d have to remember to write `using` in front of it, but there are static analysers for this. It gets translated to a `try`–`finally` block under the hood, which calls `Dispose` in `finally`.
using (var foo = new Foo())
{
}
// foo.Dispose() gets called here, even if there is an exception
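Under the hood that becomes roughly the following (a sketch of the lowering, not the exact generated code):

var foo = new Foo();
try
{
    // body of the using block
}
finally
{
    // Dispose always runs, even if the body throws
    if (foo != null)
        foo.Dispose();
}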
Or, to avoid nesting:
using var foo = new Foo(); // same, but disposed at the end of the enclosing scope
There is also `await using` in case the cleanup is async (it calls `await foo.DisposeAsync()`).
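For example (assuming `Foo` implements `IAsyncDisposable`):

await using var foo = new Foo();
// ... use foo ...
// await foo.DisposeAsync() runs at the end of the scope, even if an exception is thrown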
I think Java has something similar called try-with-resources.
try (var foo = new Foo()) {
}
// foo.close() is called here.
I like the Java method for things like files because if there's an exception during the close of a file, the regular `IOException` block handles that error the same way as it handles a read or write error.
void bar() {
    try (var f = foo()) {
        doMoreHappyPath(f);
    }
    catch (IOException ex) {
        handleErrors();
    }
}

FileInputStream foo() throws IOException {
    FileInputStream f = openFile();
    doHappyPath(f);
    if (badThing) {
        throw new IOException("Bad thing");
    }
    return f;
}
That said, I think this is a bad practice. Generally speaking, the opening and closing of a resource should happen in the same scope.
Making it non-local is a recipe for an accident.
*EDIT* I've made a mistake while writing this, but I'll leave it up there because it demonstrates my point. The file is left open if a bad thing happens.
In Java, I agree with you that the opening and closing of a resource should happen in the same scope. That's a reasonable rule there, and not following it is a recipe for errors, because Java isn't an RAII language.
In C++ and Rust, that rule doesn't make sense. You can't make the mistake of forgetting to close the file.
That's why I say that Java, Python and C#'s context managers aren't remotely the same. They're useful tools for resource management in their respective languages, just like defer is a useful tool for resource management in Go. They aren't "basically RAII".
> You can't make the mistake of forgetting to close the file.
But you can make a few mistakes that can be hard to see. For example, if you put a mutex in an object you can accidentally keep it held for longer than you expect, since you've now bound the life of the mutex to the life of the object you attached it to. Or you can hold a connection to a DB or a file open for longer than you expected by merely leaking the handle out and not promptly closing it when you're finished with it.
Trying to keep the open and close of a resource in the same scope is an ownership thing. Even in C++ or Rust, I'd consider it not great to leak RAII resources out of the scope that acquired them. When you spread that sort of ownership throughout the code, it becomes hard to conceptualize what the state of the program is at any given location.
As someone coming from RAII to C#, you get used to it, I'd say. You "just" have to think differently. Lean into records and immutable objects whenever you can, and the IDisposable interface ("using") when you can't. It's not perfect, but neither is RAII. I'm still on a learning path, but I'd say I'm more productive in C# than I ever was in C++.
I agree with this. I don't dislike non-RAII languages (even though I do prefer RAII). I was mostly asking a rhetorical question to point out that it really isn't the same at all. As you say, it's not a RAII language, and you have to think differently than when using a RAII language with proper destructors.
Pondering - is there a language similar to C++ (whatever that means, it's huge, but I guess a sprinkle of don't pay for what you don't use and being compiled) which has no raw pointers and such (sacrificing C compatibility) but which is otherwise pretty similar to C++?
Rust is the only one I really know of. It's many things to many people, but to me as a C++ developer, it's a C++ with a better template model, better object lifetime semantics (destructive moves <3) and without all the cruft stemming from C compat and from the 40 years of evolution.
The biggest essential differences between Rust and C++ are probably the borrow checker (sometimes nice, sometimes just annoying, IMO) and the lack of class inheritance hierarchies. But both are RAII languages which compile to native code with a minimal runtime, both have a heavy emphasis on generic programming through templates, both have a "C-style syntax" with braces which makes Rust feel relatively familiar despite its ML influence.
You can move the burden of disposing to the caller (return the disposable object and let the caller put it in a using statement).
In addition, if the caller itself is a long-lived object, it can store the disposable object and implement Dispose itself by delegating. Then the user of the long-lived object can manage it.
> You can move the burden of disposing to the caller (return the disposable object and let the caller put it in a using statement).
That doesn't help. Not if the function that wants to return the disposable object in the happy path also wants to destroy the disposable object in the error path.
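In that case you end up writing the dispose-on-error path by hand, roughly like this (all names are made up):

Foo CreateConfiguredFoo()
{
    var foo = new Foo();     // hypothetical disposable
    try
    {
        foo.Configure();     // may throw
        return foo;          // happy path: the caller takes ownership and disposes it
    }
    catch
    {
        foo.Dispose();       // error path: clean up before rethrowing
        throw;
    }
}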
Location: Germany (UTC +1/+2), EU citizen
Remote: preferred
Willing to relocate: possibly
Technologies: C# and previously C++ and Java, prefer functional style and privately dabble with F# and Haskell. I know SQL, Azure, Docker, high performance computing (Monte Carlo simulations), see also CV
CV: https://stash.ldr.name/wwtbh/rcv-202512-e5ce.pdf
Email: whoshiring-e5ce /-\T ldr D()T name
I work in mathematical finance so a lot of domain knowledge in that area (derivatives, pricing, probability theory).
What if extension headers made it better? We could come up with a protocol consisting solely of a larger Next Header field and chain this pseudo-header with the actual payload whenever the protocol number is > 255. The same idea could also be used in IPv4.
I didn't mean to imply otherwise. But, as you say, this is equally applicable to IPv4 and IPv6. There were a lot of issues solved by IPv6, but "have even more room for non-TCP/UDP transports" wasn't one of them (and didn't need to be, tbqh).
Interesting that they went with a custom MIME type and a custom version header. I would have expected the version to be in the MIME type, but I feel like there is a reason behind this.