
Indeed, then this design will truly have come full circle.

I remember the pitch for Julia early on being MATLAB-like syntax with C-like performance. When I've heard Julia mentioned more recently, the main feature that gets highlighted is multiple dispatch.

https://www.youtube.com/watch?v=kc9HwsxE1OY

I think it seems pretty interesting.


Julia is actually faster than C for some things.


Even with text, parsing content in 2D seems to be a challenge for every LLM I have interacted with. Try getting a chatbot to make an ascii-art circle with a specific radius and you'll see what I mean.
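
Just to make that test concrete, here's roughly the output I'd expect back, sketched in plain shell/awk (the radius is arbitrary; the halved x step only compensates for terminal cells being taller than wide):

    awk -v r=8 'BEGIN {
      for (y = -r; y <= r; y++) {
        line = ""
        # walk columns at half-unit steps so the ring looks round in a terminal
        for (c = -2 * r; c <= 2 * r; c++) {
          x = c / 2
          d = sqrt(x * x + y * y)
          ch = " "
          if (d >= r - 0.5 && d <= r + 0.5) ch = "#"
          line = line ch
        }
        print line
      }
    }'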


I don't really consider ASCII art to be text. It requires a completely different type of reasoning. A blind person can understand text if it's read out loud. A blind person really can't understand ASCII art if it's read out loud.


I think the selfishness here is related to being fine with generating a pile of electronic waste that becomes a problem for everyone else, as long as he can avoid carrying a few ounces extra.

It's hard to recycle electronics, because separating materials that are chemically bonded together is very labor intensive and isn't worth it based on the price of the recovered aluminum, copper, lithium, etc. alone.

It would have to cost more to dispose of a laptop for this to work out financially.


You’ve identified the real problem. This person’s preferences (and yours and mine) are guided by externalities being priced poorly.

If the consumer was responsible for the real cost of disposal and someone said “I don’t care about repairing it” then it wouldn’t be selfish at all.

But it’s extremely hard to do that. Because if you price proper disposal higher you’ll just get improperly disposed stuff.

A tax on the products to account for this is highly regressive. It’s a complicated muddle.


In 15 years: Testing shows that automotive shaped charge glassbreakers can't penetrate the armor on most modern automotive glass.

Was drone-proofing civilian cars a mistake?


I wish there was an active dev community that could patch win10 going forward, but without access to source code for the kernel, perhaps that isn't really viable.

Ideally I would want to use Linux but I also want to play games that are only supported on windows.

Does using WSL help or is an outdated windows base still going to be the weakest link in the security onion?


WSL is unfortunately less than ideal: not only is it rubbish in its own right (it has its own set of issues, like weird networking bugs), it also doesn't mitigate any of the security vulnerabilities/bloatware/telemetry/bugs etc. present in Windows.

But you can always dual-boot between Windows and Linux. Just uninstall all your browsers (to mitigate risk) and other non-essential apps in your Windows install, configure the firewall to block everything except games, and boot into Linux for everything else.
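
For the firewall part, a rough sketch with the built-in netsh tool from an admin prompt (the rule name and path are just placeholders for whatever launcher you actually use, and default-deny outbound will also block things like updates, so adjust to taste):

    REM default-deny inbound and outbound traffic on all firewall profiles
    netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound
    REM then allow only the game or launcher executables you actually need
    netsh advfirewall firewall add rule name="Steam" dir=out action=allow program="C:\Program Files (x86)\Steam\steam.exe" enable=yes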


> (it has its own set of issues like weird networking bugs)

WSL2 is mostly just a virtual machine. Its networking bugs aren't that weird; they are pretty common networking issues you'd see from any other virtual machine configuration. Depending on what you are trying to do, switching WSL2's networking mode to "Mirrored" can be a useful way to fix networking issues by more closely aligning the VM network stack with the host network stack. This is often the fix for VM networking issues in other environments, too. Things like the host's VPN get reused directly instead of the VM needing to run its own copy of the VPN, for instance.
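
For reference, mirrored mode goes in the per-user .wslconfig file; as far as I know it needs a fairly recent Windows 11 build and WSL version, so treat this as a sketch rather than a guaranteed fix:

    # %UserProfile%\.wslconfig, then run `wsl --shutdown` so the setting takes effect
    [wsl2]
    networkingMode=mirrored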


In case you want to hold on to Win 10, you can use their LTSC version + validate with massgrave / your own activation emulator. Win 10 21H2 IoT (LTS) end of life is 13 Jan 2032. You would have 7 years to figure out where to go from there, maybe straight to Windows 12.


WSL is effectively a virtual machine. The weakest link is still going to be the virtual machine host (Windows).


It's an interesting read. I guess the difference between this situation and the ideal case is that he would have been admitted for observation as a precaution in a world where there was plenty of room and staff to take care of even the less obvious emergency cases.

Even in a very well-functioning system, similar cases might happen eventually anyway (but at a much lower frequency). ROC plots come to mind.

https://en.wikipedia.org/wiki/Receiver_operating_characteris...


I wonder what the prognosis was right after the operation. The article makes it sound a bit like this outcome was totally unexpected.

Insulin from pigs can be used by humans, right? But maybe there's more to diabetes than just a new pancreas. Interesting development, in any case. Thanks for sharing.


I think the major worry is that the body rejects the transplant. If that happens, things can turn really bad really fast. The body will attack the kidney and completely wreck it. They'd probably need surgery to remove it.

He'd probably need to go on dialysis if that were to happen. How long he'd survive IDK. I think I've read that people survive around 2 years on dialysis.

> Insulin from pigs can be used by humans, right? But maybe there's more to diabetes than just a new pancreas.

It can be. That was the first developed insulin. I believe it's completely synthetic at this point.


I wonder where platforms like Slack would land in all of this, and how they would go about keeping people from just using their own encryption, e.g. PGP over unencrypted channels. Is public key cryptography too weak to matter?


Slack is not end-to-end encrypted and belongs to a US company. So there is no need for ChatControl there: the US government already has access to everything that is written on Slack.


I believe they are referring to using GPG to encrypt data before putting it into Slack, much like using OTR out of band. In that case, all the data shared between those using GPG or OTR would only be accessible to those with the right out-of-band keys. There are probably not a lot of people doing this, or not enough for governments to care. I do this in IRC using irssi-otr [1].

If that ever became illegal because encryption, then groups of people could simply use scripts or addons to pipe messages through different types of encoding to make AI fuzzy searches harder. They can try to detect these chains of encoding, but it will be CPU-expensive to do every combination at scale, given there are literally thousands of forms of encoding that could be chained in any order and number.

Mon -> base64 -> base2048 [2]

Tue -> base2048 -> base131072 [3]

...and so on.

[1] - https://irssi.org/documentation/help/otr/

[2] - https://github.com/qntm/base2048

[3] - https://github.com/qntm/base131072
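
As a rough sketch of the idea using nothing but coreutils (base2048/base131072 would need the tools linked above; here base64 and base32 stand in, and the chain plus its order is what gets shared out of band):

    msg=$(echo "meet at noon" | base64 | base32)   # encode: plaintext -> base64 -> base32
    echo "$msg" | base32 -d | base64 -d            # decode: run the inverse chain in reverse order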


> I believe they are referring to using GPG to encrypt data before putting it into Slack

To a good approximation, nobody does that.

And anyone who is capable of communicating over PGP won't be covered by ChatControl anyway. They can keep using PGP over whatever they want, or just compile Signal from sources.

> If that ever became illegal because encryption then groups of people could simply use scripts or addons to pipe through different types of encoding to make AI fuzzy searches harder.

I don't think that this makes any sense at all. This is some kind of poor encryption. Either you honour the law and you send your messages in plaintext, or you don't and you use proper encryption. There is nothing worth anything in-between.

If encryption is illegal, those who really need it can still use steganography.


> This is some kind of poor encryption.

In fact it is exactly zero encryption, both technically and legally. By using encoding I would not be breaking the law at all, assuming encryption itself is actually outlawed. Encryption is mathematical obfuscation.

This is only useful for text to/from the server, of course. Local storage is still being scanned, which means one still has to use a device that does not have local scanning if the files are sensitive (such as financial documents), or those files would also have to be encoded. Encoding may not have value to some people, but it has value to me. Obviously, if I am trying to hide something that is highly sensitive, like a master password database, then I would probably do something a tad bit stronger, maybe 64 to 256 chains of encoding. That is still sufficient to break fuzzy scanning.

Here's an easy one:

    MDExMTAxMTEwMDExMDAwMDAwMT
    EwMDAwMDExMTAxMDAwMTExMDEx
    MTAwMTEwMDAwMDAxMTAwMDAw
    MTExMDEwMDAwMTAwMDAwMDEwMA
    oxMDAxMDAxMDAwMDAwMTEwMDAw
    MTAxMTAxMTAxMDAxMDAwMDAw
    MTEwMDAwMTAwMTAwMDAwMDExMT
    AwMTEwMTEwMTAwMTAxMTAxMTAw
    CjAxMTAxMTAwMDExMTEwMDEw
    MDEwMDAwMDAxMTEwMTExMDExMD
    ExMTEwMTEwMTEwMTAxMTAwMDEw
    MDExMDAwMDEwMTExMDEwMAo=
Zero encryption, but it might take people a while to figure this out. The commands I used are installed by default on most Linux distributions. If I wanted to get really crazy, I would add different levels and types of compression in the middle of the chain.


> In fact it is exactly zero encryption both technically and legally.

I am not a lawyer (are you?), but technically you're wrong.

"In cryptography, encryption (more specifically, encoding) is the process of transforming information in a way that, ideally, only authorized parties can decode."

A caesar cipher is encryption. I don't see why chaining encodings wouldn't be. Technically and legally.


That's for lawyers and judges to work out. I will not concern myself with it.

Here is a legal definition: [1]

> Making text or code unreadable to secure it for transmission or transport. Both the sender, the encryptor, and the receiver, the decryptor, have the means to translate text or code from source language to unreadable, undecipherable gibberish and back. The devices used have a key that is unique to the sender and receiver.

They used the word key. Our court case will have to decide if chained commands are legally equivalent to a "key". It's not even clear to me that this is the universal legal definition. Time and grey wigs will tell. If they decide chained commands are a key, that could set an interesting precedent for cases involving protecting user data, banking, military secrets, etc...

[1] - https://thelawdictionary.org/encryption/


I get your point, we disagree.


If you really want to use encryption under a state where it's forbidden and communications are monitored, you'd rather hide your encrypted messages inside cat pictures and TikTok videos, because blatant obfuscation might trigger warnings and draw attention.

In the end it's not about making encryption technically impossible but illegal, and if you use it you'll be prosecuted.


Me personally, I will use chained encoding because technically and legally that is not encryption. I am fine with drawing attention. If my adversaries wish to spend a gazillion mega-bucks to try to win the arms race of decoding my chained encoding to see my mid-wit comments and pictures of a moose, then I am doing a good job. When they change the laws to prevent encoding, then we move on to another technique. There are nearly infinite ways to limit communication to a group of people and evade fuzzy scans.


Respectfully, you completely miss the point. You personally, you will still be able to use proper encryption.

It's mainstream platforms who won't be. Those platforms will be mandated to scan their own communications.

There is absolutely no reason to do this weird encoding stuff. Nobody says that it is illegal to encrypt stuff properly.


Well in that case I will use my silly chained encoding and their fuzzy scans of files on the mainstream platforms will have to figure out what to do with it.

For what it's worth I myself do not use these platforms. I just want to get people thinking about mitigating options. I use my own self hosted forums, chat servers, sftp servers, chan servers, voice chat servers and so on. Even then it can be useful to obfuscate text and files in the event someone is using a fondle-slab. I try to discourage fondle slabs.


> I just want to get people thinking about mitigating options.

You should refrain from making dangerous suggestions, I think. Some people may actually need proper encryption.


My suggestion has always been to use PGP or OTR for individual messages or individual files, and dm-crypt plain mode with a random cipher/hash/mode combo for filesystems, using a 240 to 480 character passphrase which can also be layered and chained.

This is just an alternative if people believe they are not permitted to encrypt something. The threat vector in this topic is fuzzy scanning, local and remote; ChatControl uses fuzzy scanning. Encoding can do just as good a job of mitigating fuzzy scans as any level of encryption. Even manual intervention should take a lot of effort, just as much as brute-forcing a simple encryption password. If we are being honest, encrypted files are most often protected by a weak password, the cipher/hash are already disclosed, and the key space is usually small. LUKS, for example, discloses the cipher, hash and mode, making brute force just a factor of compute power. If an app is chain-encoding and the chain is shared out of band, I suspect it will take orders of magnitude more compute time to cycle through every possible combination of encoding and compression.
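
For the dm-crypt plain suggestion above, a minimal sketch (the device and parameters are placeholders; plain mode writes no header, so the exact cipher/hash/key-size combo has to be remembered along with the passphrase or the data is gone):

    # map the partition with dm-crypt plain mode; no LUKS header is written to disk
    cryptsetup open --type plain --cipher aes-xts-plain64 --key-size 512 --hash sha512 /dev/sdX1 secretfs
    mkfs.ext4 /dev/mapper/secretfs                       # first use only: create a filesystem on the mapping
    mount /dev/mapper/secretfs /mnt/secret
    umount /mnt/secret && cryptsetup close secretfs      # tear down when finished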

For fun, has anyone decoded my simple message in the thread?


I fear that when it becomes illegal not to have a remote scanner on my computer broadcasting file contents, invoking GPG will be of much less use.


You are already looking for workarounds like people struggling under authoritarian regimes.

This is completely unacceptable.


This legislation makes every digital communication open to being policed at the source. It is far too overreaching and too ripe for abuse.


It depends on how intelligence is defined. In the traditional AI sense it is usually "doing things that, when done by people, would be thought of as requiring intelligence". So you get things like planning, forecasting, and interpreting texts falling under "AI", even though you might be using a combinatorial solver for one, curve fitting for another, and a trained language model for the third. People say that this muddies the definition of AI, but it doesn't really need to be the case.

Sentience as in having some form of self-awareness, identity, personal goals, rankings of future outcomes and current states, a sense that things have "meaning" isn't part of the definition. Some argue that this lack of experience about what something feels like (I think this might be termed "qualia" but I'm not sure) is why artificial intelligence shouldn't be considered intelligence at all.

