Hacker News: jugglinmike's comments

Great catch. I was getting ready to mention the theoretical risk of asking an LLM to be your arbiter of truth; it didn't even occur to me to check the chosen example for correctness. In a way, this blog post is a useful illustration not just of the hazards of LLMs, but also of our collective tendency to eschew verity for novelty.


> Great catch. I was getting ready to mention the theoretical risk of asking an LLM to be your arbiter of truth; it didn't even occur to me to check the chosen example for correctness.

It's beyond parody at this point. Shit just doesn't work, but this fundamental flaw of LLMs is just waved away or simply not acknowledged at all!

You have an algorithm that rewrites textA to textB (so nice), where textB potentially has no relation to textA (oh no). Were it anything else, this would mean "you don't have an algorithm to rewrite textA to textB", but for gen AI? Apparently this is not a fatal flaw; it's not even a flaw at all!

I should also note that there is no indication that this fundamental flaw can be corrected.


> the theoretical risk of asking an LLM to be your arbiter of truth

"Theoretical"? I think you misspelled "ubiquitous".


That part was straightforward thanks to the "d3-force" plugin for D3.js:

https://github.com/d3/d3-force

I probably ought to spend a little more time tuning the parameters, though.
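
For anyone curious, "tuning the parameters" mostly means fiddling with knobs like these. This is a rough sketch only: the strengths and distances are placeholders rather than what the visualization actually uses, and it assumes nodes, links, width, and height are already defined.

    // Illustrative d3-force setup; the numbers here are placeholders.
    const simulation = d3.forceSimulation(nodes)
        .force("charge", d3.forceManyBody().strength(-80))      // how hard nodes repel
        .force("link", d3.forceLink(links).distance(40))        // target edge length
        .force("center", d3.forceCenter(width / 2, height / 2)) // keep the graph centered
        .force("collide", d3.forceCollide(12));                 // minimum node spacing

    simulation.on("tick", () => {
      // reposition the SVG circles and lines from each node's x/y here
    });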


> Maybe there is a way to build a new web, a new kind of social media using a hash graph to implement a decentralized web of trust, something that can allow content verification without forcing everyone to sacrifice their right to remain anonymous online.

Comparisons to OpenPGP are likely to make anyone cringe, but if you're willing to suspend disbelief about the usability issues, it seems like such an apt metaphor, where signing a key vouches for the owner's humanity rather than their identity. I can imagine websites integrating with my browser to attach a "humanity key" to the content I post, and my browser maintaining my collection of keys and my preferences for divulging them.
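
To make the browser-integration idea a little more concrete, here's a minimal sketch of the signing and verification half using the Web Crypto API. Everything else (key distribution, the vouching graph, how sites request a signature) is hand-waved, and the function names are made up.

    // Hypothetical "humanity key": an ECDSA key pair the browser holds and
    // uses to sign content before a site posts it. Verification would check
    // the signature against a public key vouched for in the web of trust.
    async function createHumanityKey() {
      return crypto.subtle.generateKey(
        { name: "ECDSA", namedCurve: "P-256" }, false, ["sign", "verify"]);
    }

    async function signPost(text, privateKey) {
      const data = new TextEncoder().encode(text);
      const signature = await crypto.subtle.sign(
        { name: "ECDSA", hash: "SHA-256" }, privateKey, data);
      return { text, signature: new Uint8Array(signature) };
    }

    async function verifyPost(post, publicKey) {
      const data = new TextEncoder().encode(post.text);
      return crypto.subtle.verify(
        { name: "ECDSA", hash: "SHA-256" }, publicKey, post.signature, data);
    }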

Granted, pseudonymity is not the same as anonymity. Maybe if the meaning (and lifespan?) behind these keys was constrained, then they could be sufficiently disposable to approximate anonymity.

This system is either fundamentally flawed or already implemented, but I don't know how to determine which. Can anyone here share writing along these lines?


"Proof of humanity" is an active area of research, and there are several attempts at it out there. One that's worth looking at is BrightID[0] (although some people might be put off by it being blockchain-adjacent, in that it uses a DAO as part of its consensus-building system).

As for the difference between pseudonymity and anonymity, it's worth noting that once everyone has a cryptographic identity, we can start layering on clever things like zero-knowledge proofs, for example, which would allow people to issue themselves new pseudonyms that carry the same level of trust as their core identity, without ever revealing what that core identity is (or which other accounts vouched for the core identity to give it that level of trust).

[0] https://www.brightid.org/


The real irony comes from the author's claims:

> Unlike reading books and long magazine articles (which require thinking), we can swallow limitless quantities of news flashes, which are bright-coloured candies for the mind.

...and the manner of publication:

> This is an edited extract from an essay first published at dobelli.com. The Art of Thinking Clearly: Better Thinking, Better Decisions by Rolf Dobelli is published by Sceptre, £9.99. Buy it for £7.99 at guardianbookshop.co.uk

But there's a good case to be made for knowing your audience. In that sense, this version is actually much more likely to reach those who might be influenced by it.



This is good. Thanks :D I'm thinking along the lines of what jQuery UI did: offering the ability to customize a version of jQuery you can download from the site.


Based on the full quote (below), it seems that he was referring to Valve's offering only:

"Well certainly our hardware will be a very controlled environment," he said. "If you want more flexibility, you can always buy a more general purpose PC. [...]"


Two points that I haven't seen covered yet:

1. Waste. If some component on your motherboard goes, you're on the hook for a new CPU (and vice versa). This seems tremendously wasteful. Maybe bigger repair shops will support mail-in refurbishing? Will people take advantage of that? Or just buy new for convenience?

2. Competition. Smaller motherboard vendors won't be able to sell direct anymore. I'm wondering if anyone can comment on how bad of a thing this is.


I don't think I understand #2. Wouldn't the small vendor just build and sell their boards pre-populated with an Intel CPU? They'd have to carry more inventory, but not a crazy amount more.


Yeah, that's a good point. This introduces a new inventory consideration, namely demand for particular CPU models (they'll have to determine the best distribution of processor models across each product). Probably not too big a deal.


The OP doesn't mention expected client count, but the number of concurrent connections is a crucial factor that these tests overlook.

I'm currently doing some Socket.io stress testing, and it's clear that XHR-polling is significantly harder on the CPU for the reasons you've mentioned. This detail is lost in the noise for single-client tests, but it will become increasingly relevant as more clients connect.
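
For what it's worth, the harness is basically just "open N clients on one transport and watch server CPU", something like the sketch below. It assumes a socket.io-client v4-style API; the client count, server URL, and event name are arbitrary, and older 0.9-era clients spelled the transport "xhr-polling" rather than "polling".

    // Open NUM_CLIENTS connections over a single transport and emit
    // periodically; watch server CPU while it runs. Numbers are arbitrary.
    const { io } = require("socket.io-client");

    const NUM_CLIENTS = 500;
    const TRANSPORT = "polling"; // or "websocket"

    for (let i = 0; i < NUM_CLIENTS; i++) {
      const socket = io("http://localhost:3000", { transports: [TRANSPORT] });
      socket.on("connect", () => {
        setInterval(() => socket.emit("stress-ping", Date.now()), 1000);
      });
    }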


"Showing up every day is very interesting because it’s the least visible indicator of success. No successful person tallies how much they show up every day. Well except maybe this guy."

I really expected this link to point to Jerry Seinfeld's productivity secret. The story has made the rounds at this point, but in case anyone missed it:

http://lifehacker.com/281626/jerry-seinfelds-productivity-se...


There was a good discussion on this kind of service a while back--check out "Javascript Cryptography Considered Harmful"

HN: http://news.ycombinator.com/item?id=2935220
Direct link: http://www.matasano.com/articles/javascript-cryptography/


If the messaging was peer-to-peer, the objections in that post wouldn't apply quite as much. Then you would connect to the NoPlainText server to download the JS client by HTTPS, and use it to encrypt the direct connections to your messaging partners. I think JS crypto could still sensibly be used for this kind of "separation of trust" problem.

For instance, I am considering a service at the moment which would involve people uploading confidential information. The uploads will be fairly large, so a lot of bandwidth will be required (optimistically assuming it gets traction).

One architecture I am considering is a small dedicated HTTPS server which provides a self-contained webpage-plus-JS program to encrypt the upload and send it for storage on Amazon S3. Then I will pull the results off Amazon and decrypt them on a machine which is not even connected to the network. The advantage to this architecture is that it will scale arbitrarily but require me to secure only relatively modest dedicated resources, despite being used for transmission of confidential information. Because it uses a dedicated HTTPS server serving a self-contained page doing all the crypto, it avoids tptacek's objections to JS crypto in the browser (e.g., the server can provide the random seed in the JS itself, HTTPS prevents MITM attacks, etc.).
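
A rough sketch of what the browser side of that could look like, assuming the HTTPS-served page embeds the server's RSA public key and the browser supports the Web Crypto API. uploadToS3() is just a stand-in for however the pre-signed upload ends up working, not a real helper.

    // Hybrid encryption in the page before anything leaves the browser.
    // serverPublicKey: an RSA-OAEP CryptoKey imported from the embedded key.
    async function encryptAndUpload(fileBuffer, serverPublicKey) {
      // Fresh AES key and IV per upload
      const aesKey = await crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 }, true, ["encrypt"]);
      const iv = crypto.getRandomValues(new Uint8Array(12));

      // Encrypt the payload with AES, then wrap the AES key with RSA
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv }, aesKey, fileBuffer);
      const rawAesKey = await crypto.subtle.exportKey("raw", aesKey);
      const wrappedKey = await crypto.subtle.encrypt(
        { name: "RSA-OAEP" }, serverPublicKey, rawAesKey);

      // Only ciphertext ever hits S3; decryption happens offline with the
      // corresponding RSA private key.
      return uploadToS3({ ciphertext, wrappedKey, iv });
    }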

