Hacker News | mercer's comments

ngl


in some ways Elixir is a child of Clojure!

> JOSE: Yeah, so what happened is that it was the old concurrency story in which the Clojure audience is going to be really, really familiar. I’ve learned a lot also from Clojure because, at the time I was thinking about Elixir, Clojure was already around. I like to say it’s one of the top three influences in Elixir, but anyway it tells this whole story about concurrency, right?

https://www.cognitect.com/cognicast/120


I work with Elixir daily and I would concur. Elixir's semantics line up nearly 1:1 with the Clojure code I used to write a few years ago. It's basically what you'd get if you replaced the Lisp brackets with Ruby-like syntax. The end result is a language that is much easier to read and write on the daily, with the disadvantage of making macros more difficult. I would argue that they should be difficult, since you should avoid using them until absolutely necessary. Lisps, on the other hand, practically beg you to use macros, as the entire language is optimized for their use.


Wouldn't that still add a lot of value, where the person in the loop (sadly, usually) becomes little more than the verifier, but can process a lot more work?

Anecdotally what I'm hearing is that this is pretty much how LLMs are helping programmers get more done, including the work being less enjoyable because it involves more verification and rubber-stamping.

For the business owner, it doesn't matter that the nature of the work has changed, as long as that one person can get more work done. Even worse, the business owner probably doesn't care as much about the quality of the resulting work, as long as it works.

I'm reminded of how much of my work has involved implementing solutions that took less careful thought, where even when I outlined the drawbacks, the owner wanted it done the quick way. And when the problems arose, often quite a bit later, it was as if they had never made that initial decision in the first place.

For my personal tinkering, I've all but defaulted to the LLMs returning suggested actions at logical points in the workflow, leaving me to confirm or cancel whatever they came up with. This definitely still makes the process faster, just not as magically automatic.
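The confirm-or-cancel pattern is easy to sketch. This is just an illustration, not a real API: `propose_actions` here is a hypothetical stand-in for whatever the model returns at a workflow checkpoint, and `confirm` is whatever prompt you put in front of the human.

```python
def propose_actions(context):
    # Hypothetical stub: a real version would call a model and
    # parse its suggested actions out of the response.
    return [f"rename {context} -> {context}_v2"]

def run_with_confirmation(context, confirm):
    """Apply only the suggestions the human explicitly approves."""
    applied = []
    for action in propose_actions(context):
        if confirm(action):  # human decides; nothing runs automatically
            applied.append(action)
    return applied

# Auto-approving callback, for demonstration only.
print(run_with_confirmation("report", lambda action: True))
```

The point of the shape is that the model never executes anything itself; it only emits proposals, and the loop is inert without a yes from the person.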


That's a brilliant short story right there!


!


I have the same experience, but with Snow Crash...


Also don't forget the 'memory' feature. As LLM providers get better at tailoring the LLM to the user, and probably obfuscating the details on how this user-specific memory works, it will be harder to switch to another provider.


I'm somewhat skeptical, but also honestly curious: what game-changing applications came out of the blockchain/web3 hype craze?


The area I was briefly interested in was in the fintech/lending space. Not an expert though. I saw some cool ideas out of it at that time and interviewed several rounds with a company in that space.


One of the few interesting ones is Polymarket, the betting platform.


Drug and firearm escrow services.


That and crypto-blackmailing.

I am convinced that the only use case for cryptocurrencies is criminal activities.


I get the impression after using language models for quite a while that perhaps the one thing that is riskiest to anthropomorphise is the conversational UI that has become the default for many people.

A lot of the issues I'd have when 'pretending' to have a conversation are much less pronounced when I either keep things to a single Q/A pairing, or at the very least heavily edit/prune the conversation history. Based on my understanding of LLMs, this seems to make sense even for the models that are trained for conversational interfaces.

So, for example, an exchange with multiple messages where at the end I ask the LLM to double-check the conversation and correct 'hallucinations' works less well than asking for a thorough summary at the end and feeding that into a new prompt/conversation. Repeating those falsities, or 'building' on them with subsequent messages, seems to give them a stronger 'presence', which in turn can undermine the corrections.

I haven't tested any of this thoroughly, but at least with code I've definitely noticed how a wrong piece of code can 'infect' the conversation.
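The summarize-then-restart move can be sketched in a few lines. To keep this self-contained, `ask` below is a hypothetical stub for a single model call (messages in, reply string out); a real version would hit an actual LLM API.

```python
def ask(messages):
    # Hypothetical stub standing in for one LLM call.
    return "summary of: " + " | ".join(m["content"] for m in messages)

def restart_with_summary(history):
    """Distill a long (possibly 'infected') history into a short,
    fresh context instead of asking the model to double-check it."""
    summary = ask(history + [{
        "role": "user",
        "content": "Summarize the key facts and decisions so far.",
    }])
    # The new conversation starts from the summary alone, so earlier
    # wrong turns no longer sit verbatim in the context window.
    return [{"role": "user", "content": summary}]
```

This is only a sketch of the workflow described above; whether the summary actually drops the bad parts still depends on the model doing the summarizing.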


This. If an AI spits out incorrect code, then I immediately create a new chat and reprompt with additional context.

'Don't use regex for this task' is a common addition for the new chat. Why does AI love regex for simple string operations?
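For what it's worth, the built-in string methods cover most of the "simple string operation" cases that models reach for regex on. A quick Python comparison (the filename is just a made-up example):

```python
s = "user_2024_report.csv"

assert s.startswith("user_")  # instead of re.match(r"user_", s)
assert s.endswith(".csv")     # instead of re.search(r"\.csv$", s)
assert "2024" in s            # instead of re.search("2024", s)

parts = s.split("_")          # instead of re.split("_", s)
print(parts)                  # ['user', '2024', 'report.csv']
```

The string-method versions are both easier to read and harder to get subtly wrong (no escaping, no anchors to forget).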


I used to do this as well, but Gemini 2.5 has improved on this quite a bit and I don't find myself needing to do it as much anymore.


Is that currently not allowed?


Well, no. You can say that's what you're doing, and so can your future wife, but it's still incredibly easy to back out of if you change your mind, with few legal, economic, or social barriers to doing so.

On the one hand, these barriers can keep people trapped in truly abusive situations, and it is important for such people to be able to escape. But on the other, 'I don't love my husband/wife anymore', is not any great horror, and I'd hazard that most people who are happily married till death have had at least one long period, potentially of multiple years, where they don't feel as though they love their spouse. But they work through it and things improve. There's something about being 'trapped' with someone that motivates people to make things work in a way that they wouldn't if they know there is an out.


I believe the world has changed and there is no way this would work in our culture anymore. Personally, I treat not being divorced as a kind of achievement, but I've realized many if not most people don't share my sentiment.


Yeah, I agree. I don't think there's any sort of legal solution to the problem, since the problem is social, not legal. I was just trying to explain what I think the other person was getting at.

