I'm curious how you managed to get access to testing centers in NYC. I thought there was a law that restricted this last I checked.


In our case, we're providing the testing as part of a comprehensive program (including a review of results with a doctor, customized nutrition plans, etc.), so it's slightly different from companies that only offer the labs in isolation.


Don't bother, it's always crickets on this one. China doesn't play by the rules in literally anything and "well that's too bad".


I used to feel this way as well. But meetings can spiral out of control. In my previous company, my stand ups were taking 30 min minimum and sometimes up to an hour. At one point, I was looking at 3 hours of meetings a day. As an IC, I felt pretty powerless because it was my EM that was making these decisions.


This is what the Internet is for.


Poor comparison


Not so! Either both of the comments are meaningful, or both are meaningless.


Absolutely not. One thing happens because of a set of physical laws that govern the universe. These laws were discovered due to a massive number of observations of multiple phenomena by a huge number of individuals over (literally) thousands of years, leading to a standard model that is broadly comprehensive and extremely robust in its predictions of millions or possibly even billions of separate events daily.

The other thing we have only a small number of observations of, spread over the last 50 or 60 years but mostly from the last 5 years or so. We know some of the mathematical features of the phenomena we are observing but not all, and there is a great deal going on that we don't understand (emergence in particular). The things we are seeing contradict most of the academic field of linguistics, so we don't have a theoretical basis for them either, outside of the maths. The maths (linear algebra) we understand well, but we don't really understand why this particular formulation works so well on language-related problems.

Probably the models will improve, but we can't naively assume this will just continue. One very strong result we have seen time and time again is that there seems to be a super-linear relationship between the computation and training-set size required and the capability gained. So for every delta-x increase we want in capability, we seem to pay (at least) x^n (n > 1) in computation and training required. That says that at some point increases in capability become infeasible unless much better architectures are discovered. It's not clear where that inflection point is.
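
To make that concrete, here's a toy sketch of the kind of super-linear cost curve I mean (the exponent and numbers are purely illustrative, not measured values):

    # Toy illustration of the claim above: if compute ~ capability**n with n > 1,
    # each equal step in capability costs more than the previous one.
    # The exponent is a made-up value for illustration, not a measured scaling law.
    n = 2.5

    def compute_needed(capability):
        return capability ** n

    prev = 0.0
    for capability in range(1, 6):  # equal +1 steps in "capability"
        total = compute_needed(capability)
        print(f"capability {capability}: total compute {total:7.1f}, "
              f"marginal cost of this step {total - prev:7.1f}")
        prev = total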


Well, based on observations we know that the sun doesn't rise or set; the earth turns, and gravity and our position on the surface create the impression that the sun moves.

There are two things that might change: the sun stops shining, or the earth stops moving. Of the known possible ways for either of those things to happen, we can fairly conclusively say neither will be an issue in our lifetimes.

An asteroid coming out of the darkness of space and blowing a hole in the surface of the earth, kicking up such a dust cloud that we don't see the sun for years, is a far more likely, if still statistically improbable, scenario.

LLMs, by design, create combinations of characters that are disconnected from the concept of True, False, Right or Wrong.


Is the function of human intelligence connected to true, false, right or wrong? These things are 'programmed' into you after you are born, through systematic steps.


Yes, actually. People may disagree on how to categorize things, but we are innately wired to develop these concepts. Erikson and Piaget are two examples of theorists in the field of child psychology who developed formalizations for emotional and mental stages of development. Understanding that a thing "is" is central to these developmental stages.

A more classic example is Freud's delineation of the id, ego and super-ego. Only the last is built upon imparted cultural mores; the id and ego are purely internal things. Disorders within the ego (excessive defense mechanisms) inhibit perception of what is true and false.

Chatbots / LLMs don't consider any of these things; they consider only "what is the most likely response to a given input?" The result may, by coincidence, happen to be true.


I don't understand why that is necessarily true.


Because they are both statements about the future. Either humans can inductively reason about future events in a meaningful way, or they can’t. So both statements are equally meaningful in a logical sense. (Hume)

Models have been improving. By induction they’ll continue until we see them stop. There is no prevailing understanding of models that lets us predict a parameter and/or training set size after which they’ll plateau. So arguing “how do we know they’ll get better” is the same as arguing “how do we know the sun will rise tomorrow”… We don’t, technically, but experience shows it’s the likely outcome.


It's comparing the outcome that a thing that has never happened before will happen (no specified time frame), versus the outcome that a thing that has happened billions of times will suddenly not happen (tomorrow). The interesting thing is, we know for sure the sun will eventually die. We do not know at all that LLMs will ever stop hallucinating to a meaningful degree. It could very well be that the paradigm of LLMs just isn't enough.


What? LLMs have been improving for years and years as we’ve been researching and iterating on them. “Obviously they’ll improve” does not require “solving the hallucination problem”. Humans hallucinate too, and we’re deemed good enough.


Humans hallucinate far less readily than any LLM. And "years and years" of improvement have made no change whatsoever to their hallucinatory habits. Inductively, I see no reason to believe why years and years of further improvements would make a dent in LLM hallucination, either.


As my boss used to say, "well, now you're being logical."

The LLM true believers have decided that (a) hallucinations will eventually go away as these models improve, it's just a matter of time; and (b) people who complain about hallucinations are setting the bar too high and ignoring the fact that humans themselves hallucinate too, so their complaints are not to be taken seriously.

In other words, logic is not going to win this argument. I don't know what will.


I don’t know if it’s my fault or what, but my “LLMs will obviously improve” comment is specifically not “LLMs will stop hallucinating”. I hate the AI fad (or maybe I'm just annoyed with it), but I’ve seen enough to know these things are powerful and going to get better with all the money people are throwing at them. I mean, you’d have to be willfully ignoring reality lately not to have been exposed to this stuff.

What I think is actually happening is that some people innately have taken the stance that it’s impossible for an AI model to be useful if it ever hallucinates, and they probably always will hallucinate to some degree or under some conditions, ergo they will never be useful. End of story.

I agree it’s stupid to try and inductively reason that AI models will stop hallucinating, but that was never actually my argument.


> Humans hallucinate far less readily than any LLM.

This is because “hallucinate” means very different things in the human and LLM context. Humans have false/inaccurate memories all the time, and those are closer to what LLM “hallucination” represents than human hallucinations are.


Not really, because LLMs aren't human brains. Neural nets are nothing like neurons. LLMs are text predictors. They predict the next most likely token. Any true fact that happens to fall out of them is sheer coincidence.


This for me is the gist: if we are always going to be playing pachinko when we hit go, then where would a 'fact' emerge from anyway? LLMs don't store facts. Correct me if I am wrong, as my topology knowledge is somewhat rudimentary, but here goes: first my take, and after that I'll paste GPT-4's attempt to pull this into something with more clarity!

We are interacting with multidimensional topological manifolds, and the context we create has a topology within this manifold that constrains the range of output to the fuzzy multidimensional boundary of a geodesic that is the shortest route between our topology and the LLM.

I think some visualisation tools are badly needed; viewing what is happening is, for me, a very promising avenue to explore with regard to emergent behaviour.

GPT-4 says: "When interacting with a large language model (LLM) like GPT-4, we engage in a complex and multidimensional process. The context we establish – through our inputs and the responses of the LLM – forms a structured space of possibilities within the broader realm of all possible interactions.

The current context shapes the potential responses of the model, narrowing down the vast range of possible outputs. This boundary of plausible responses could be seen as a high-dimensional 'fuzzy frontier'. The model attempts to navigate this frontier to provide relevant and coherent responses, somewhat akin to finding an optimal path – a geodesic – within the constraints of the existing conversation.

In essence, every interaction with the LLM is a journey through this high-dimensional conversational space. The challenge for the model is to generate responses that maintain coherence and relevancy, effectively bridging the gap between the user's inputs and the vast knowledge that the LLM has been trained on."


If you believe humans hallucinate far less then you have a lot more to learn about humans.

There are a few recent Nova specials from PBS that are on YouTube that show just how much bullshit we imagine and make up at any given time. It's mostly our much older and simpler systems below intelligence that keep us grounded in reality.


It's like you said, "...our much older and simpler systems... keep us grounded in reality."

Memory is far from infallible but human brains do contain knowledge and are capable of introspection. There can be false confidence, sure, but there can also be uncertainty, and that's vital. LLMs just predict the next token. There's not even the concept of knowledge beyond the prompt, just probabilities that happen to fall mostly the right way most of the time.
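
To make "just predict the next token" concrete, here's a toy sketch (the vocabulary and scores are made up; real models work over tens of thousands of tokens and far richer context):

    # Toy sketch of next-token prediction: the model emits a score (logit) per
    # token, softmax turns scores into probabilities, and decoding picks one.
    # Nothing in this step checks whether the chosen token is true, only likely.
    import math

    vocab = ["Paris", "London", "Rome", "banana"]   # made-up vocabulary
    logits = [4.1, 2.3, 1.9, -3.0]                  # made-up scores for "The capital of France is"

    def softmax(scores):
        m = max(scores)                             # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    probs = softmax(logits)
    for token, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
        print(f"{token:>7}: {p:.3f}")

    next_token = vocab[probs.index(max(probs))]     # greedy decoding: take the argmax
    print("next token:", next_token)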


We don't know that the mechanism used to predict the next token would not be described by the model as "introspection" if the model was "embodied" (otherwise given persistent context and memory) like a human. We don't really know that LLMs operate any differently than essentially an ego-less human brain... and any claims that they work differently than the human brain would need to be supported with an explanation of how the human brain does work, which we don't understand enough to say "it's definitely not like an LLM".


I'm trying to interpret what you said in a strong, faithful interpretation. To that end, when you say "surely it will improve", I assume what you mean is, it will improve with regards to being trustworthy enough to use in contexts where hallucination is considered to be a deal-breaker. What you seem to be pushing for is the much weaker interpretation that they'll get better at all, which is well, pretty obviously true. But that doesn't mean squat, so I doubt that's what you are saying.

On the other hand, the problem of getting people to trust AI in sensitive contexts where there could be a lot at stake is non-trivial, and I believe people will definitely demand better-than-human ability in many cases, so pointing out that humans hallucinate is not a great answer. This isn't entirely irrational either: LLMs do things that humans don't, and humans do things that LLMs don't, so it's pretty tricky to actually convince people that it's not just smoke and mirrors, that it can be trusted in tricky situations, etc. which is made harder by the fact that LLMs have trouble with logical reasoning[1] and seem to generally make shit up when there's no or low data rather than answering that it does not know. GPT-4 accomplishes impressive results with unfathomable amounts of training resources on some of the most cutting edge research, weaving together multiple models, and it is still not quite there.

If you want to know my personal opinion, I think it will probably get there. But I think in no way do we live in a world where it is a guaranteed certainty that language-oriented AI models are the answer to a lot of hard problems, or that they will get there really soon just because the research and progress has been crazy for a few years. Who knows where things will end up in the future. Laugh if you will, but there's plenty of time for another AI winter before these models advance to a point where they are considered reliable and safe for many tasks.

[1]: https://arxiv.org/abs/2205.11502


> What you seem to be pushing for is the much weaker interpretation that they'll get better at all, which is well, pretty obviously true. But that doesn't mean squat, so I doubt that's what you are saying.

I mean, this is what I was saying. I just don't think that the technology has to become hallucination-free to be useful. So my bad if I didn't catch the implicit "any hallucination is a dealbreaker, so why even care about security" angle of the post I initially responded to.

My take is simply that "these things are going to be used more and more as they improve, so we had better start worrying about supply chain and provenance sooner rather than later". I strongly doubt hallucination is going to stop them from being used, despite the skeptics, and I suspect hallucination is a problem of lack of context more than of innate shortcomings, but I'm no expert on that front.

And I'm someone who's been asked to try and add AI to a product and had the effort ultimately fail because the model hallucinated at the wrong times... so I well understand the dynamics.


Just because you can inductively reason about one thing doesn't mean you can inductively reason about all things.

In particular, you absolutely can't just continue to extrapolate short-term phenomena blindly into the future and pretend that has the same level of meaning as things like the sun rising, which are the result of fundamental mechanisms that have been observed, explored and understood iteratively better and better over an extremely long time.


> Turns out reading HN comments is not a good way to run a company. Who would have thought?

Literally everyone thought this, and probably even the talentless hacks that you hired. Strange that you had to create a throwaway account to figure this one out.


Why hire these people then?


I live in NYC and took the Tube when I lived in London. I've also lived in cities with superior public transportation, like Seoul and Tokyo. The "aesthetics" aspect that you describe is an incredible understatement. The nearest MTA station is covered in feces and used syringes, and I'm not exaggerating. Trains are constantly late. Apparently building a barrier and a gate on the platform is a 10-year, trillion-dollar project. I have to put my back to a wall because I'm worried some crazy person will push me onto the track. Yeah, I wouldn't concur with the statement about the MTA being so functional.


I have literally never seen a subway station “covered in feces and used syringes” - or even one single syringe in a station - in my life. How many times have you actually ridden the subway?


Have you ever visited the 155th station? Maybe you should actually try living in the city before talking. Or maybe Penn station which is next to the biggest methadone clinic in the city. If you think that I'm saying the station is literally covered from floor to ceiling with feces and syringes, then no that's not what I'm saying. But maybe you're just so comfortable with the griminess that it doesn't bother you.


I live in New York City, but no, I haven’t spent time in a random local station near Sugar Hill. Not sure why that would invalidate the rest of my experience. Penn Station is fine. Methadone is taken orally (as a drink) to treat addiction and no syringes are involved, so the methadone clinic wouldn’t have anything to do with syringes in the station.


No matter where you live, the subway elevators always pull double duty as restrooms, and I've never stepped into one without being hit by that pungent scent of urine. But hey, you're absolutely right, the subway is just peachy.


> Penn Station is fine.

Yeah you either don't live in NYC, or you just have a lower standard for basic hygiene. But let's agree to disagree.


Exactly. I worked across the street for 2 years and the things I saw there on a weekly basis were interesting, to say the least. My wife worked a block away for 4 years; same story.

Was also the first place I saw an extremely locked down Duane Reade where I had to ask for help to get almost any product off the shelf. Understandable, given the above.


Huh, what station? I also live in NYC but haven’t seen any subway stations nearly that bad. Grimy, definitely. But never what you’re describing.


Sports betting has gotten out of control. I used to be an avid UFC and NBA fan, but the constant pushing of betting lines has really hurt sports.


Agree. I think reddit will eventually die like Digg in a few years. I've been on reddit for 15 years now. It's been getting steadily worse.


And the users have been getting steadily worse too...

