Hacker News | chrisfosterelli's comments

Among endurance athletes, collagen supplements have become increasingly popular over the past couple of years -- from what I understand the evidence is kind of mixed, though

e.g. https://thefeed.com/products/pillar-performance-collagen-1?v...


I don't get what the supposed mechanism of action is here. Collagen is a hard-to-digest protein, and it has to be broken down to be absorbed, at which point it's no longer collagen. Why not just eat any other protein source instead?

Yes, that seems to be roughly the criticism behind the mixed results. That said, not everyone has a complete protein diet, so the idea is that even though the collagen breaks down, you then theoretically have all the building blocks you need, should your body choose to use them to build collagen.

But I agree, I'd rather start solving deficiencies at the diet level than the supplement level and haven't integrated collagen personally so far.

TBH I suspect marketing plays a big role. "Collagen = good, therefore just buy it and eat it" makes logical sense if you don't actually do any research first.


Different protein sources have different amino acid compositions and they have different effects on the body.

Still, if you eat enough complete protein sources you'll have all the amino acids you need.

Likely yes, but I've had enough chicken breast for life during my fit 20s. I just don't feel like stuffing myself with tasteless sources of protein anymore, and tasty ones (burgers, steaks, grilled salmon, etc.) carry unwanted side effects and risks when consumed in large amounts.

Even bodybuilders and powerlifters admit that 2g protein per kg of body weight is about all you need. You can get that with a normal diet and a couple of protein shakes, which taste fine if you use milk and half a banana. You don't need to eat a whole chicken breast for every meal.

I made spaghetti bolognese last night and it had 60 grams of protein per 800 kcal serve. Admittedly I used lean kangaroo mince, because I'm Australian and it was on sale. Still: three meals like that and you wouldn't even need a protein shake.
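
To put rough numbers on that (a quick back-of-the-envelope sketch; the 80 kg body weight and 25 g scoop are just assumed example figures, not anyone's actual stats):

    # Rough protein math under assumed example figures
    body_weight_kg = 80                      # assumed example
    target_g_per_kg = 2.0                    # the 2 g/kg figure mentioned above
    daily_target_g = body_weight_kg * target_g_per_kg   # 160 g/day

    meals_g = [60, 60, 60]                   # three meals like the bolognese above
    shake_g = 25                             # one typical whey scoop, if needed

    from_meals = sum(meals_g)                # 180 g
    print(f"target: {daily_target_g:.0f} g, from meals alone: {from_meals} g")
    # Three ~60 g meals already clear a 2 g/kg target at this example weight.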


> lean kangaroo mince

I first read "mice" which was startling, then realized it was "mince" and then realized it was kangaroo! How would you describe the flavor?


Gamy, a bit like deer.

I have yet to see anything against regularly eating fish, including fatty fish, other than the "scare" of heavy metals, which only applies to wild fish.

Maybe, maybe not. It would depend on a variety of factors including the activities you do, your age, etc. Maybe athletes need more collagen compared to people who don’t exercise, etc, etc.

Also complete protein sources are definitely not easy to get. Good luck if you have dietary constraints.


Notably, whey is a complete protein source and very easy to get, while collagen is a crappy source.

I just wish whey was as easy to get unflavored. Let me handle the flavoring, you provide the protein.

Yes, I'm aware you can buy unflavored whey protein, but it's more expensive and you have to order it online. I can get a huge tub of "Delicious quadruple chocolate delight" BS from Costco for comparative pennies.


Quinoa is a complete protein, containing all amino acids that the body cannot produce on its own, and is gluten free.

You need to eat a ton of quinoa. Only the most dedicated vegan bodybuilder will eat that much quinoa. No thanks.

Also, only animal sources contain the amino acid hydroxyproline in significant amounts, and you pretty much only get it from collagen sources.

So while quinoa and others, even whey, might advertise themselves as complete protein sources, no, they do not contain all the amino acids humans can use in significant amounts.


Hydroxyproline isn't essential though; humans produce what they need when making their own collagen, which they synthesize themselves just like most other animals.

> You need to eat a ton of quinoa

Most don't: at 4.4% protein, a 65kg man like me needs 1.5kg of cooked quinoa per day and it's not a big deal:

- You'll digest it like a king: quinoa is full of soluble and insoluble fibers and you won't feel puffy from eating too much. Easy in, easy out.

- Like milk or wheat, there are many possible transformations: flour, flakes, marinades, beverages, soups... always a joy to cook and eat, no boredom with that grain.

Can't speak for the bodybuilders, but I'm sure most manage their nutrition. I think soy/pea is more popular.

By the way, very few people eat only quinoa or any other single food. They also get amino acids from grains, pulses and seeds... even fruits like tomatoes, though that's obviously negligible.

Quinoa is also full of minerals and vitamins, and its proteins have the same biological value (BV) as beef - or higher, depending on the source.

> its high-quality protein, complete set of amino-acids, and high content of minerals and vitamins. [0]

> exceptional balance between oil, protein and fat [1]

> Quinoa has a high biological value (73%), similar to that of beef (74%) [2]

0 https://www.tandfonline.com/doi/full/10.1080/15528014.2022.2...

1 https://scijournals.onlinelibrary.wiley.com/doi/10.1002/jsfa...

2 https://www.researchgate.net/publication/303845280_Quinoa_Ch...


There are plenty of studies showing that collagen supplementation helps athletes. Which means there are cases where the body doesn’t produce enough collagen for itself. And as you age, your body produces less collagen. Reasons enough to supplement with collagen.

Vegan bodybuilders would just use pea/rice protein extracts, which are about as commercially available as whey (very).

> Also complete protein sources are definitely not easy to get.

... all of the essential amino acids? What is difficult about that?


I was going to say that too, thanks for beating me to it!

> 3. With all the modern corporate doublespeak trainings, there is 0 chance that something would be called “desperation score” in us business.

This is a good point. It'd almost certainly be called something like 'payrate sensitivity factor'


Something like this can be reworded to make it sound like a “good” thing: “mission-centricity” or “dedication” or some other label that spins this as being committed to the company (leaving out that this is at their own expense).

Acceptance Elasticity

I wonder how Air Canada reconciles this. There was a popular Globe and Mail article a while ago that gave awful rankings to Air Canada's water tanks -- so the company put up signs in the bathroom saying the water is non-potable and called it a day.

Not super comforting if they're then using the same 'non-potable' water to make coffee...


>Not super comforting if they're then using the same 'non-potable' water to make coffee...

It's presumably boiled, which makes it potable?


I guess I don't know exactly how these airplane coffee machines work, but in general ideal coffee brewing does not reach a full boil.

Boiling it will kill bacteria, but not remove toxins (if there are any).

Is there any reason to expect there would be "toxins", given that it's just water? I can imagine how there might be accumulated toxins if it's a pack of chicken breasts left in a hot car for 8 hours, but if it's water it should be fine? After all, boiling water is a tried and true way of making water safe to drink.

Heavy metals [1], nitrate and nitrite [2], and most probably PFAS (couldn’t find anything about this, but since it’s everywhere…)

[1] https://www.webpronews.com/study-exposes-airline-water-conta... [2] https://www.ncbi.nlm.nih.gov/books/NBK310709/#:~:text=Beside...


> After all, boiling water is a tried and true way of making water safe to drink.

It's not.


https://en.wikipedia.org/wiki/Boiling#For_making_water_potab...

Yes, there are substances that slip through, but it works well enough for most cases that it's probably fine. Otherwise you get into weird edge cases like "what if there are prions in the water?!?" or whatever.


Heavy metals are a big problem, especially from the cheap brass fittings common in outdoor water hoses. Indoor plumbing, by contrast, uses copper and/or PEX tubing, so there's near-zero risk of metal poisoning (caveat on cheap PEX fittings -- don't use those).

I was recently talking to a colleague I went to school with and they said the same thing, but for a different reason. We both did grad studies with a focus on ML, and at the time ML as a field seemed to be moving so fast. There was a lot of excitement around AI again, finally, after the 'AI winter'. It was easy to participate in bringing something new to the field, and there were so many unique and interesting models coming out every day. There was genuine discussion about a viable path to AGI.

Now, basically every new "AI" feature feels like a hack on top of yet another LLM. And sure, the LLMs seem to keep getting marginally better, but the only people with the resources to actually work on new ones anymore are large corporate labs that hide their results behind corporate facades and give us mere mortals an API at best. The days of coding a unique ML algorithm for a domain-specific problem are pretty much gone -- the only thing people pay attention to is shoving your domain-specific problem into an LLM-shaped box. Even the original "AI godfathers" seem mostly uninterested in LLMs these days, and most people in ML seem dubious that simply scaling up LLMs more and more is a likely path to AGI.

It seems like there's more excitement around AI for the average person, which is probably a good thing I suppose, but for a lot of people who were into the field it's not really that fun anymore.

In terms of programming, I think they can be pretty fun for side projects. The sort of thing you wouldn't have had time to do otherwise. For the sort of thing you know you need to do anyway and need to do well, I notice that senior engineers spend more time babysitting them than benefitting from them. LLMs are good at the mechanics of code and struggle with the architecture / design / big picture. Seniors don't really think much about the mechanics of code, it's almost second nature, so they don't seem to benefit as much there. Juniors seem to get a lot more benefit because the mechanics of the code can be a struggle for them.


> Now, basically every new "AI" feature feels like a hack on top of yet another LLM.

LLM user here with no experience of ML besides fine-tuning existing models for image classification.

What are the exciting AI fields outside of LLMs? Are there pending breakthroughs that could change the field? Does it look like LLMs are a local maximum and other approaches will win through - even just for other areas?

Personally I'm looking forward to someone solving 3D model generation as I suck at CAD but would 3D print stuff if I didn't have to draw it. And better image segmentation/classification models. There's gotta be other stuff that LLMs aren't the answer to?


Well, one of the inherent issues is assuming that text is the optimal modality for everything we try to use an LLM for. LLMs are statistical engines designed to predict the most likely next token in a sequence of words. Any 'understanding' they do is ultimately incidental to that goal, and once you look at them that way a lot of the shortcomings we see become more intuitive.
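
As a toy illustration of that 'predict the most likely next token' framing (a minimal sketch over a made-up corpus; real LLMs use neural networks over huge vocabularies, not bigram counts):

    from collections import Counter, defaultdict

    # Count which token follows which in a tiny made-up corpus.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(token):
        # Return the statistically most likely next token, or None if unseen.
        following = counts[token]
        return following.most_common(1)[0][0] if following else None

    print(predict_next("the"))  # -> "cat", chosen purely by frequency, no 'understanding'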

There are a lot of problems LLMs are really useful for, because generating text is what you want to do. But there are tons of problems for which we would want some sort of intelligent, learning behaviour that do not map to language at all. There are also a lot of problems that can "sort of" be mapped to a language problem but make pretty wasteful use of resources compared to an (existing or potential) domain-specific solution. For purposes of AGI, you could argue that trying to express "general intelligence" via language alone is fundamentally flawed altogether -- although that quickly becomes a debate about what actually counts as intelligence.

I pay less attention to this space lately so I'm probably not the most informed. Everyone seems so hyped about LLMs that I feel like a lot of other progress gets buried, but I'm sure it's happening. There are some problem domains that are obviously solved better with other paradigms currently: self-driving tech, recommendation systems, robotics, game AIs, etc. Some of the exciting stuff that can likely solve some problems better in the future is the work on world models, graph neural nets, multimodality, reinforcement learning, alternatives to gradient descent, etc. I think it's a debate whether or not LLMs are a local maximum, but many of the leading AI researchers seem to think so -- Yann LeCun, for example, recently said LLMs 'are not a path to human-level AI'.


It’s now moving faster than ever. Huge strides have been made in interpretability, multimodality, and especially the theoretical understanding of how training interacts with high-dimensional spaces. E.g.: https://transformer-circuits.pub/2022/toy_model/index.html

Thanks, this seems interesting. I'll give it a read. I admittedly don't keep tabs as much as I should these days. I feel like every piece of AI news is about LLMs. I suppose I should know other people are still doing interesting things :)

I often do this in meetings and have gotten into the habit of saying "I'm thinking". It's not much but it gives both of us time to think and explicitly makes it clear I don't expect the person to say something. I think that helps.


I just blurt out "processing" when they start looking at me weird. People tend to take it well.


I read this at first as "I just blurt out "processing" and people look at me weird", which made me smile.


Fair enough, I do like the parent's a bit better; blurting out "processing" feels like too high a default setting right after saying "I'm thinking" :) - not that any of it matters anyway, communicating _something_ gets you there. The rest is just triaging around the edges of what people will call you weird for, and if they do, they were going to anyway.


"Give me a second" is something I say when someone just has to break the silence with some unproductive comment. Having 20-30 seconds to think silently should be a completely normal thing.


”Thinking longer for a better answer”


A whole other argument could be made about the inherent assumption that a ping timeout is caused by an event that affects only one machine.


For sure. Having lived on IRC for a while many years ago, I can assure any bystanders that this is definitely not always the case.


Imagine them trying to sue every person on one side of a netsplit


...and back in my day (yeah I am becoming an old fart), it was dead simple to cause a netsplit on most networks.


I'll admit to sending a couple of the messages that made Linksys routers restart. I also set up automatic k-lines on Snoonet for these very strings, years ago


Ergo isn't a federated server, it's meant to scale vertically


The internet is a "federated" network though, so their point still applies.


No, Ergo doesn't have netsplits because there isn't anything to split with. The point does not apply.


There are events that may affect more than one machine which are not netsplits.

e.g. an ISP with common users experiences an outage, an IRC client with common users has a bug, common users within the same time zone have automated system updates run at the same time, the IRC server experiences an upstream network disruption affecting only some routes, a regional power outage occurs, a hosted bouncer service with common users has an outage, etc, etc, etc...
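
A toy sketch of the kind of heuristic a server could apply before treating each timeout as an isolated event (the window and threshold are made-up illustrative values, not anything Ergo actually does):

    from collections import deque
    import time

    WINDOW_SECONDS = 30          # assumed window for "simultaneous" timeouts
    CORRELATED_THRESHOLD = 20    # assumed count suggesting a shared cause

    recent_timeouts = deque()    # timestamps of recent ping timeouts

    def record_timeout(now=None):
        # Record a ping timeout; return True if it looks correlated with others.
        now = time.time() if now is None else now
        recent_timeouts.append(now)
        while recent_timeouts and now - recent_timeouts[0] > WINDOW_SECONDS:
            recent_timeouts.popleft()
        return len(recent_timeouts) >= CORRELATED_THRESHOLD

    # A burst of timeouts in one window hints at an ISP, route, or power issue
    # rather than twenty users individually walking away from their keyboards.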


What's the right amount of standards to have when you're writing 9 million lines of code that control a 30,000 lb machine moving through the sky at Mach 1 with a human life inside?



Player compatibility. Netflix can use AV1 and send it to the devices that support it while sending H.265 to those that don't. A release group puts out AV1 and a good chunk of users start avoiding their releases because they can't figure out why it doesn't play (or plays poorly).
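
Roughly the per-device decision a streaming service can make but a static release file can't (a minimal sketch; the capability labels are made up, not a real player API):

    # Pick the best codec a given player actually supports.
    CODEC_PREFERENCE = ["av1", "hevc", "h264"]   # best-to-worst, illustrative labels

    def pick_codec(device_supported):
        for codec in CODEC_PREFERENCE:
            if codec in device_supported:
                return codec
        return "h264"  # safe fallback nearly everything can decode

    print(pick_codec({"hevc", "h264"}))  # -> "hevc": this device never sees AV1

    # A downloaded release is one fixed file, so there's no equivalent negotiation --
    # whoever grabs the AV1 encode is stuck with it.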


OK this would obviously be bad, I think everyone gets that.

But the note in the article is getting at something that feels interesting. I think there's a more fruitful conversation around "how might this work in spirit?" instead of "would this work literally?"


> Please, look up how many people it is capable of moving per hour, and compare that to any light rail or street car service in any city.

I have a hard time understanding this criticism. Why not do both?

It seems to me like underground highways make sense as an alternative to above-ground highways in urban areas, not as an alternative to rail. There are lots of cities with excellent public transport that also make use of underground car travel (Melbourne, for example). If a company can figure out how to (safely) build underground highways more quickly and more affordably, it seems like that means we may need to build above-ground roads less frequently -- why would that not be a good thing?

Further, Musk obviously has a PR angle in facilitating Tesla traffic here as the test bed in the early days, but I don't see any reason this couldn't be repurposed for rail use at scale.


In urban areas, they're usually an alternative. If you're going past the city, you could build a ground level highway around the city for a lot cheaper. If you're going into the city, it makes more economic sense to leave your car at the periphery of the city and take a rail system in because of the difference in throughput per $ spent building it (as well as the space occupied by parking for people who need to leave their cars in the city). Plus the people leaving the highway will get onto surface streets, and back up the highway.

Being able to make underground tunnels cheaper and faster is cool. Using them for cars is mostly a boondoggle with clearly superior alternatives.


I think that's reasonable. I suppose I also think it's idealistic that cities will actually act that way in practice in the short term. I'm specifically thinking of examples like the Corniche highway in Alexandria or Marine Drive in Mumbai, which show cities are willing to give up gorgeous public space throughout incredibly dense areas to support car traffic. But there are also examples like Boston's "Big Dig", which show cities are willing to spend extra to move those auto pathways underground. At least in the short term it seems that 1) cities aren't giving up entirely on cars, but 2) are willing to pay more to have them underground.

I suspect in practice the actual approach is going to be a mix of all of the above. So my reasoning is primarily that since cities won't give up cars anyway, it seems objectively better to make it easier to at least move more of them underground. I suppose one case where I would change my mind is if there were evidence that more affordable underground roads reduced investment in public transit.


> I suppose one case where I would change my mind is if there was evidence that more affordable underground roads reduced the investment in public transit.

It's Friday night so I lack the motivation to go on a stats-finding expedition, but anecdotally this seems like a circular issue to me. Public transportation sucks, so no one wants to fund it and we invest money into car infrastructure. Traffic gets worse, but public transportation is still bad because we haven't improved it, so we dump more money into car infrastructure, and so on.

I do hear you about the practical realities, though. Most people will drive if they can, because it is more convenient (so long as we can keep building more roads, even at exorbitant prices).

I think there would be far less support if people could see what they're actually spending on car infrastructure. At least in the US, it's currently so fractured it's hard to get an idea. Registration fees, gas taxes, federal taxes that get pumped into highway maintenance, etc. There's no clear "we spend $X on car infrastructure, and we could have really good public transportation for $Y".

