FiberBundle's comments | Hacker News

I'd say it isn't in the US though, at least there's precedent that makes me think that: https://en.m.wikipedia.org/wiki/National_Basketball_Ass%27n_...


I think you misunderstand that case. STATS "compiled its scores and statistics by employing people to listen or watch the games, then enter the scores on the computer which transmits the scores to STATS' on-line service, to be sent out to anyone using a SportsTrax pager."

Notice how they watched the games and compiled the statistics themselves. The restrictions are about taking data from the scoreboard and data displays and reselling/commercialising that data. It is, however, legal to watch the game and compile and distribute your own stats, because the facts of the game are in the public domain.

Because of this, many betting and data-collection companies have to pay people to watch the game rather than just scrape the scoreboard (which is the context in which I learnt about this). Ironically, at the venue, OCR is a common way to get scoreboard data.
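
To make "OCR the scoreboard" concrete, here's a rough sketch of the idea using OpenCV and pytesseract. The region coordinates and the input file name are made up for illustration, and real at-venue systems need far more robustness (odd scoreboard fonts, glare, camera movement):

    # Rough sketch of scoreboard OCR from a fixed camera feed.
    # Assumes OpenCV and pytesseract are installed and that the
    # scoreboard occupies a known, fixed region of the frame.
    import cv2
    import pytesseract

    SCORE_REGION = (50, 40, 300, 80)  # hypothetical x, y, width, height

    def read_score(frame):
        x, y, w, h = SCORE_REGION
        roi = frame[y:y + h, x:x + w]
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        # Threshold so the digits stand out for the OCR engine.
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Restrict Tesseract to digits and a single text line; real
        # scoreboard fonts usually need tuning or a trained model.
        text = pytesseract.image_to_string(
            binary, config="--psm 7 -c tessedit_char_whitelist=0123456789:")
        return text.strip()

    cap = cv2.VideoCapture("venue_feed.mp4")  # hypothetical input file
    ok, frame = cap.read()
    if ok:
        print(read_score(frame))
    cap.release()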


I'm not a lawyer, but my interpretation of the lawsuit, based on the Wikipedia article, is that game results/scores are public facts and hence not copyrightable data. I don't see how the method by which that public data is collected changes anything material about the case. Are you saying that inferring the score from the scoreboard is what makes this illegal (and why)? What if they were to infer the score using motion/ball tracking instead?


If you use CV to track the players, the ball, etc. from a broadcast, that is fine; the scoreboard, however, is not so straightforward. FWIW, doing CV on broadcast footage for accurate scoring of sports is nigh-on impossible due to edge cases, but human-in-the-loop systems exist. There are also numerous in-venue CV systems which automatically collect game and player information.
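
(To make the tracking side concrete, here's a toy colour-threshold tracker with OpenCV. It assumes a distinctly coloured ball and a clean, static feed; the HSV range and file name are invented. Broadcast footage, with cuts, occlusion and replays, is exactly where this kind of thing falls apart, hence the human in the loop.)

    # Toy ball tracker: segment by colour, take the largest blob's centroid.
    # The HSV range below is a made-up value for an orange ball and would
    # need calibration against any real footage.
    import cv2
    import numpy as np

    LOWER = np.array([5, 120, 120])    # hypothetical HSV lower bound
    UPPER = np.array([20, 255, 255])   # hypothetical HSV upper bound

    cap = cv2.VideoCapture("broadcast.mp4")  # hypothetical input file
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            ball = max(contours, key=cv2.contourArea)
            m = cv2.moments(ball)
            if m["m00"] > 0:
                cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
                print("ball at", cx, cy)
    cap.release()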


In my experience you just don't keep as good a map of the codebase in your head when LLMs write a large part of it as when you write everything yourself. Having a really good map of the codebase in your head is what brings the large productivity boosts when maintaining the code. So while LLMs do give me a 20-30% productivity boost on the initial implementation, they bring huge disadvantages after that, which is why I still mostly write code myself and use LLMs only as a Stack Overflow alternative.


I have enough projects that I'm responsible for now (over 200 packages on PyPI, over 800 GitHub repositories) that I gave up on keeping a map of my codebases in my head a long time ago - occasionally I'll stumble across projects I released that I don't even remember existing!

My solution for this is documentation, automated tests and sticking to the same conventions and libraries (like using Click for command line argument parsing) across as many projects as possible. It's essential that I can revisit a project and ramp up my mental model of how it works as quickly as possible.
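
As a small illustration of what "sticking to the same conventions" buys you, a typical entry point in that style looks something like this sketch (the command and option names here are invented, not from any real project):

    # Minimal sketch of a Click-based CLI entry point; the argument and
    # option names are hypothetical, for illustration only.
    import click

    @click.command()
    @click.argument("path")
    @click.option("--verbose", is_flag=True, help="Show extra output.")
    def cli(path, verbose):
        "Process PATH and print a summary."
        if verbose:
            click.echo(f"Processing {path}...")
        click.echo(f"Done with {path}")

    if __name__ == "__main__":
        cli()

Because every CLI is structured the same way, re-reading the --help output is usually enough to start rebuilding the mental model.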

I talked a bit more about this approach here: https://simonwillison.net/2022/Nov/26/productivity/


You're an extreme outlier. Most programmers probably work with 1-3 codebases. Obviously you can't keep 800 codebases in your head, and you have to settle for your approach out of necessity. I find it hard to believe you get anywhere close to the productivity benefits of a good mental map of a codebase with just good documentation and extensive test coverage. I don't have any data on this, but from experience I'd say that people who really know a codebase can be 10-50x as fast at fixing bugs as those with only a mediocre familiarity with it.


The evolution of a codebase is an essential missing piece of our development processes. Barring detailed design docs that no one has time to write and then update, understanding that evolution is the key to understanding the design intent (the "why") of the codebase. Without that why, there will be no consistency, and less chance of success.

"Short cuts make long delays." --Tolkien


This has nothing to do with being left-leaning. If you disagree with HN's sentiment here, it just means you have completely lost the plot.


Ah yeah, the deer are just confused, no need to worry. Keep partying!


It didn't appear in the last two years. We have had deep-learning-based language modelling (Word2Vec, for example, dates back to 2013) for at least 10 years.


Early computer networks appeared in the 1960s and the public internet as we know it in the 1990s.

We are still early in AI.


Totally, and I've been working with attention since at least 2017. But I'm colloquially referring to the real breakout and the substantial scale-up in resources being thrown at it.


This is actually the result that I find way more impressive. Elite mathematicians think these problems are challenging and thought they were years away from being solvable by AI.



DHH + Basecamp aren't exactly a run-of-the-mill company hiring mediocre talent.

They have the skills, cash flow, and resources to do whatever they want.


Why can't you just debug this yourself? I don't think completely relying on LLMs for something like this will do you any good in the long run.


Well... why ask LLMs to do anything for us? :) Sure, I could debug it myself, but the whole point is to have a second brain fix the issue so that I can focus on the next feature.

If you're curious, I knew nothing about shader programming when I first played around. In that specific experiment, I wanted to see how far I could push Claude to implement shaders and how capable it is of correcting itself. In the end, I got a pretty nice dynamic lighting system with some cool features, such as cast shadows, culling, multiple shader passes, etc. Asking questions along the way taught me many things about computer graphics, which I later checked against different sources; it was like a tailor-made tutorial where I was "working" on exactly the kind of project I wanted.


Why not? It depends on how you use these systems. Let the LLM debug this for me and give me a clear explanation of what's happening and what the possible solution paths are; then it's on me to evaluate and make the right decision. Don't rely blindly on these systems, in the same vein as you shouldn't rely blindly on some solution found while using Google.


A reasonable answer is that this is our future one way or another: the complexity of programs is exceeding the ability of humans to properly manage them, and cybernetic augmentation of the process is the way forward.

i.e. there would be a lot of value in an AI that could maintain a detailed understanding of, say, the Linux kernel code base and, when someone is writing a driver, actively prompt about possible misuses, bugs or implementation misunderstandings.


That's a different question, though. The person you replied to was asked to explain why they think Sonnet 3.5 works well/better compared to GPT-4o, and they gave a good answer: Sonnet actually takes context and new information into account better when following up.

They might be able to debug it themselves, maybe they should be able to debug it themselves. But I feel like that is a completely different conversation.


There have been lawsuits. I'm unable to find the actual documents, but this article [0] reports on it and has some unbelievable quotes in it.

> Lin rejected Tesla's argument that LoSavio should have known earlier. "Although Tesla contends that it should have been obvious to LoSavio that his car needed lidar to self-drive and that his car did not have it, LoSavio plausibly alleges that he reasonably believed Tesla's claims that it could achieve self-driving with the car's existing hardware and that, if he diligently brought his car in for the required updates, the car would soon achieve the promised results," Lin wrote.

EDIT: found the court document; the quote is in the last paragraph on page 5: https://regmedia.co.uk/2024/05/16/teslaamendedcomplaint.pdf

[0] https://arstechnica.com/tech-policy/2024/05/tesla-must-face-...


That's a very strange defense: you should have known we were lying to you.


> Coca-Cola dismissed the allegations as "ridiculous," on the grounds that "no consumer could reasonably be misled into thinking Vitaminwater was a healthy beverage"

https://en.wikipedia.org/wiki/Energy_Brands#Legal_disputes


Doesn't seem as if the public broadcasters are doing such a fine job, if their viewers obviously don't even know the definition of a fascist.


I'm not sure it's worthwhile to have this discussion here, but please elaborate.


I would agree with that, but just very briefly: the main characteristic that defines fascism is a strong cult of personality. The AfD doesn't have that at all. They also don't display any kind of expansionist ambitions, but are rather nationalistic in an isolationist way, similar to most of the right-wing populists who have gained popularity throughout Europe. I have very little doubt that some of the hardliners in the party, and possibly even some of the more moderate people, are authoritarian, but fascists they are not.

