I used to follow Fireship; I'm even connected with him on LinkedIn. Look where he is now. I also used to follow simonw, but I think he is going down the same spiral.
If a document is used to train an LLM, does that mean an AI checker would be more likely to give a false positive and flag the document as written by AI?
I guess it depends on the AI checker.
I would think that LLM, after generating the fragment "When in the course of..", would just be a bit more likely to generate "human" as the next word, yes?
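You can poke at that directly with any open model. A quick sketch using GPT-2 through the Hugging Face transformers library (GPT-2 is just a stand-in here; the exact numbers aren't the point, only that training text shifts the next-token distribution):

    # Minimal sketch: inspect next-token probabilities after a memorized prefix.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "When in the course of"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits

    # Distribution over the vocabulary for the token right after the prompt.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)

    # GPT-2 tokenizes a mid-sentence word with a leading space, hence " human".
    human_id = tokenizer.encode(" human")[0]
    print(f"P(' human' | {prompt!r}) = {next_token_probs[human_id].item():.4f}")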
Admittedly I don't have a tremendous amount of varied experience with AI-assisted coding, but I have used VSCode copilot quite a bit with Python and it has worked quite well for me. I am sometimes very surprised how well it figures out my intent.
I'm next planning on looking at Cursor and Claude Code, so the GH Copilot CLI preview caught my attention.
What exactly do you dislike about VSCode copilot compared to the competition?
It pales in comparison to Cursor, but Cursor itself is so buggy, so slow to ship features, and now pushing a “cloud offering”, that every day I think about an alternative. So neither is perfect, but Cursor is superior in every way in terms of actual workflow, versus “features” that tick some manager’s box but aren’t part of a congruent whole.
Nice UI btw, plus good use of AI for scoring/summarization.
I wrote something kinda similar that scrapes HN (using the official Firebase API) for particular keywords I'm interested in. It gathers all hyperlinks mentioned in comments and uses NLTK to summarize. Kind of a curated HN reading list.
I'm currently working on using an LLM for the summaries.
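The gist of it is roughly this (heavily simplified sketch; the keywords and the "summary" are placeholders, not my actual code):

    # Simplified sketch of the HN keyword scraper idea.
    import re
    import requests
    import nltk

    nltk.download("punkt", quiet=True)  # newer NLTK versions may also want "punkt_tab"

    HN_API = "https://hacker-news.firebaseio.com/v0"
    KEYWORDS = {"python", "llm"}  # example keywords only

    def get_item(item_id):
        """Fetch a single story/comment from the official HN Firebase API."""
        return requests.get(f"{HN_API}/item/{item_id}.json", timeout=10).json()

    def comment_data(story_id):
        """Collect hyperlinks and plain text from a story's top-level comments."""
        story = get_item(story_id)
        links, texts = [], []
        for kid in story.get("kids", []):
            comment = get_item(kid) or {}
            html = comment.get("text", "")
            links += re.findall(r'href="([^"]+)"', html)
            texts.append(re.sub(r"<[^>]+>", " ", html))  # crude tag strip
        return links, " ".join(texts)

    def naive_summary(text, n_sentences=3):
        """Stand-in summary: first few sentences via NLTK's sentence tokenizer."""
        return " ".join(nltk.sent_tokenize(text)[:n_sentences])

    # Scan current top stories for keyword matches; curate links plus a rough summary.
    top_ids = requests.get(f"{HN_API}/topstories.json", timeout=10).json()[:30]
    for sid in top_ids:
        story = get_item(sid)
        title = story.get("title") or ""
        if any(kw in title.lower() for kw in KEYWORDS):
            links, text = comment_data(sid)
            print(title)
            print("  links:", links)
            print("  summary:", naive_summary(text))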
Your project has given me a few ideas for mine. Thanks!
Glad to hear it. Your NLTK approach sounds interesting — would love to hear more about it. BTW, I’m planning to improve my project’s documentation. Funny enough, under “consideration” it flagged itself: "Documentation quality is not explicitly high. The effectiveness of the LLM’s scoring criteria is subjective and not deeply explained." Sadly, that’s true.
I had not thought to save that data since I had no use for it.
My scraper only gets the URL and then hits that URL to get all the other fields of data manually. Sorry.
I've followed SimonW for quite some time and bullshit/grifting is just NOT something he does.
On the contrary, I've learned a great deal from him and appreciate his contributions.