I've been playing with something similar, but far less thought out than what you have.

I have a script for it, but am basically waiting until I can run a powerful enough LLM locally to chug through it with good results.

Basically like the knowledge tree you mention towards the end, but the idea is to build a knowledge DAG by asking an LLM "does card (A) imply knowledge of card (B), or vice versa?". Then I take that DAG and use it to schedule the cards in a breadth-first ordering, roughly as sketched below. So, when reviewing a new deck with a lot of new cards, I'll be sure to get questions like "what was the primary cause of the Civil War?" before questions like "who was the Confederate general who fought at Bull Run?"
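
The core of it is roughly the following Python sketch. `ask_llm` is just a stand-in for whatever local model call ends up being good enough, and the prompt wording is made up; the ordering itself is plain Kahn's algorithm over the prerequisite edges.

    from collections import defaultdict, deque
    from itertools import permutations

    def build_dag(cards, ask_llm):
        # edges[a] = set of cards whose answers presuppose knowing card a
        edges = defaultdict(set)
        indegree = {c: 0 for c in cards}
        # permutations gives both (a, b) and (b, a), so each direction is asked once
        for a, b in permutations(cards, 2):
            if ask_llm(f"Does answering {b!r} require the knowledge tested by {a!r}?"):
                edges[a].add(b)
                indegree[b] += 1
        return edges, indegree

    def breadth_first_order(cards, edges, indegree):
        # Kahn's algorithm: prerequisite-free (broad) cards surface first,
        # detail cards only after the cards they depend on
        queue = deque(c for c in cards if indegree[c] == 0)
        order = []
        while queue:
            card = queue.popleft()
            order.append(card)
            for nxt in edges[card]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    queue.append(nxt)
        # if the LLM's answers produced a cycle, append the leftovers at the end
        order += [c for c in cards if c not in order]
        return order

The pairwise querying is O(n^2) model calls per deck, which is the main reason I'm waiting for a cheap local model before running it over anything large.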



I'd love to see it.

What I like about your approach is that it circumvents the data problem. You don't need a dataset with review histories and flashcard content in order to train a model.


Andy Matuschak also tested this idea. You can read his notes here:

GPT-4 can probably estimate whether two flashcards are functionally equivalent

https://notes.andymatuschak.org/zJ7PMGzjcgBUoPjLUHBF9jn

GPT-4 can probably estimate whether one prompt will spoil retrieval of another

https://notes.andymatuschak.org/zK9Y15pCnRMLoxUahLCzdyc



