
I've been using ALBERT (the HuggingFace port) for a few weeks. It works fine on GPUs, and on CPUs its inference isn't noticeably slower than other large models.

It's worth noting that TPUs are available for free on Google Colab.



> It's worth noting that TPUs are available for free on Google Colab.

Yes, and you can also get a research grant, which gives you several TPUs for a month. But that doesn't mean you can easily deploy TPUs in your own infrastructure, unless you use Google Cloud and suck up the costs (which may not be possible in academia).



