
You mentioned it took 100 GPU hours; what GPU did you train on?


Mostly 1xA10 (though I switched to 1xGH200 briefly at the end; Lambda has a sale going). The network used in the post is very tiny, but I had to train for a really long time with a large batch size to get somewhat-stable results.




