You could also wait for cheaper used 3090s, but I doubt people will be selling many of them. We're still dealing with post-shortage effects: many people went without these GPUs for a long time, and many will keep them since they probably paid well over MSRP. If anyone does sell, it won't be cheap; there will certainly be people hoping to unload used GPUs at MSRP (good luck to them, only idiots would buy a used GPU at MSRP), just like when the 3000 series came out and people were selling 2080 Tis. The difference is that the 3090 is still a really good GPU, while the 2000 series was garbage: neither the RT cores nor the tensor cores were properly developed, DLSS didn't work well, and RTX was too taxing. If Nvidia releases a 4060 with 12GB that manages to be cheaper, it will be the new sweet spot, but we're all hoping for something slightly more expensive (3070/3080 price) with 16GB of VRAM.
If you don't have a 3000 series card, don't buy one; go straight for whatever 4000 series card has 12GB or 16GB (still a nice upgrade from an 8GB 1080, and the speed increase alone will be worth it). Whatever AMD releases might have the same limitations as Nvidia's tensor core implementation, so there's a good chance those extra cores won't be usable for DFL.
As far as Intel goes, it should run on the DX12 build like AMD; if it surpasses AMD in DX12 performance, it might also be faster in DFL.
Currently the sweet spot for price-to-performance in DFL is the 3060 with 12GB. Above that you've got GPUs that are faster but have less VRAM (so not as capable), and then the 3090 and 3090 Ti with 24GB.
DFL was never able to utilize tensor cores due to the low precision of the math they do (fp16), which isn't precise enough for DFL. There was an fp16 option added to DFL that would use them; it was added twice, removed twice, and never worked (models would collapse, training wasn't stable). It's not likely to work now unless the hardware offers higher precision, or iperov rewrites how DFL talks to the GPUs: perhaps still not full tensor core utilization, but more low-level access for better performance and memory management (less VRAM overhead and usage, and thus higher model parameters possible).
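To see why fp16 can destabilize training, here's a minimal NumPy sketch (hypothetical values, not DFL's actual code): fp16 has only about 3 decimal digits of precision, so a small weight update can round away entirely, while the same update survives in fp32.

```python
import numpy as np

# Illustration only (not DFL code): fp16's machine epsilon is ~0.001,
# so an update smaller than half of that vanishes when added to 1.0.
weight = np.float16(1.0)
update = np.float16(1e-4)  # hypothetical small gradient step

# In fp16 the update is lost: 1.0 + 0.0001 rounds back to exactly 1.0
print(weight + update == weight)  # True -> the weight never moves

# In fp32 the same update is representable and the weight changes
w32 = np.float32(1.0)
print(w32 + np.float32(1e-4) == w32)  # False -> training can progress
```

When enough updates vanish (or overflow, since fp16 also tops out around 65504), the model stops learning or collapses, which matches the instability described above.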