This is really big news. Of course, how it works in real game scenarios remains to be seen, but it looks promising.
TLDR: AMD researchers have developed a new procedural generation method for drawing trees that runs on the GPU rather than the CPU. Because the geometry is created on the fly, those assets never need to be stored in VRAM.
In this tech demo, a complex scene of trees and bushes with leaves blowing in the wind would require 38GB of VRAM when rendered the normal way, with pre-loaded assets. With their new procedural generation technique, it required only 52KB of VRAM! That’s not a typo!
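To make the idea concrete, here is a rough CPU-side sketch of my own (not AMD's actual implementation; their demo does the expansion on the GPU with work graphs and mesh nodes). The point is that the only thing that has to stay resident is a tiny parameter block, while the heavy vertex data is generated on demand and immediately thrown away:

```cpp
// Conceptual sketch: why procedurally generated trees need almost no memory.
// A tree is described by a handful of parameters; the full vertex data is
// expanded on demand and discarded after drawing. (Hypothetical names and
// structure; AMD's demo performs this expansion on the GPU.)
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };

// Everything the renderer has to keep resident per tree: a few bytes.
struct TreeParams {
    uint32_t seed;          // drives the random branching pattern
    uint8_t  depth;         // recursion depth of the branch structure
    uint8_t  branches;      // child branches per node
    float    trunkLength;   // metres
    float    taper;         // how much each generation shrinks
};

// Expand one tree into vertices "on the fly". In the real technique this
// expansion happens in GPU mesh nodes, so the geometry never has to sit
// in VRAM as a pre-loaded asset.
static void grow(std::mt19937& rng, Vec3 base, float len, int depth,
                 int branches, float taper, std::vector<Vec3>& out) {
    std::uniform_real_distribution<float> jitter(-0.3f, 0.3f);
    Vec3 tip{base.x + jitter(rng) * len, base.y + len, base.z + jitter(rng) * len};
    out.push_back(base);
    out.push_back(tip);                     // one branch segment (kept simple)
    if (depth == 0) return;
    for (int i = 0; i < branches; ++i)
        grow(rng, tip, len * taper, depth - 1, branches, taper, out);
}

int main() {
    TreeParams p{42u, 8, 3, 2.0f, 0.7f};

    std::mt19937 rng(p.seed);
    std::vector<Vec3> vertices;
    grow(rng, {0, 0, 0}, p.trunkLength, p.depth, p.branches, p.taper, vertices);

    std::printf("resident parameters : %zu bytes\n", sizeof p);
    std::printf("expanded geometry   : %zu bytes (generated, then discarded)\n",
                vertices.size() * sizeof(Vec3));
}
```

Running this prints a parameter block of a few dozen bytes against a couple of hundred kilobytes of generated geometry; scale that gap up to a whole forest of detailed, leaf-covered trees and the 38GB-versus-52KB figure starts to make sense.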
www.tomshardware.com
AMD researchers reduce graphics card VRAM capacity of 3D-rendered trees from 38GB to just 52 KB with work graphs and mesh nodes — shifting CPU work to the GPU yields tremendous results
What would take 38GB of VRAM to hold now takes just a measly 52KB.
The actual scene is here:
So far I haven’t been able to determine whether this uses the on-board AI accelerators on the GPU, which would mean it’s only possible on the RX 9000 series; I suspect that may be the case.
But this could bring back the possibility of a modular PCIe hardware AI accelerator card, just like we had in the early days of PhysX. This is where the motherboard’s PCIe lane allocation becomes important, and why we’ll often strongly recommend X870E over X870. AI is still in its infancy, but the possible applications are becoming clearer, so it’s worth planning for future expandability.