
I've been investigating a performance difference in my application between AMD and NVIDIA cards related to sparse textures. I've recently discovered that on an AMD GPU, creating sparse textures appears to incur a massive cost in CPU memory.

On a Windows 10 machine with an R9 390 (8 GB of GPU memory) and 16 GB of CPU memory, I'm running the following code, attached to a timer set to fire 10 times a second:

static std::vector<GLuint> _textures;
static const size_t MAX_TEXTURES = 1000;
static const glm::uvec3 TEXTURE_SIZE(512, 512, 1);
if (_textures.size() < MAX_TEXTURES) {
    GLuint texture = 0;
    uint16_t mips = evalNumMips(TEXTURE_SIZE);
    glCreateTextures(GL_TEXTURE_2D, 1, &texture);
    _textures.push_back(texture);
    // Mark the texture as sparse before allocating storage
    glTextureParameteri(texture, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
    glTextureParameteri(texture, GL_VIRTUAL_PAGE_SIZE_INDEX_ARB, 0);
    // This should only reserve virtual address space, not physical pages
    glTextureStorage2D(texture, mips, GL_RGBA8, TEXTURE_SIZE.x, TEXTURE_SIZE.y);
    GLuint maxSparseLevel;
    glGetTextureParameterIuiv(texture, GL_NUM_SPARSE_LEVELS_ARB, &maxSparseLevel);
    // Commit physical memory for every sparse mip level
    for (GLuint mip = 0; mip < maxSparseLevel; ++mip) {
        auto dims = evalMipDimensions(TEXTURE_SIZE, (uint16_t)mip);
        glTexturePageCommitmentEXT(texture, mip, 0, 0, 0, dims.x, dims.y, dims.z, GL_TRUE);
    }
}

Where...

static const double LOG_2 = log(2.0);

uint16_t evalNumMips(const glm::uvec3& size) {
    double dim = glm::compMax(size);
    double val = log(dim) / LOG_2;
    return 1 + (uint16_t)val;
}

glm::uvec3 evalMipDimensions(const glm::uvec3& size, uint16_t mip) {
    auto result = size;
    result >>= mip;
    return glm::max(result, glm::uvec3(1));
}

This should, as far as I know, create a texture, reserve virtual address space, and then commit physical memory to back it. It should consume GPU memory at a rate of roughly 10 MB per second. On an NVIDIA card, this code behaves as expected. On an AMD card, however, it starts consuming CPU physical memory at a rate of about 1 GB per second.

This can be seen in Process Explorer, where both the system commit size and physical memory usage immediately start to skyrocket as soon as I start the application.


Why is the AMD driver suddenly consuming massive amounts of CPU memory whenever I create a sparse texture?
