
I need a data structure for storing float values on a uniformly sampled 3D mesh:

x = x0 + ix*dx where 0 <= ix < nx

y = y0 + iy*dy where 0 <= iy < ny

z = z0 + iz*dz where 0 <= iz < nz

Up to now I have used my Array class:

Array3D<float> A(nx, ny, nz);
A(0,0,0) = 0.0f; // ix = iy = iz = 0

Internally it stores the float values as a 1D array with nx * ny * nz elements.
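Roughly like this sketch (the row-major layout is an assumption on my part; the real class may differ in details):

#include <cstddef>

template <typename T>
struct Array3D {
    Array3D(int nx, int ny, int nz)
        : nx(nx), ny(ny), nz(nz), data(new T[std::size_t(nx)*ny*nz]) {}
    ~Array3D() { delete[] data; }

    // row-major mapping of (ix,iy,iz) onto the 1D storage
    T& operator()(int ix, int iy, int iz) {
        return data[(std::size_t(ix)*ny + iy)*nz + iz];
    }
    const T& operator()(int ix, int iy, int iz) const {
        return data[(std::size_t(ix)*ny + iy)*nz + iz];
    }

    int nx, ny, nz;
    T* data;
};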

However, now I need to represent a mesh with more values than I have RAM, e.g. nx = ny = nz = 2000 (that is 8·10^9 samples, i.e. 32 GB of raw floats).

I think many neighbouring nodes in such a mesh will have similar values, so I was wondering whether there is some simple way to "coarsen" the mesh adaptively.

For instance, if the 8 (ix,iy,iz) nodes of a cell in this mesh have values that are less than 5% apart, they are "removed" and replaced by a single value: the mean of the 8 values. A sketch of that test is shown below.
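A minimal sketch of the per-cell merge test, assuming Array3D has a const accessor ("less than 5% apart" is measured here as the spread relative to the cell mean, which is my interpretation):

#include <algorithm>
#include <cmath>
#include <limits>

// Returns true if the 8 corner values of the cell whose lowest corner is
// (ix,iy,iz) differ by less than tol (0.05 for the 5% rule) relative to
// their mean; the mean is returned so the cell can be replaced by it.
bool canCoarsenCell(const Array3D<float>& A, int ix, int iy, int iz,
                    float tol, float& mean)
{
    float vmin = std::numeric_limits<float>::max();
    float vmax = std::numeric_limits<float>::lowest();
    float sum = 0.0f;
    for (int di = 0; di <= 1; ++di)
        for (int dj = 0; dj <= 1; ++dj)
            for (int dk = 0; dk <= 1; ++dk) {
                float v = A(ix + di, iy + dj, iz + dk);
                vmin = std::min(vmin, v);
                vmax = std::max(vmax, v);
                sum += v;
            }
    mean = sum / 8.0f;
    return (vmax - vmin) < tol * std::fabs(mean);
}

Applied recursively over 2x2x2 groups of already-coarsened cells, this is essentially bottom-up octree construction.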

How could I implement such a data structure in a simple and efficient way?

EDIT: thanks to Ante for suggesting lossy compression. I think it could work the following way:

#define BLOCK_SIZE 64

// Placeholders for the codec that is still to be chosen: compress returns
// a newly allocated buffer and reports its size through csize.
float* compress(const float values[BLOCK_SIZE][BLOCK_SIZE][BLOCK_SIZE], unsigned int& csize);
void decompress(const float* data, unsigned int csize,
                float values[BLOCK_SIZE][BLOCK_SIZE][BLOCK_SIZE]);

struct CompressedArray3D {
    CompressedArray3D(int ni, int nj, int nk) : ni(ni), nj(nj), nk(nk) {
        // round up so partial blocks at the upper edges are covered
        NI = (ni + BLOCK_SIZE - 1) / BLOCK_SIZE;
        NJ = (nj + BLOCK_SIZE - 1) / BLOCK_SIZE;
        NK = (nk + BLOCK_SIZE - 1) / BLOCK_SIZE;

        blocks = new float*[NI*NJ*NK];
        compressedSize = new unsigned int[NI*NJ*NK];
    }

    void setBlock(int I, int J, int K, const float values[BLOCK_SIZE][BLOCK_SIZE][BLOCK_SIZE]) {
        unsigned int csize;
        blocks[I*NJ*NK + J*NK + K] = compress(values, csize);
        compressedSize[I*NJ*NK + J*NK + K] = csize;
    }

    float getValue(int i, int j, int k) {
        // which block, and where inside it
        int I = i/BLOCK_SIZE;
        int J = j/BLOCK_SIZE;
        int K = k/BLOCK_SIZE;

        int ii = i - I*BLOCK_SIZE;
        int jj = j - J*BLOCK_SIZE;
        int kk = k - K*BLOCK_SIZE;

        float *compressedBlock = blocks[I*NJ*NK + J*NK + K];
        unsigned int csize = compressedSize[I*NJ*NK + J*NK + K];

        // decompresses the whole 1 MiB block to read a single sample;
        // a small cache of recently used blocks would amortize this
        float values[BLOCK_SIZE][BLOCK_SIZE][BLOCK_SIZE];
        decompress(compressedBlock, csize, values);
        return values[ii][jj][kk];
    }

    // number of blocks along each axis:
    int NI, NJ, NK;

    // number of samples along each axis:
    int ni, nj, nk;

    float** blocks;               // one compressed buffer per block
    unsigned int* compressedSize; // size in bytes of each buffer
};

For this to be useful I need a lossy compression that is:

  • extremely fast, also on small datasets (e.g. 64x64x64)
  • compresses quite hard (> 3x); I don't mind if it loses quite a bit of information.

Any good candidates?
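For comparison, the simplest baseline I can think of is per-block linear quantization to 8 bits per sample: a fixed ~4x, trivially fast, and obviously lossy. A sketch (quantizeBlock/dequantizeBlock are hypothetical stand-ins for the compress/decompress used above):

#include <algorithm>
#include <cstring>

// Hypothetical baseline codec: store the block's min/max as a header and
// quantize each of the n samples to 8 bits.
// Output layout: [float lo][float hi][n bytes]; ratio is just under 4x.
unsigned int quantizeBlock(const float* src, int n, unsigned char* dst) {
    float lo = src[0], hi = src[0];
    for (int i = 1; i < n; ++i) {
        lo = std::min(lo, src[i]);
        hi = std::max(hi, src[i]);
    }
    std::memcpy(dst, &lo, sizeof(float));
    std::memcpy(dst + sizeof(float), &hi, sizeof(float));
    float scale = (hi > lo) ? 255.0f / (hi - lo) : 0.0f;
    for (int i = 0; i < n; ++i)
        dst[2*sizeof(float) + i] = (unsigned char)((src[i] - lo)*scale + 0.5f);
    return (unsigned int)(2*sizeof(float) + n); // compressed size in bytes
}

void dequantizeBlock(const unsigned char* src, int n, float* dst) {
    float lo, hi;
    std::memcpy(&lo, src, sizeof(float));
    std::memcpy(&hi, src + sizeof(float), sizeof(float));
    float step = (hi - lo) / 255.0f;
    for (int i = 0; i < n; ++i)
        dst[i] = lo + step * src[2*sizeof(float) + i];
}

The error per sample is bounded by (hi - lo)/255 within each 64x64x64 block, which may or may not be acceptable depending on the data's dynamic range.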


2 Answers


It sounds like you are looking for an LOD (level of detail) adaptive mesh. This is a recurring theme in video games and terrain simulation.

For terrain, see here: http://vterrain.org/LOD/Papers/ - look for the ROAM video, which adapts not only to distance but also to viewing direction.

For non-terrain entities, there is a large body of work (here is one example: Generic Adaptive Mesh Refinement).

answered 2013-04-07T18:14:47.223

I suggest using OctoMap to handle large 3D data, and extending it as shown here to handle geometric properties.
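A minimal usage sketch, assuming the standard OctoMap OcTree API (to store your float samples instead of occupancy you would extend the node type, e.g. along the lines of octomap's ColorOcTree):

#include <cstdio>
#include <octomap/octomap.h>

int main() {
    // Octree with 0.1 m leaf resolution; regions whose children are all
    // identical are pruned into one coarse node, which bounds memory.
    octomap::OcTree tree(0.1);

    // Mark the voxel containing this point as occupied.
    tree.updateNode(octomap::point3d(1.0f, 2.0f, 3.0f), true);

    // Query: search() returns NULL for unknown space.
    octomap::OcTreeNode* node = tree.search(1.0, 2.0, 3.0);
    if (node)
        std::printf("occupancy: %f\n", node->getOccupancy());

    tree.writeBinary("grid.bt"); // compact binary serialization
    return 0;
}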

answered 2013-09-04T16:10:15.293