After experimenting for a while, we found another approach: C++11 variadic templates and lambda functions can be combined with template metaprogramming to unroll exactly the required number of nested for loops:
// Convert a tuple (multidimensional index) into a flat index, with the number
// of dimensions known at compile time so the loop can be unrolled.
template <unsigned int DIMENSION>
inline unsigned long tuple_to_index_fixed_dimension(const_tup_t tup, const_tup_t shape) {
  unsigned long res = 0;
  unsigned int k;
  for (k=0; k<DIMENSION-1; ++k) {
    res += tup[k];
    res *= shape[k+1];
  }
  res += tup[k];
  return res;
}
// Recursive helper: each level of recursion contributes one unrolled for loop
// over axis CURRENT; DIMENSION counts the axes that remain.
template <unsigned int DIMENSION, unsigned int CURRENT>
class ForEachFixedDimensionHelper {
public:
  template <typename FUNCTION, typename ...TENSORS>
  inline static void apply(tup_t counter, const_tup_t shape, FUNCTION function, TENSORS & ...args) {
    for (counter[CURRENT]=0; counter[CURRENT]<shape[CURRENT]; ++counter[CURRENT])
      ForEachFixedDimensionHelper<DIMENSION-1, CURRENT+1>::template apply<FUNCTION, TENSORS...>(counter, shape, function, args...);
  }
};

// Base case: the innermost loop applies the function to the corresponding
// element of every tensor argument.
template <unsigned int CURRENT>
class ForEachFixedDimensionHelper<1u, CURRENT> {
public:
  template <typename FUNCTION, typename ...TENSORS>
  inline static void apply(tup_t counter, const_tup_t shape, FUNCTION function, TENSORS & ...args) {
    for (counter[CURRENT]=0; counter[CURRENT]<shape[CURRENT]; ++counter[CURRENT])
      function(args[tuple_to_index_fixed_dimension<CURRENT+1>(counter, args.data_shape())]...); // tensor.data_shape() is an accessor returning the shape member
  }
};
// Public entry point: builds a zeroed counter tuple and launches the unrolled loops.
template <unsigned char DIMENSION>
class ForEachFixedDimension {
public:
  template <typename FUNCTION, typename ...TENSORS>
  inline static void apply(const_tup_t shape, FUNCTION function, TENSORS & ...args) {
    unsigned long counter[DIMENSION];
    memset(counter, 0, DIMENSION*sizeof(unsigned long));  // requires <cstring>
    ForEachFixedDimensionHelper<DIMENSION,0>::template apply<FUNCTION, TENSORS...>(counter, shape, function, args...);
  }
};
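As a quick illustration, this machinery can be invoked directly when the dimensionality is a compile-time constant. The sketch below assumes, as the listings suggest, that `tup_t`/`const_tup_t` are `unsigned long` pointer typedefs and that `Tensor<double>` provides a flat `operator[]` and a `data_shape()` accessor; the helper name `copy_3d` is made up for illustration:

// A sketch under the assumptions above, not the library's own API:
// copy y into x over every tuple valid for the given 3-D shape, using
// three for loops unrolled at compile time.
void copy_3d(Tensor<double> & x, const Tensor<double> & y, const_tup_t shape) {
  ForEachFixedDimension<3>::apply(shape,
      [](double & xV, double yV) { xV = yV; },  // x element by reference, y element by value
      x, y);
}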
Note also that the tuple values and the shape can safely be declared __restrict, meaning that they point to distinct memory locations, because they are built solely for the iteration and then freed. Values indexed through such pointers therefore do not need to be re-read from memory whenever another pointer is dereferenced and modified (the "pointer aliasing" problem). When ForEachFixedDimension::template apply is invoked, the contents of the tensor parameter pack ...TENSORS and the argument types of the function can be deduced at compile time.
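For example, the pointer typedefs could carry the qualifier directly (a sketch of assumed typedefs, not necessarily the library's exact declarations):

// The counter and shape buffers are built solely for the iteration, so they
// never alias each other or the tensor data; __restrict lets the compiler rely on that.
typedef unsigned long* __restrict tup_t;
typedef const unsigned long* __restrict const_tup_t;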
The required number of unrolled for loops can then be looked up at runtime:
typedef unsigned int TEMPLATE_SEARCH_INT_TYPE;

// Linear search over the template-integer range [MINIMUM, MAXIMUM]: the runtime
// value v selects which instantiation of WORKER to run.
// Requires <cassert> (for assert) and <utility> (for std::forward).
template <TEMPLATE_SEARCH_INT_TYPE MINIMUM, TEMPLATE_SEARCH_INT_TYPE MAXIMUM, template <TEMPLATE_SEARCH_INT_TYPE> class WORKER>
class LinearTemplateSearch {
public:
  template <typename...ARG_TYPES>
  inline static void apply(TEMPLATE_SEARCH_INT_TYPE v, ARG_TYPES && ... args) {
    if (v == MINIMUM)
      WORKER<MINIMUM>::apply(std::forward<ARG_TYPES>(args)...);
    else
      LinearTemplateSearch<MINIMUM+1, MAXIMUM, WORKER>::apply(v, std::forward<ARG_TYPES>(args)...);
  }
};

template <TEMPLATE_SEARCH_INT_TYPE MAXIMUM, template <TEMPLATE_SEARCH_INT_TYPE> class WORKER>
class LinearTemplateSearch<MAXIMUM, MAXIMUM, WORKER> {
public:
  template <typename...ARG_TYPES>
  inline static void apply(TEMPLATE_SEARCH_INT_TYPE v, ARG_TYPES && ... args) {
    assert(v == MAXIMUM);
    WORKER<MAXIMUM>::apply(std::forward<ARG_TYPES>(args)...);
  }
};
Note that even though template recursion is used, the dimensionality does not need to be known at compile time. Essentially, templates are used here as a form of just-in-time (JIT) compilation: strategies are pre-generated for all dimensions of interest, and the correct one is looked up at runtime.
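For instance, a front end in the spirit of the library's for_each_tensors / apply_tensors used below could tie the two pieces together. This is only a sketch: the ForEachWorker adapter, the name for_each_tensors_runtime, the dimension range 1..8, and the explicit dimension argument are my assumptions, not the library's API.

// A sketch: pre-instantiate the unrolled strategies for dimensions 1..8 and
// select the right one from the runtime value of `dimension`.
template <TEMPLATE_SEARCH_INT_TYPE DIMENSION>
class ForEachWorker {
public:
  template <typename FUNCTION, typename ...TENSORS>
  inline static void apply(const_tup_t shape, FUNCTION function, TENSORS & ...args) {
    ForEachFixedDimension<DIMENSION>::apply(shape, function, args...);
  }
};

template <typename FUNCTION, typename ...TENSORS>
inline void for_each_tensors_runtime(FUNCTION function, const_tup_t shape, unsigned int dimension, TENSORS & ...args) {
  LinearTemplateSearch<1u, 8u, ForEachWorker>::apply(dimension, shape, function, args...);
}

With that in place, a call such as for_each_tensors_runtime([](double & xV, double yV) { xV = yV; }, shape, dimension, x, y); runs the implementation whose loops were unrolled for exactly `dimension` axes, even though `dimension` is only known at runtime.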
These approaches were then benchmarked. In benchmark 1, data is copied from a tensor of shape (2^10, 2^9, 2^8) into a tensor of shape (2^9, 2^9, 2^5). In benchmark 2, the inner product between two tensors of shapes (2^10, 2^9, 2^8) and (2^9, 2^9, 2^5) is computed (visiting only the tuple indices shared by both). The template-recursion implementation was compared against several alternatives: tuple iteration; tuple iteration with the dimension known at compile time; integer reindexing; integer reindexing with axes restricted to powers of 2; numpy; hard-coded C-style for loops; vectorized Fortran code; and for loops in Go.
It turns out that template recursion is faster than tuple indexing and than the approach used by boost:
[Figure: benchmark runtimes for each implementation]
The gray numbers give the mean runtimes and the error bars give the minimum and maximum. Here is how benchmark 1 was implemented for each approach:
// Tuple iteration (DIMENSION must be a compile-time constant):
vector<unsigned long> t(DIMENSION);
std::fill(t.begin(), t.end(), 0ul);
unsigned long k;
for (k=0; k<x.flat.size(); advance_tuple_fixed_dimension<DIMENSION>(&t[0], &x.data_shape()[0]), ++k)
  x[k] = y[tuple_to_index_fixed_dimension<DIMENSION>(&t[0], &y.data_shape()[0])];
// boost:
x[ boost::indices[range(0, x.shape()[0])][range(0, x.shape()[1])][range(0, x.shape()[2])] ] =
  y[ boost::indices[range(0, x.shape()[0])][range(0, x.shape()[1])][range(0, x.shape()[2])] ];
! Fortran 95
x = y(1:2**5,1:2**9,1:2**9)
// Hard-coded for loops in C:
unsigned long k;
for (k=0; k<x.data_shape()[0]; ++k) {
  for (unsigned long j=0; j<x.data_shape()[1]; ++j) {
    unsigned long x_bias = (k*x.data_shape()[1] + j)*x.data_shape()[2];
    unsigned long y_bias = (k*y.data_shape()[1] + j)*y.data_shape()[2];
    for (unsigned long i=0; i<x.data_shape()[2]; ++i)
      x[x_bias + i] = y[y_bias + i];
  }
}
// Integer reindexing:
unsigned long k;
for (k=0; k<x.flat.size(); ++k)
  x[k] = y[reindex(k, &x.data_shape()[0], &y.data_shape()[0], DIMENSION)];
// Integer reindexing (axes are powers of 2):
unsigned long k;
for (k=0; k<x.flat.size(); ++k)
  x[k] = y[reindex_powers_of_2(k, &x_log_shape[0], &y_log_shape[0], DIMENSION)];
// Tuple iteration (DIMENSION unknown at compile time):
vector<unsigned long> t(DIMENSION);
std::fill(t.begin(), t.end(), 0ul);
unsigned long k;
for (k=0; k<x.flat_size(); advance_tuple(&t[0], &x.data_shape()[0], DIMENSION), ++k)
  x[k] = y[t];
# numpy (python):
x_sh = x.shape
x = np.array(y[:x_sh[0], :x_sh[1], :x_sh[2]])
// Go:
for i := 0; i < 1<<9; i++ {
  for j := 0; j < 1<<9; j++ {
    for k := 0; k < 1<<5; k++ {
      x[i][j][k] = y[i][j][k]
    }
  }
}
// TRIOT (DIMENSION unknown at compile time):
apply_tensors([](double & xV, double yV) {
    xV = yV;
  },
  x.data_shape(),
  x, y);
Surprisingly, integer reindexing (even when the axes are powers of 2) was substantially slower than building a tuple counter. And the template-recursion version was sometimes considerably faster, including about 30% faster than boost, even though boost::multi_array requires the dimension to be known at compile time.
Here is another example of how this nested-for-loop trick can be combined with template recursion:
double dot_product(const Tensor<double> & x, const Tensor<double> & y) {
  // Written here for homogeneous element types, but that is not necessary.
  double tot = 0.0;
  for_each_tensors([&tot](double xV, double yV) {
      tot += xV * yV;
    },
    x.data_shape(), /* iterate over the tuples valid for x.data_shape(); as written, this assumes x has the smaller shape */
    x, y);
  return tot;
}
Multidimensional convolution was also implemented via tuple iteration, and the template-recursion version was compared with numpy by convolving two matrices, each of shape (2^8, 2^3).
Tensor<double> triot_naive_convolve(const Tensor<double> & lhs, const Tensor<double> & rhs) {
  assert(lhs.dimension() == rhs.dimension());
  Tensor<double> result(lhs.data_shape() + rhs.data_shape() - 1ul);
  result.flat().fill(0.0);
  Vector<unsigned long> counter_result(result.dimension());
  // enumerate_for_each_tensors also hands the current counter tuple and the
  // dimension to the function, in addition to the element values.
  enumerate_for_each_tensors([&counter_result, &result, &rhs](const_tup_t counter_lhs, const unsigned int dim_lhs, double lhs_val) {
      enumerate_for_each_tensors([&counter_result, &result, &rhs, &counter_lhs, &lhs_val](const_tup_t counter_rhs, const unsigned int dim_rhs, double rhs_val) {
          for (unsigned int i=0; i<dim_rhs; ++i)
            counter_result[i] = counter_lhs[i] + counter_rhs[i];
          unsigned long result_flat = tuple_to_index(counter_result, result.data_shape(), dim_rhs);
          result.flat()[result_flat] += lhs_val * rhs_val;
        },
        rhs.data_shape(),
        rhs);
    },
    lhs.data_shape(), lhs);
  return result;
}
The benchmarks were compiled with optimization (-std=c++11 -Ofast -march=native -mtune=native -fomit-frame-pointer) and run on a 2.0 GHz Intel Core i7. All Fortran implementations use the axes in reverse order and therefore access the data in a cache-friendly way, since Fortran stores arrays in column-major order. Details and source code (a simple multidimensional array library that does not require the dimension to be known at compile time) can be found in the accompanying short journal article.