I am testing the OpenJDK Panama Vector API (jdk.incubator.vector) on an Amazon c5.4xlarge instance. In every case, the simple unrolled scalar dot product outperforms the Vector API code.
My question is: why am I not getting the performance gains shown in Richard Startin's blog? Intel engineers also discussed the same improvements at this meetup. What is missing?
JMH benchmark results:
Benchmark                                    (size)   Mode  Cnt     Score    Error  Units
FloatVector256DotProduct.unrolled           1048576  thrpt   25  2440.726 ± 21.372  ops/s
FloatVector256DotProduct.vanilla            1048576  thrpt   25   807.721 ±  0.084  ops/s
FloatVector256DotProduct.vector             1048576  thrpt   25   909.977 ±  6.542  ops/s
FloatVector256DotProduct.vectorUnrolled     1048576  thrpt   25   887.422 ±  5.557  ops/s
FloatVector256DotProduct.vectorfma          1048576  thrpt   25   916.955 ±  4.652  ops/s
FloatVector256DotProduct.vectorfmaUnrolled  1048576  thrpt   25   877.569 ±  1.451  ops/s
JavaDocExample.simpleMultiply               1048576  thrpt   25  2096.782 ±  6.778  ops/s
JavaDocExample.simpleMultiplyUnrolled       1048576  thrpt   25  1627.320 ±  6.824  ops/s
JavaDocExample.vectorMultiply               1048576  thrpt   25  2102.654 ± 11.637  ops/s
AWS instance type: c5.4xlarge
CPU details:
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz
Stepping: 4
CPU MHz: 3404.362
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 25344K
NUMA node0 CPU(s): 0-15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Code snippets (see this GitHub repository for the full code):
JavaDocExample: this is the example shared in the Javadoc of OpenJDK's vectorIntrinsics branch.
@Benchmark
public void simpleMultiplyUnrolled() {
    for (int i = 0; i < size; i += 8) {
        c[i] = a[i] * b[i];
        c[i + 1] = a[i + 1] * b[i + 1];
        c[i + 2] = a[i + 2] * b[i + 2];
        c[i + 3] = a[i + 3] * b[i + 3];
        c[i + 4] = a[i + 4] * b[i + 4];
        c[i + 5] = a[i + 5] * b[i + 5];
        c[i + 6] = a[i + 6] * b[i + 6];
        c[i + 7] = a[i + 7] * b[i + 7];
    }
}

@Benchmark
public void simpleMultiply() {
    for (int i = 0; i < size; i++) {
        c[i] = a[i] * b[i];
    }
}

@Benchmark
public void vectorMultiply() {
    int i = 0;
    // It is assumed array arguments are of the same size
    for (; i < SPECIES.loopBound(a.length); i += SPECIES.length()) {
        FloatVector va = FloatVector.fromArray(SPECIES, a, i);
        FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
        FloatVector vc = va.mul(vb);
        vc.intoArray(c, i);
    }
    for (; i < a.length; i++) {
        c[i] = a[i] * b[i];
    }
}
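The JavaDocExample snippets above use fields declared elsewhere in the class. Roughly, the assumed state looks like this (a sketch only; the actual declarations, including the @Setup method, are in the linked repository):

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

// Assumed state for JavaDocExample (a sketch; names follow the snippets above).
static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;  // assumption; the repository may pin SPECIES_256 instead
int size;          // JMH @Param, 1048576 in the results above
float[] a, b, c;   // allocated and filled with test data in a JMH @Setup method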
FloatVector256DotProduct: this code is shamelessly copied from Richard Startin's blog. Thanks to Richard for the insightful posts.
@Benchmark
public float vectorfma() {
    var sum = FloatVector.zero(F256);
    for (int i = 0; i < size; i += F256.length()) {
        var l = FloatVector.fromArray(F256, left, i);
        var r = FloatVector.fromArray(F256, right, i);
        sum = l.fma(r, sum);
    }
    return sum.reduceLanes(ADD);
}

@Benchmark
public float vectorfmaUnrolled() {
    var sum1 = FloatVector.zero(F256);
    var sum2 = FloatVector.zero(F256);
    var sum3 = FloatVector.zero(F256);
    var sum4 = FloatVector.zero(F256);
    int width = F256.length();
    for (int i = 0; i < size; i += width * 4) {
        sum1 = FloatVector.fromArray(F256, left, i).fma(FloatVector.fromArray(F256, right, i), sum1);
        sum2 = FloatVector.fromArray(F256, left, i + width).fma(FloatVector.fromArray(F256, right, i + width), sum2);
        sum3 = FloatVector.fromArray(F256, left, i + width * 2).fma(FloatVector.fromArray(F256, right, i + width * 2), sum3);
        sum4 = FloatVector.fromArray(F256, left, i + width * 3).fma(FloatVector.fromArray(F256, right, i + width * 3), sum4);
    }
    return sum1.add(sum2).add(sum3).add(sum4).reduceLanes(ADD);
}

@Benchmark
public float vector() {
    var sum = FloatVector.zero(F256);
    for (int i = 0; i < size; i += F256.length()) {
        var l = FloatVector.fromArray(F256, left, i);
        var r = FloatVector.fromArray(F256, right, i);
        sum = l.mul(r).add(sum);
    }
    return sum.reduceLanes(ADD);
}

@Benchmark
public float vectorUnrolled() {
    var sum1 = FloatVector.zero(F256);
    var sum2 = FloatVector.zero(F256);
    var sum3 = FloatVector.zero(F256);
    var sum4 = FloatVector.zero(F256);
    int width = F256.length();
    for (int i = 0; i < size; i += width * 4) {
        sum1 = FloatVector.fromArray(F256, left, i).mul(FloatVector.fromArray(F256, right, i)).add(sum1);
        sum2 = FloatVector.fromArray(F256, left, i + width).mul(FloatVector.fromArray(F256, right, i + width)).add(sum2);
        sum3 = FloatVector.fromArray(F256, left, i + width * 2).mul(FloatVector.fromArray(F256, right, i + width * 2)).add(sum3);
        sum4 = FloatVector.fromArray(F256, left, i + width * 3).mul(FloatVector.fromArray(F256, right, i + width * 3)).add(sum4);
    }
    return sum1.add(sum2).add(sum3).add(sum4).reduceLanes(ADD);
}

@Benchmark
public float unrolled() {
    float s0 = 0f;
    float s1 = 0f;
    float s2 = 0f;
    float s3 = 0f;
    float s4 = 0f;
    float s5 = 0f;
    float s6 = 0f;
    float s7 = 0f;
    for (int i = 0; i < size; i += 8) {
        s0 = Math.fma(left[i + 0], right[i + 0], s0);
        s1 = Math.fma(left[i + 1], right[i + 1], s1);
        s2 = Math.fma(left[i + 2], right[i + 2], s2);
        s3 = Math.fma(left[i + 3], right[i + 3], s3);
        s4 = Math.fma(left[i + 4], right[i + 4], s4);
        s5 = Math.fma(left[i + 5], right[i + 5], s5);
        s6 = Math.fma(left[i + 6], right[i + 6], s6);
        s7 = Math.fma(left[i + 7], right[i + 7], s7);
    }
    return s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
}

@Benchmark
public float vanilla() {
    float sum = 0f;
    for (int i = 0; i < size; ++i) {
        sum = Math.fma(left[i], right[i], sum);
    }
    return sum;
}
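Likewise, the FloatVector256DotProduct snippets assume state along these lines (again a sketch; the exact declarations are in the linked repository):

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;
import static jdk.incubator.vector.VectorOperators.ADD;

// Assumed state for FloatVector256DotProduct (a sketch; names follow the snippets above).
static final VectorSpecies<Float> F256 = FloatVector.SPECIES_256;
int size;              // JMH @Param, 1048576 in the results above
float[] left, right;   // allocated and filled with test data in a JMH @Setup method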
As shown in this SO question, the process for building and using the OpenJDK Panama dev vectorIntrinsics branch is:
hg clone http://hg.openjdk.java.net/panama/dev/
cd dev/
hg checkout vectorIntrinsics
hg branch vectorIntrinsics
bash configure
make images
Things I have checked to confirm why it should work:
- lscpu shows the various AVX flags.
- I chose an HVM AMI, which should support the AVX instruction set. https://aws.amazon.com/ec2/instance-types/ says: "† AVX, AVX2, and Enhanced Networking are only available on instances launched with HVM AMIs."
- I can compile the vector code, which means I am using the right branch of OpenJDK. I run my code with the --add-modules=jdk.incubator.vector VM argument. I also added a few other VM arguments mentioned in [this intel blog](https://software.intel.com/en-us/articles/vector-api-developer-program-for-java), such as -XX:TypeProfileLevel=121 (see the first sketch after this list for how they are attached to the JMH forks).
- I checked the compiled assembly and it does contain vmulps instructions. They are hard to spot, though, because the multiplication happens inside the Vector API's mul/fma methods that my code calls, not inline at the call site.
- I ran further tests with different sizes such as 64, 128, 256, and 512, and also with FloatVector.SPECIES_PREFERRED (a sketch of that variant follows this list). In all cases the Vector API code is much slower than the simple multiplication code with unrolling.
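For reference, this is roughly how those VM arguments can be attached to the JMH forks (a sketch using JMH's @Fork annotation; the actual run configuration in the repository may differ, e.g. the flags may instead be passed on the command line):

import org.openjdk.jmh.annotations.Fork;

// Sketch: append the incubator module and the flags from the Intel article to the forked benchmark JVMs.
@Fork(value = 1, jvmArgsAppend = {
        "--add-modules=jdk.incubator.vector",
        "-XX:TypeProfileLevel=121"
})
public class FloatVector256DotProduct {
    // @Benchmark methods as shown above ...
}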
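And this is roughly what the SPECIES_PREFERRED variant I tried looks like (a minimal sketch modelled on the vectorfma benchmark above; the method name vectorfmaPreferred is mine, and left/right/size are the benchmark fields sketched earlier):

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;
import org.openjdk.jmh.annotations.Benchmark;
import static jdk.incubator.vector.VectorOperators.ADD;

// Sketch: the same FMA dot product, but using the platform's preferred species instead of the fixed 256-bit one.
static final VectorSpecies<Float> PREFERRED = FloatVector.SPECIES_PREFERRED;

@Benchmark
public float vectorfmaPreferred() {
    var sum = FloatVector.zero(PREFERRED);
    int upperBound = PREFERRED.loopBound(size);
    int i = 0;
    for (; i < upperBound; i += PREFERRED.length()) {
        var l = FloatVector.fromArray(PREFERRED, left, i);
        var r = FloatVector.fromArray(PREFERRED, right, i);
        sum = l.fma(r, sum);
    }
    float total = sum.reduceLanes(ADD);
    for (; i < size; i++) {   // scalar tail in case size is not a multiple of the lane count
        total = Math.fma(left[i], right[i], total);
    }
    return total;
}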