
Is it possible to perform a Hadamard product with Apple's Metal Performance Shaders? I can see that ordinary matrix multiplication can be done with this, but I'm specifically looking for an element-wise multiplication, or a clever way to construct one. (For example, could an MPSMatrix be converted to an MPSVector and the product then performed with vectors?)

Update: I appreciate the suggestion to use a shader! I'm working on an implementation and it looks promising! I'll post the solution once I have it working.


2 Answers


OK, answering my own question here based on a commenter's suggestion to try writing my own shader!

Here is the shader code:

#include <metal_stdlib>
using namespace metal;

/*
 hadamardProduct:
 Perform an element-wise multiplication (hadamard product) of the two input matrices A and B, store the result in C
 */
kernel void hadamardProductKernel(
    texture_buffer<float, access::read>  A [[texture(0)]],
    texture_buffer<float, access::read>  B [[texture(1)]],
    texture_buffer<float, access::write> C [[texture(2)]],
    uint gid [[thread_position_in_grid]]) {
    // C[i,j] = A[i,j] * B[i,j]
    C.write(A.read(gid) * B.read(gid), gid);
}

And the Swift that runs the shader on two 4x4 matrices:

import Foundation
import Metal
import MetalKit

guard
    let gpu = MTLCreateSystemDefaultDevice(),
    let commandQueue = gpu.makeCommandQueue(),
    let commandBuffer = commandQueue.makeCommandBuffer(),
    let defaultLibrary = gpu.makeDefaultLibrary(),
    let kernelFunction = defaultLibrary.makeFunction(name: "hadamardProductKernel")
else {exit(1)}

// Create the matrices to multiply (as row-major matrices)
var A:[Float] = [2,0,0,0,
                 0,2,0,0,
                 0,0,2,0,
                 0,0,0,2]

var B:[Float] = [1,0,0,0,
                 0,2,0,0,
                 0,0,3,0,
                 0,0,0,4]

let A_buffer = gpu.makeTexture(descriptor: MTLTextureDescriptor.textureBufferDescriptor(with: .r32Float,
                                                                                         width: 16,
                                                                                         resourceOptions: .storageModeManaged,
                                                                                         usage: .shaderRead))
let B_buffer = gpu.makeTexture(descriptor: MTLTextureDescriptor.textureBufferDescriptor(with: .r32Float,
                                                                                         width: 16,
                                                                                         resourceOptions: .storageModeManaged,
                                                                                         usage: .shaderRead))
let C_buffer = gpu.makeTexture(descriptor: MTLTextureDescriptor.textureBufferDescriptor(with: .r32Float,
                                                                                         width: 16,
                                                                                         resourceOptions: .storageModeManaged,
                                                                                         usage: .shaderWrite))
// 16 floats * 4 bytes each = 64 bytes per row
A_buffer?.replace(region: MTLRegionMake1D(0, 16),
                  mipmapLevel: 0,
                  withBytes: A,
                  bytesPerRow: 64)
B_buffer?.replace(region: MTLRegionMake1D(0, 16),
                  mipmapLevel: 0,
                  withBytes: B,
                  bytesPerRow: 64)

let computePipelineState = try gpu.makeComputePipelineState(function: kernelFunction)
let computeEncoder = commandBuffer.makeComputeCommandEncoder()
computeEncoder?.setComputePipelineState(computePipelineState)
computeEncoder?.setTexture(A_buffer, index: 0)
computeEncoder?.setTexture(B_buffer, index: 1)
computeEncoder?.setTexture(C_buffer, index: 2)
// Dispatch a single threadgroup of 16 threads: one thread per matrix element
let threadGroupSize = MTLSize(width: 16, height: 1, depth: 1)
let threadGroupCount = MTLSize(width: 1, height: 1, depth: 1)
computeEncoder?.dispatchThreadgroups(threadGroupCount, threadsPerThreadgroup: threadGroupSize)
computeEncoder?.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

print("done")

Any comments linking to resources for learning more about this kind of thing are appreciated.

Answered 2019-01-09T02:48:13.370

The other option is to use MTLBuffers (in my example the result is stored in the first input buffer):

#include <metal_stdlib>
using namespace metal;

kernel void hadamardProductKernel(
    device float *a [[ buffer(0) ]],
    const device float *b [[ buffer(1) ]],
    uint id [[ thread_position_in_grid ]]
)
{
    a[id] = a[id] * b[id];
}

Here is the Objective-C code that performs the Hadamard product on two float32 arrays (a->data and b->data):

// device and commandQueue are assumed to have been created earlier
id<MTLLibrary> library = [device newDefaultLibrary];
id<MTLFunction> function = [library newFunctionWithName:@"hadamardProductKernel"];
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLComputePipelineState> computePipelineState = [device newComputePipelineStateWithFunction:function error:NULL];
id<MTLComputeCommandEncoder> computeCommandEncoder = [commandBuffer computeCommandEncoder];
[computeCommandEncoder setComputePipelineState:computePipelineState];
id<MTLBuffer> buffer_a = (__bridge id<MTLBuffer>)(a->data);
[computeCommandEncoder setBuffer:buffer_a offset:0 atIndex:0];
id<MTLBuffer> buffer_b = (__bridge id<MTLBuffer>)(b->data);
[computeCommandEncoder setBuffer:buffer_b offset:0 atIndex:1];
// Dispatch one threadgroup per element, each containing a single thread
MTLSize threadgroupsPerGrid = MTLSizeMake(<<ELEMENTS COUNT HERE>>, 1, 1);
MTLSize threadsPerThreadgroup = MTLSizeMake(1, 1, 1);
[computeCommandEncoder dispatchThreadgroups:threadgroupsPerGrid threadsPerThreadgroup:threadsPerThreadgroup];
[computeCommandEncoder endEncoding];
[commandBuffer commit];
[commandBuffer waitUntilCompleted];
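
The __bridge casts above assume that a->data and b->data already hold MTLBuffer objects. If the inputs start out as plain C float arrays instead, here is a minimal sketch (the a_data/b_data names are illustrative, not from the original) of wrapping them in shared-storage buffers and reading the product back once the command buffer has completed:

// Hypothetical setup: wrap existing float arrays in CPU/GPU-shared MTLBuffers
float a_data[16];
float b_data[16];
// ... fill a_data and b_data with the two matrices ...
id<MTLBuffer> buffer_a = [device newBufferWithBytes:a_data
                                             length:sizeof(a_data)
                                            options:MTLResourceStorageModeShared];
id<MTLBuffer> buffer_b = [device newBufferWithBytes:b_data
                                             length:sizeof(b_data)
                                            options:MTLResourceStorageModeShared];

// After -waitUntilCompleted the kernel has overwritten buffer_a in place,
// so the Hadamard product can be read straight from its contents pointer.
const float *product = (const float *)buffer_a.contents;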

Answered 2020-07-24T07:41:09.720