For GPU training of the model I am using
dudt = Chain(Dense(3, 100, tanh),
             Dense(100, 3)) |> gpu
as opposed to CPU training with
dudt = FastChain(FastDense(3, 100, tanh),
                 FastDense(100, 3))
Over 1000 iterations, FastChain is orders of magnitude faster than the GPU run on a Tesla K40c. Is this the expected behavior, or am I doing something wrong in the GPU implementation of the model? An MWE of the GPU implementation is below (a sketch of the corresponding CPU setup follows it for reference):
# Packages needed for the MWE; CUDA.jl (or CuArrays on older Flux) provides `cu` and the GPU arrays behind `gpu`
using OrdinaryDiffEq, DiffEqFlux, Flux, CUDA

# Lorenz system used to generate the training data
function lorenz(du, u, p, t)
    σ = p[1]; ρ = p[2]; β = p[3]
    du[1] = σ * (u[2] - u[1])
    du[2] = u[1] * (ρ - u[3]) - u[2]
    du[3] = u[1] * u[2] - β * u[3]
    return nothing
end
# Ground-truth trajectory from the Lorenz system
u0 = Float32[1.0, 0.0, 0.0]
tspan = (0.0, 1.0)
para = [10.0, 28.0, 8/3]
prob = ODEProblem(lorenz, u0, tspan, para)
t = range(tspan[1], tspan[2], length = 101)
ode_data = Array(solve(prob, Tsit5(), saveat = t))
ode_data = cu(ode_data)  # move the target data to the GPU

# Training setup
u0train = [1.0, 0.0, 0.0] |> gpu
tspantrain = (0.0, 1.0)
ttrain = range(tspantrain[1], tspantrain[2], length = 101)
dudt = Chain(Dense(3, 100, tanh),
             Dense(100, 3)) |> gpu
n_ode = NeuralODE(dudt, tspantrain, Tsit5(), saveat = ttrain)
function predict_n_ode(p)
    n_ode(u0train, p)
end

function loss_n_ode(p)
    pred = predict_n_ode(p) |> gpu
    loss = sum(abs2, pred .- ode_data)
    loss, pred
end

# Minimal callback assumed for the example: print the loss and keep training
cb = function (p, l, pred)
    println(l)
    return false
end

res1 = DiffEqFlux.sciml_train(loss_n_ode, n_ode.p, ADAM(0.01), cb = cb, maxiters = 1000)
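
For reference, the CPU comparison run is set up roughly as follows. This is a minimal sketch reusing the same Lorenz data, loss structure, and optimizer as above; the `_cpu` names are only for illustration.

# CPU counterpart: same data and loss, but FastChain/FastDense and no GPU transfers
dudt_cpu = FastChain(FastDense(3, 100, tanh),
                     FastDense(100, 3))
n_ode_cpu = NeuralODE(dudt_cpu, tspantrain, Tsit5(), saveat = ttrain)

ode_data_cpu = Array(solve(prob, Tsit5(), saveat = t))  # keep the target data on the CPU

function loss_n_ode_cpu(p)
    pred = n_ode_cpu(u0, p)
    loss = sum(abs2, Array(pred) .- ode_data_cpu)
    loss, pred
end

res_cpu = DiffEqFlux.sciml_train(loss_n_ode_cpu, n_ode_cpu.p, ADAM(0.01), cb = cb, maxiters = 1000)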