Questions tagged [torch]
image-processing - How to load images and labels in Torch for a Convolutional Neural Network
I'm new to Torch and would like to load some images from two directories (one for each label). I'm trying to build a convolutional neural network that will classify images as belonging to one class or another (i.e. a binary classifier), but I am unsure how to load images, label those images, and get the data into the correct format. I'm using the following tutorial; however, the training data is loaded in a different way which I am not familiar with.
http://code.madbits.com/wiki/doku.php?id=tutorial_supervised
Hope someone can help me get started and point me in the right direction.
Many thanks in advance.
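A minimal sketch of one way to do this, assuming the `image` and `paths` packages are installed and two hypothetical directories `pos/` and `neg/` containing JPEGs; the field layout at the end matches what the linked tutorial's training loop expects:

```lua
-- Load a two-class image dataset from two directories and pack it
-- into the {data, labels, size} layout used by the tutorial.
require 'torch'
require 'image'
require 'paths'

local function loadDir(dir, label, list)
  for file in paths.files(dir, '%.jpg$') do
    local img = image.load(paths.concat(dir, file), 3, 'float')
    img = image.scale(img, 32, 32)          -- fixed input size for the net
    table.insert(list, {img, label})
  end
end

local samples = {}
loadDir('pos', 1, samples)                   -- class 1 (hypothetical path)
loadDir('neg', 2, samples)                   -- class 2 (hypothetical path)

local trainData = {
  data   = torch.Tensor(#samples, 3, 32, 32),
  labels = torch.Tensor(#samples),
  size   = function() return #samples end
}
for i, s in ipairs(samples) do
  trainData.data[i]   = s[1]
  trainData.labels[i] = s[2]
end
```

In practice you would also shuffle the samples before packing them, so that the two classes are interleaved during training.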
lua - Torch tutorial: what does "trainData.data[{ {},i,{},{} }]:mean()" mean in 1_data.lua?
In the Torch tutorial I came across this line:
Can anyone explain what the indexing { {},i,{},{} } is doing? I can guess, but I'd like to know the exact mechanism.
Thanks in advance.
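For reference, a small sketch of the mechanism with a stand-in tensor: indexing with a table selects along each dimension in turn, where an empty table `{}` keeps the whole dimension and a plain number selects a single index and drops that dimension.

```lua
require 'torch'

-- Stand-in for trainData.data: 10 images, 3 channels, 4x4 pixels
local data = torch.randn(10, 3, 4, 4)

local i = 2
-- { {}, i, {}, {} } means: every index along dim 1 (images),
-- only index i along dim 2 (channels), every index along dims 3 and 4.
-- The result is a 10x4x4 view of channel i across all images.
local channel = data[{ {}, i, {}, {} }]
print(channel:size())        -- 10 x 4 x 4
print(channel:mean())        -- mean of channel i over the whole set

-- Equivalent to selecting along dimension 2:
print(data:select(2, i):mean())
```

So in the tutorial that expression computes the mean of color channel `i` over the entire training set, which is then used to normalize the data.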
image - Visualizing the images in intermediate layers in Torch (lua)
In conv-net models, I know how to visualize the filters; we can do itorch.image(model:get(1).weight)
But how can I effectively visualize the output images after the convolution? Especially those in the second or third layer of a deep neural network?
Thanks.
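One possible approach, assuming an `nn.Sequential` model and an itorch notebook: after a forward pass, every module caches its result in `module.output`, so the intermediate feature maps can be pulled out by layer index (the indices and layer sizes below are hypothetical).

```lua
-- Inspect intermediate feature maps via module.output after forward().
require 'nn'

local model = nn.Sequential()
model:add(nn.SpatialConvolution(3, 16, 5, 5))
model:add(nn.ReLU())
model:add(nn.SpatialConvolution(16, 32, 5, 5))

local input = torch.randn(3, 32, 32)
model:forward(input)

-- Feature maps after the first convolution: 16 x 28 x 28
local maps1 = model:get(1).output
-- Feature maps after the second convolution: 32 x 24 x 24
local maps3 = model:get(3).output

-- In an itorch notebook (assumption: itorch is available):
-- itorch.image(maps1)
-- itorch.image(maps3)
```

Each feature map is displayed as a grayscale image, which makes it easy to see which inputs activate which channels in deeper layers.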
c - How does this C code (from the lua library, Torch) compile/work?
See https://github.com/torch/nn/blob/master/generic/Tanh.c
For example,
First, I don't know how to interpret the first line:
What are the arguments here? What does Tanh_updateOutput refer to? Does the "nn_" prefix have a special meaning?
Second, "TH_TENSOR_APPLY2" and "THTensor_(...)" are both used, but I can't see where they are defined — there are no other includes in this file?
c - Is this a practical way to resolve "out of memory" from LuaJIT with Torch?
StanfordNLP's TreeLSTM, when used with a dataset of > 30K instances, causes LuaJIT to throw an "out of memory" error. I am working around this with LuaJIT Data Structures. To get the dataset off lua's heap, the trees need to be placed in an LDS.Vector.
Since LDS.Vector holds cdata, the first step is to turn the Tree type into a cdata object:
Some small changes are also needed in read_data.lua to handle the new cdata CTree type. So far, using LDS seems a reasonable way around the memory limits. However, CTree requires a field called "composer".
Composer is of type nn.gModule. To continue with this solution, that would involve creating a typedef of nn.gModule as cdata, including typedefs for its members. Does this seem like the right direction before proceeding? Does anyone have experience with this problem?
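A sketch of an alternative that avoids typedef-ing nn.gModule entirely (field names below are hypothetical): keep only plain-old-data in the cdata struct, and refer to the composer through an integer index into a Lua-side registry table, since a full nn.gModule (closures, tensors, metatables) cannot meaningfully live in cdata anyway.

```lua
-- Move tree nodes off the Lua heap with the LuaJIT FFI, but keep
-- nn.gModule objects on the Lua heap behind an integer handle.
local ffi = require 'ffi'

ffi.cdef[[
typedef struct CTree CTree;
struct CTree {
  int    idx;          /* node index                     */
  int    gold_label;   /* supervised target              */
  int    num_children;
  CTree *children[2];  /* binary tree                    */
  int    composer_id;  /* handle into a Lua-side registry */
};
]]

local composers = {}            -- Lua heap: composer_id -> nn.gModule

local function newCTree(idx, label, composer)
  local t = ffi.new('CTree')
  t.idx, t.gold_label, t.num_children = idx, label, 0
  composers[#composers + 1] = composer
  t.composer_id = #composers
  return t
end
```

The registry stays small (one entry per distinct composer), so the bulk of the dataset still lives outside LuaJIT's limited heap.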
lua - Calculating the recall and precision of a confusion matrix with torch7
I am using the supervised tutorial and I want to compute the recall and precision. Is there a way to compute them within the tutorial?
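One possible approach: the tutorial already builds an `optim.ConfusionMatrix`, and its `mat` field holds raw counts with rows as targets and columns as predictions, so per-class recall and precision fall out of row and column sums (the class names below are placeholders).

```lua
-- Derive per-class precision and recall from optim.ConfusionMatrix.mat.
require 'optim'

local classes = {'class1', 'class2'}
local confusion = optim.ConfusionMatrix(classes)

-- ... in the test loop: confusion:add(prediction, target) ...

for i = 1, #classes do
  local tp     = confusion.mat[i][i]
  local rowSum = confusion.mat[i]:sum()        -- all true class-i samples
  local colSum = confusion.mat[{{}, i}]:sum()  -- all class-i predictions
  local recall    = tp / math.max(rowSum, 1)   -- guard against empty class
  local precision = tp / math.max(colSum, 1)
  print(string.format('%s: precision %.3f, recall %.3f',
                      classes[i], precision, recall))
end
```

Run this after the test pass and before `confusion:zero()`, since the tutorial resets the matrix at the end of each epoch.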
lua - torch.gesv: B should be 2-dimensional
I just started the Oxford machine learning course, and I'm new to lua and torch.
I'm trying to solve a simple linear equation problem with torch. The problem has the form AX = B
However, I can't do this, because B is just a 1-dimensional tensor (a vector). I would think the case where B is a vector should be common. Copying B into a 2-dimensional tensor seems wasteful.
I get:
Any suggestions?
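A sketch of one way around the error without copying: `:view` reinterprets the same storage, so a length-n vector can be presented to `torch.gesv` as an n x 1 matrix for free.

```lua
-- Solve AX = B where B is a vector, by viewing it as an n x 1 matrix.
require 'torch'

local A = torch.Tensor{{2, 0}, {0, 4}}
local b = torch.Tensor{6, 8}        -- 1-D: torch.gesv rejects this as-is

-- b:view(n, 1) shares b's storage; no data is copied.
local X = torch.gesv(b:view(b:size(1), 1), A)
print(X)                            -- 2 x 1 result: {3, 2}
```

The solution comes back as an n x 1 matrix; call `X:view(-1)` (or `X[{{}, 1}]`) if you want it as a plain vector again.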
lua - Grid search of hyperparameters in torch/lua
I'm new to torch/lua and I'm trying to evaluate a few different optimization algorithms and different parameters for each of them.
Algorithms: optim.sgd optim.lbfgs
Parameters:
- learning rate: {1e-1, 1e-2, 1e-3}
- weight_decay: {1e-1, 1e-2}
So what I want to achieve is to try every combination of the hyperparameters and get the best parameter set for each algorithm.
So is there something like
http://scikit-learn.org/stable/modules/grid_search.html available in Torch to handle it?
Any suggestions would be great!
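To my knowledge Torch has no built-in analogue of scikit-learn's GridSearchCV, but for a grid this small a hand-rolled loop is enough. A sketch, where `trainAndEvaluate` is a hypothetical function that trains one model with the given optimizer and config and returns its validation error:

```lua
-- Exhaustive grid search over optimizer, learning rate and weight decay.
require 'optim'

local algorithms = {
  { name = 'sgd',   fn = optim.sgd },
  { name = 'lbfgs', fn = optim.lbfgs },
}
local learningRates = {1e-1, 1e-2, 1e-3}
local weightDecays  = {1e-1, 1e-2}

local best = { err = math.huge }
for _, algo in ipairs(algorithms) do
  for _, lr in ipairs(learningRates) do
    for _, wd in ipairs(weightDecays) do
      local config = { learningRate = lr, weightDecay = wd }
      local err = trainAndEvaluate(algo.fn, config)  -- hypothetical
      if err < best.err then
        best = { err = err, algo = algo.name, config = config }
      end
    end
  end
end
print(best.algo, best.config.learningRate, best.config.weightDecay)
```

Make sure `trainAndEvaluate` reinitializes the model's weights for every configuration, otherwise later runs start from already-trained parameters and the comparison is meaningless.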
indexing - 在 Torch 中,如何从整数标签列表中创建 1-hot 张量?
我有一个整数类标签的字节张量,例如来自 MNIST 数据集。
如何使用它来创建 1-hot 向量的张量?
我知道我可以用一个循环来做到这一点,但我想知道是否有任何聪明的 Torch 索引可以在一行中为我提供它。
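One possible one-liner, assuming labels are class indices in 1..nClasses: `scatter` writes a 1 into column `labels[i]` of row `i`, which is exactly the one-hot layout.

```lua
-- Build a one-hot matrix from a ByteTensor of class labels in one
-- scatter call (no explicit loop).
require 'torch'

local labels   = torch.ByteTensor{3, 1, 2}
local nClasses = 3

local oneHot = torch.zeros(labels:size(1), nClasses)
-- scatter(dim, index, value): index must be a LongTensor shaped n x 1
oneHot:scatter(2, labels:long():view(-1, 1), 1)
print(oneHot)
-- 0 0 1
-- 1 0 0
-- 0 1 0
```

The `:long()` conversion is needed because `scatter` expects a LongTensor of indices, and `:view(-1, 1)` makes the index tensor match the 2-D shape of the output.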
neural-network - Neural Network Reinforcement Learning Requiring Next-State Propagation For Backpropagation
I am attempting to construct a neural network incorporating convolution and LSTM (using the Torch library) to be trained by Q-learning or Advantage-learning, both of which require propagating state T+1 through the network before updating the weights for state T.
Having to do an extra propagation would cut performance, and that's bad, but not too bad. However, the problem is that there is all kinds of state bound up in this. First of all, the Torch implementation of backpropagation has some efficiency shortcuts that rely on the backward pass happening immediately after the forward pass, which an additional propagation would mess up. I could possibly get around this by having a secondary cloned network sharing the weight values, but then we come to the second problem.
Every forward propagation involving LSTMs is stateful. How can I update the weights at T+1 when propagating network(T+1) may have changed the contents of the LSTMs? I have tried to look at the discussion of TD weight updates as done in TD-Gammon, but it's obtuse to me and that's for feedforward anyway, not recurrent.
How can I update the weights of a network at T without having to advance the network to T+1, or how do I advance the network to T+1 and then go back and adjust the weights as if it were still T?
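A sketch of the cloned-network idea mentioned above, under the assumption that a clone created with shared parameters (but its own activation and LSTM state) is acceptable; `buildNetwork`, `state`, `reward` and `criterion` are hypothetical stand-ins:

```lua
-- Evaluate state T+1 on a parameter-sharing clone so the trained
-- network's forward/backward bookkeeping and LSTM state stay intact.
require 'nn'

local net = buildNetwork()                       -- hypothetical constructor

-- Share weights, not state: parameter updates to 'net' are seen by
-- 'target' immediately, but activations and cell states are separate.
local target = net:clone('weight', 'bias', 'gradWeight', 'gradBias')

-- 1. evaluate Q(s_{T+1}, .) on the clone
local qNext = target:forward(state[T + 1])

-- 2. form the TD target from qNext and the reward (Q-learning rule)
local y = reward[T] + gamma * qNext:max()

-- 3. forward and backward on the real net at time T only
local qNow = net:forward(state[T])
local loss = criterion:forward(qNow, y)
net:backward(state[T], criterion:backward(qNow, y))
```

Because `target` keeps its own LSTM state, propagating T+1 through it never disturbs the recurrent state the real network accumulated up to T, which is the stateful conflict described above.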