0

The code below comes from a book that uses Rails 4. I believe Rails 5 requires strong parameters, and that this may be what breaks in Rails 5. Could anyone show me how to apply permit and require to the following code?
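For reference, since the question asks about permit and require: strong parameters are applied in the controller, not in the model, so in Rails 5 they would look roughly like the minimal sketch below (the :session key, the fields, and the session_params helper are assumptions based on the SessionsController shown further down, not code from the book):

class SessionsController < ApplicationController
  private

  # Hypothetical helper: require the top-level :session key,
  # then whitelist the permitted fields.
  def session_params
    params.require(:session).permit(:email, :password)
  end
end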

From the model user.rb:

def remember
    remember_token = User.new_token
    update_attribute(:remember_digest, User.digest(remember_token))
end

Error (in Rails 5):

NoMethodError in SessionsController#create
undefined method `update_attribute' for #<Class

This is what I did:

def remember
    remember_token = User.new_token
    update_attribute(:remember_digest, User.digest(params:[remember_token]))
end

Error:

NoMethodError in SessionsController#create
undefined method `update_attribute' for #<Class:0x007fd23b30a790> Did you mean? _default_attributes

To be clear: those methods are called from sessionhelper.erb.

module SessionsHelper

  # Logs in the given user.
  def log_in(user)
    session[:user_id] = user.id
  end

  # Remembers a user in a persistent session.
  def remember(user)
    User.remember
    cookies.permanent.signed[:user_id] = user.id
    cookies.permanent[:remember_token] = user.remember_token
  end


    # Returns the user corresponding to the remember token cookie.
  def current_user
    if (user_id = session[:user_id])
      @current_user ||= User.find_by(id: user_id)
    elsif (user_id = cookies.signed[:user_id])
      user = User.find_by(id: user_id)
      if user && user.authenticated?(cookies[:remember_token])
        log_in user
        @current_user = user
      end
    end
  end


  # Returns true if the user is logged in, false otherwise.
  def logged_in?
    !current_user.nil?
  end

# Forgets a persistent session.
  def forget(user)
    user.forget
    cookies.delete(:user_id)
    cookies.delete(:remember_token)
  end
  # Logs out the current user.
  def log_out
    forget(current_user)
    session.delete(:user_id)
    @current_user = nil
  end
end

user.rb

class User < ApplicationRecord
  attr_accessor :remember_token, :activation_token, :reset_token, :remember_digest
  before_save { self.email = email.downcase }
  validates :name,  presence: true, length: { maximum: 50 }
  VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i
  validates :email, presence: true, length: { maximum: 255 },
                    format: { with: VALID_EMAIL_REGEX },
                    uniqueness: { case_sensitive: false }
  has_secure_password   
  validates :password, presence: true, length: { minimum: 6 }
class << self
  # Returns the hash digest of the given string.
  def digest(string)
    cost = ActiveModel::SecurePassword.min_cost ? BCrypt::Engine::MIN_COST :
                                                  BCrypt::Engine.cost
    BCrypt::Password.create(string, cost: cost)
  end

  def remember
    remember_token = User.new_token
    update_attribute(:remember_digest, User.digest(remember_token))
  end



  # Returns a random token.
  def new_token
    SecureRandom.urlsafe_base64
  end
end
  # Returns true if the given token matches the digest.
  def authenticated?(remember_token)
    BCrypt::Password.new(remember_digest).is_password?(remember_token)
  end
  # Forgets a user.
  def forget
    update_attribute(:remember_digest, nil)
  end

  def activate
    update_attribute(:activated,    true)
    update_attribute(:activated_at, Time.zone.now)
  end

end

sessions_controller.rb

class SessionsController < ApplicationController

  def new
  end

  def create
    user = User.find_by(email: params[:session][:email].downcase)
    if user && user.authenticate(params[:session][:password])
      log_in user
      remember user
      redirect_to user
    else
      flash.now[:danger] = 'Invalid email/password combination'
      render 'new'
    end
  end

  def destroy
    log_out
    redirect_to url_for(:controller => :sessions, :action => :new)  
  end
end



Why does the same neural architecture work in Keras but not in TensorFlow (leaf classification)?

I have recently been playing with Kaggle's leaf classification problem. I came across a notebook, Simple Keras 1D CNN + features split. But when I try to build the same model in TensorFlow, it produces very low accuracy and the loss barely changes. Here is my code:

import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.preprocessing import scale,StandardScaler

#preparing data
train=pd.read_csv('E:\\DataAnalysis\\Kaggle\\leaf\\train.csv',sep=',')
test=pd.read_csv('E:\\DataAnalysis\\Kaggle\\leaf\\test.csv',sep=',')
subexp=pd.read_csv('E:/DataAnalysis/Kaggle/leaf/sample_submission.csv')

x_train=np.asarray(train.drop(['species','id'],axis=1),dtype=np.float32)
x_train=scale(x_train).reshape([990,64,3])
ids=list(subexp)[1:]
spec=np.asarray(train['species'])
y_train=np.asarray([[int(x==ids[i]) for i in range(len(ids))] for x in spec],dtype=np.float32)

drop=0.75
batch_size=16
max_epoch=10
iter_per_epoch=int(990/batch_size)
max_iter=int(max_epoch*iter_per_epoch)
features=192
keep_prob=0.75

#inputs, weights, and biases
x=tf.placeholder(tf.float32,[None,64,3])
y=tf.placeholder(tf.float32,[None,99])

w={
    'w1':tf.Variable(tf.truncated_normal([1,3,512],dtype=tf.float32)),
    'wd1':tf.Variable(tf.truncated_normal([64*512,2048],dtype=tf.float32)),
    'wd2':tf.Variable(tf.truncated_normal([2048,1024],dtype=tf.float32)),
    'wd3':tf.Variable(tf.truncated_normal([1024,99],dtype=tf.float32))
}

b={
    'b1':tf.Variable(tf.truncated_normal([512],dtype=tf.float32)),
    'bd1':tf.Variable(tf.truncated_normal([2048],dtype=tf.float32)),
    'bd2':tf.Variable(tf.truncated_normal([1024],dtype=tf.float32)),
    'bd3':tf.Variable(tf.truncated_normal([99],dtype=tf.float32))
}

#model.
def conv(x,we,bi):
    l1a=tf.nn.relu(tf.nn.conv1d(value=x,filters=we['w1'],stride=1,padding='SAME'))
    l1a=tf.reshape(tf.nn.bias_add(l1a,bi['b1']),[-1,64*512])

    l1=tf.nn.dropout(l1a,keep_prob=0.4)
    l2a=tf.nn.relu(tf.add(tf.matmul(l1,we['wd1']),bi['bd1']))
    l3a=tf.nn.relu(tf.add(tf.matmul(l2a,we['wd2']),bi['bd2']))
    out=tf.nn.softmax(tf.matmul(l3a,we['wd3']))

    return out

#optimizer and accuracy
out=conv(x,w,b)
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=out,targets=y))
train_op=tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

correct_pred = tf.equal(tf.argmax(out, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

#train
with tf.Session() as sess :
    sess.run(tf.global_variables_initializer())
    step=0
    while step<max_iter :
        d =(step % iter_per_epoch)*batch_size
        batch_x=x_train[d:d+batch_size:1]
        batch_y=y_train[d:d+batch_size:1]
        sess.run(train_op,feed_dict={x: batch_x,y: batch_y})

        if step%10==0:
            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                              y: batch_y,})
            print("Iter: ", step,"  loss:",loss, "  accuracy:",acc)
        step+=1
    print('Training finished!')

The result looks like this:

Iter:  0   loss: 0.69941   accuracy: 0.0
Iter:  10   loss: 0.69941   accuracy: 0.0
Iter:  20   loss: 0.69941   accuracy: 0.0
Iter:  30   loss: 0.69941   accuracy: 0.0
Iter:  40   loss: 0.69941   accuracy: 0.0
Iter:  50   loss: 0.698778   accuracy: 0.0625
Iter:  60   loss: 0.698778   accuracy: 0.0625
Iter:  70   loss: 0.69941   accuracy: 0.0
Iter:  80   loss: 0.69941   accuracy: 0.0
Iter:  90   loss: 0.69941   accuracy: 0.0
Iter:  100   loss: 0.69941   accuracy: 0.0
Iter:  110   loss: 0.69941   accuracy: 0.0
Iter:  120   loss: 0.69941   accuracy: 0.0
Iter:  130   loss: 0.69941   accuracy: 0.0
Iter:  140   loss: 0.69941   accuracy: 0.0
Iter:  150   loss: 0.69941   accuracy: 0.0
Iter:  160   loss: 0.69941   accuracy: 0.0
Iter:  170   loss: 0.698778   accuracy: 0.0625
......

But the same data and model in Keras produce very good results. Code:

import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedShuffleSplit
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten, Convolution1D, Dropout
from keras.optimizers import SGD
from keras.utils import np_utils

model = Sequential()
model.add(Convolution1D(nb_filter=512, filter_length=1, input_shape=(64, 3)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dropout(0.4))
model.add(Dense(2048, activation='relu'))
model.add(Dense(1024, activation='relu'))
model.add(Dense(99))
model.add(Activation('softmax'))

sgd = SGD(lr=0.01, nesterov=True, decay=1e-6, momentum=0.9)
model.compile(loss='categorical_crossentropy',optimizer=sgd,metrics=['accuracy'])

model.fit(x_train, y_train, nb_epoch=5, batch_size=16)

Result:

Epoch 1/5
990/990 [==============================] - 78s - loss: 4.3229 - acc: 0.1404          
Epoch 2/5
990/990 [==============================] - 76s - loss: 1.6020 - acc: 0.6384     
Epoch 3/5
990/990 [==============================] - 74s - loss: 0.2723 - acc: 0.9384     
Epoch 4/5
990/990 [==============================] - 73s - loss: 0.1061 - acc: 0.9758

By the way, Keras is using the TensorFlow backend. Any suggestions?

4

6 Answers

0

You should call update_attributes on the user object.

helper.rb

  def remember(user)
    user.remember
    cookies.permanent.signed[:user_id] = user.id
    cookies.permanent[:remember_token] = user.remember_token
  end

model.rb

  def remember
    remember_token = User.new_token
    self.update_attributes(remember_digest: User.digest(remember_token))
  end
answered 2017-01-20T08:58:38.473
0

You should call update_attribute on the user. Try

current_user.update_attribute(:attribute_name, value)

in your SessionsController#create.
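A minimal sketch of that idea, reusing the lookup already done in create (digest and new_token are the class methods from the question's User model, and remember_digest is the column from the question):

# Sketch only: update_attribute is an instance method, so call it on the
# User record found in SessionsController#create, not on the User class.
user = User.find_by(email: params[:session][:email].downcase)
user.update_attribute(:remember_digest, User.digest(User.new_token)) if user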

answered 2017-01-20T05:22:47.363
0

update_attributes is an instance method, not a class method, so you first need to call it on an instance of the User class.

Like this:

 @user.update_attributes(remember_digest: User.digest(remember_token))

Try this.

answered 2017-01-20T05:23:53.000
0

The problem is in the helper, in this line:

You are calling the remember method on the User class, which is why it says undefined method for the class. Call it on the user object instead:

User.remember

should be

user.remember

def remember(user)
    user.remember
    cookies.permanent.signed[:user_id] = user.id
    cookies.permanent[:remember_token] = user.remember_token
end
answered 2017-01-20T04:59:35.217
0

It works now.

What happened was:

1. The class << self in user.rb

caused the following method not to work:

def remember
  remember_token = User.new_token
  update_attribute(:remember_digest, User.digest(remember_token))
end

2. The User.remember in SessionsHelper should have been

    user.remember

(see the sketch right after this list)
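Put together, a working user.rb looks roughly like this sketch (validations and the other instance methods are omitted; assigning self.remember_token is assumed so the helper can still read user.remember_token afterwards):

class User < ApplicationRecord
  attr_accessor :remember_token

  # Keep only the class-level helpers inside class << self.
  class << self
    # Returns the hash digest of the given string.
    def digest(string)
      cost = ActiveModel::SecurePassword.min_cost ? BCrypt::Engine::MIN_COST :
                                                    BCrypt::Engine.cost
      BCrypt::Password.create(string, cost: cost)
    end

    # Returns a random token.
    def new_token
      SecureRandom.urlsafe_base64
    end
  end

  # Instance method: remembers the user by storing a digest of a new token,
  # so update_attribute runs on a User record rather than on the class.
  def remember
    self.remember_token = User.new_token
    update_attribute(:remember_digest, User.digest(remember_token))
  end
end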

Thanks everyone.

answered 2017-01-21T02:48:17.883
0

I have a similar implementation; maybe this can help you.

user.rb

  def create_confimation_token
    generate_token(:confirmation_token)
    update_attribute(:expiration,Time.zone.now + 2.days)
    save!  
  end

  def generate_token(column)
      begin
        self[column] = SecureRandom.urlsafe_base64
      end 
  end   
answered 2017-01-20T04:55:30.653