
I am trying to implement linear regression with multiple variables (just 2, really). I'm using the data from the Stanford ML class. I got it working correctly for the single-variable case. The same code should work for multiple variables, but it doesn't.

Link to the data:

http://s3.amazonaws.com/mlclass-resources/exercises/mlclass-ex1.zip

Feature normalization:

''' This is for the regression-with-multiple-variables problem. You have to normalize the features before doing anything. Let's get started '''
from __future__ import division
import os,sys
from math import *

def mean(f,col):
    #This is to find the mean of a feature
    sigma = 0
    count = 0
    data = open(f,'r')
    for line  in data:
        points = line.split(",")
        sigma = sigma + float(points[col].strip("\n"))
        count+=1
    data.close()
    return sigma/count
def size(f):
    count = 0
    data = open(f,'r')

    for line in data:
        count +=1
    data.close()
    return count
def standard_dev(f,col):
    #Calculate the standard deviation. Formula: sqrt( sum((x - mean)**2) / N )
    data = open(f,'r')
    sigma = 0
    mean = 0
    if(col==0):
        mean = mean_area
    else:
        mean = mean_bedroom
    for line in data:
        points = line.split(",")
        sigma  = sigma + (float(points[col].strip("\n")) - mean) ** 2
    data.close()
    return sqrt(sigma/SIZE)

def substitute(f,fnew):
    ''' Take the old file.  
        1. Subtract the mean values from each feature
        2. Scale it by dividing with the SD
    '''
    data = open(f,'r')
    data_new = open(fnew,'w')
    for line in data:
        points = line.split(",")
        new_area = (float(points[0]) - mean_area ) / sd_area
        new_bedroom = (float(points[1].strip("\n")) - mean_bedroom) / sd_bedroom
        data_new.write("1,"+str(new_area)+ ","+str(new_bedroom)+","+str(points[2].strip("\n"))+"\n")
    data.close()
    data_new.close()
global mean_area
global mean_bedroom
mean_bedroom = mean(sys.argv[1],1)
mean_area = mean(sys.argv[1],0)
print 'Mean number of bedrooms',mean_bedroom
print 'Mean area',mean_area
global SIZE
SIZE = size(sys.argv[1])
global sd_area
global sd_bedroom
sd_area = standard_dev(sys.argv[1],0)
sd_bedroom=standard_dev(sys.argv[1],1)
substitute(sys.argv[1],sys.argv[2])

I implemented the mean and standard deviation in code myself instead of using NumPy/SciPy. After storing the values in a file, a snapshot of it looks like this:

X1 X2 X3 COST OF HOUSE

1,0.131415422021,-0.226093367578,399900
1,-0.509640697591,-0.226093367578,329900
1,0.507908698618,-0.226093367578,369000
1,-0.743677058719,-1.5543919021,232000
1,1.27107074578,1.10220516694,539900
1,-0.0199450506651,1.10220516694,299900
1,-0.593588522778,-0.226093367578,314900
1,-0.729685754521,-0.226093367578,198999
1,-0.789466781548,-0.226093367578,212000
1,-0.644465992588,-0.226093367578,242500

I run the regression on it to find the parameters. Here is the code:

''' The plan is to rewrite and, this time, calculate the cost on each pass to make sure it is decreasing. Also make it general enough to handle multiple variables '''
from __future__ import division
import os,sys

def computecost(X,Y,theta):
    #X is the feature vector, Y is the predicted variable
    h_theta=calculatehTheta(X,theta)
    delta = (h_theta - Y) * (h_theta - Y)
    return (1/194) * delta 



def allCost(f,no_features):
    theta=[0,0]
    sigma=0
    data = open(f,'r')
    for line in data:
        X=[]
        Y=0
        points=line.split(",")
        for i in range(no_features):
            X.append(float(points[i]))
        Y=float(points[no_features].strip("\n"))
        sigma=sigma+computecost(X,Y,theta)
    return sigma

def calculatehTheta(points,theta):
    #This takes a file which has  (1,feature1,feature2,so ... on)
    #print 'Points are',points
    sigma  = 0 
    for i in range(len(theta)):

        sigma = sigma + theta[i] * float(points[i])
    return sigma



def gradient_Descent(f,no_iters,no_features,theta):
    ''' Calculate ( h(x) - y ) * xj(i) . And then subtract it from thetaj . Continue for 1500 iterations and you will have your answer'''


    X=[]
    Y=0
    sigma=0
    alpha=0.01
    for i in range(no_iters):
        for j in range(len(theta)):
            data = open(f,'r')
            for line in data:
                points=line.split(",")
                for i in range(no_features):
                    X.append(float(points[i]))
                Y=float(points[no_features].strip("\n"))
                h_theta = calculatehTheta(points,theta)
                delta = h_theta - Y
                sigma = sigma + delta * float(points[j])
            data.close()
            theta[j] = theta[j] - (alpha/97) * sigma

            sigma = 0
    print theta

print allCost(sys.argv[1],2)
print gradient_Descent(sys.argv[1],1500,2,[0,0,0])

It prints the following as the parameters:

[-3.8697149722857996e-14, 0.02030369056348706, 0.979706406501678]

All three are horribly wrong :( The exact same thing works for a single variable.

Thanks!


1 Answer


The global variables and the quadruply nested loop worry me. So does reading the data multiple times and writing it back out to a file.

Is your data really so large that it won't fit comfortably in memory?

Why not use the csv module for the file handling?

Why not use NumPy for the numerical part?

Don't reinvent the wheel.

Assuming your data entries are rows, you can normalize your data and do the least-squares fit in two lines:

normData = (data-data.mean(axis = 0))/data.std(axis = 0)
c = numpy.dot(numpy.linalg.pinv(normData),prices)
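A slightly fuller sketch of the same idea, for reference. The file name and the column layout (area, bedrooms, price) are assumptions taken from the exercise data rather than from your code, and the column of ones is added explicitly so the fit has an intercept term:

import numpy

# Assumed layout of ex1data2.txt from the zip: area, bedrooms, price (comma-separated)
raw = numpy.loadtxt("ex1data2.txt", delimiter=",")
data = raw[:, :2]        # the two feature columns
prices = raw[:, 2]       # the target column (house price)

# Zero mean and unit standard deviation for each feature column
normData = (data - data.mean(axis=0)) / data.std(axis=0)

# Prepend a column of ones for the intercept, then solve the least-squares
# problem via the pseudo-inverse: theta = pinv(X) . y
X = numpy.hstack([numpy.ones((normData.shape[0], 1)), normData])
theta = numpy.dot(numpy.linalg.pinv(X), prices)
print(theta)

This is just the normal-equation / least-squares solution, so it also gives you a reference value to check your gradient-descent result against.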

Reply to the original poster's comment

OK, then the only other advice I can give you is to try to break it up into smaller pieces, so it's easier to see what is going on, and easier to sanity-check the small pieces.

This may not be the problem, but you are using i as the index variable for two of the loops in that quadruple loop. That's exactly the kind of thing you can avoid by cutting it up into smaller pieces.
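A minimal, self-contained illustration of why reusing the name is risky (toy loop bounds, not your actual code):

for i in range(3):        # outer loop: i is meant to be the iteration number
    for i in range(5):    # inner loop re-binds the same name
        pass
    print(i)              # prints 4, 4, 4 -- not 0, 1, 2

The outer loop still runs the right number of times, but any use of i in the outer body after the inner loop sees the inner loop's last value instead of the iteration number.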

I think it has been several years since I last wrote an explicit nested loop or declared a global variable.

Answered on 2012-01-23T13:01:17.887