
I was happily running with this code:

z=lapply(filename_list, function(fname){
    read.zoo(file=fname,header=TRUE,sep = ",",tz = "")
    })
xts( do.call(rbind,z) )

until some dirty data turned up at the end of one file:

                        Open     High      Low    Close Volume
2011-09-20 21:00:00 1.370105 1.370105 1.370105 1.370105      1

and this at the start of the next file:

                        Open     High      Low  Close Volume
2011-09-20 21:00:00 1.370105 1.371045 1.369685 1.3702   2230

So rbind.zoo complains about a duplicate.

I cannot use something like

 y <- x[ ! duplicated( index(x) ),  ]

because they are in different zoo objects, inside a list. And I cannot use aggregate, as suggested here, because they are a list of zoo objects, not one big zoo object. And I cannot get one big object because of the duplicates. Catch-22.

So, when the going gets tough, the tough hack together some for loops (excuse the prints and the stop, as this is not yet working code):

indexes <- do.call("c", unname(lapply(z, index)))
dups=duplicated(indexes)
if(any(dups)){
    duplicate_timestamps=indexes[dups]
    for(tix in 1:length(duplicate_timestamps)){
        t=duplicate_timestamps[tix]
        print("We have a duplicate:");print(t)
        for(zix in 1:length(z)){
            if(t %in% index(z[[zix]])){
                print(z[[zix]][t])
                if(z[[zix]][t]$Volume==1){
                    print("-->Deleting this one");
                    z[[zix]][t]=NULL    #<-- PROBLEM
                    }
                }
            }
        }
    stop("There are duplicate bars!!")
    }

The bit I've got stumped on is that assigning NULL to a zoo row doesn't delete it (Error in NextMethod("[<-") : replacement has length zero). OK, so I'll do a filter-copy instead, without the offending row... but I'm tripping up on these:

> z[[zix]][!t,]
Error in Ops.POSIXt(t) : unary '!' not defined for "POSIXt" objects

> z[[zix]][-t,]
Error in `-.POSIXt`(t) : unary '-' is not defined for "POSIXt" objects

P.S. While high-level solutions to my real problem of "duplicates rows across a list of zoo objects" are very welcome, the question here is specifically about how to delete a row from a zoo object given a POSIXt index object.


A small bit of test data:

list(structure(c(1.36864, 1.367045, 1.370105, 1.36928, 1.37039, 
1.370105, 1.36604, 1.36676, 1.370105, 1.367065, 1.37009, 1.370105, 
5498, 3244, 1), .Dim = c(3L, 5L), .Dimnames = list(NULL, c("Open", 
"High", "Low", "Close", "Volume")), index = structure(c(1316512800, 
1316516400, 1316520000), class = c("POSIXct", "POSIXt"), tzone = ""), class = "zoo"), 
    structure(c(1.370105, 1.370115, 1.36913, 1.371045, 1.37023, 
    1.37075, 1.369685, 1.36847, 1.367885, 1.3702, 1.36917, 1.37061, 
    2230, 2909, 2782), .Dim = c(3L, 5L), .Dimnames = list(NULL, 
        c("Open", "High", "Low", "Close", "Volume")), index = structure(c(1316520000, 
    1316523600, 1316527200), class = c("POSIXct", "POSIXt"), tzone = ""), class = "zoo"))

UPDATE: Thanks to G. Grothendieck for the row-deleting solution. In the actual code I followed the advice of Joshua and GSee to get a list of xts objects instead of a list of zoo objects. So my code became:

z=lapply(filename_list, function(fname){
    xts(read.zoo(file=fname,header=TRUE,sep = ",",tz = ""))
    })
x=do.call.rbind(z)

(As an aside, please note the call to do.call.rbind. This is because rbind.xts has some serious memory issues. See https://stackoverflow.com/a/12029366/841830 )

Then I remove duplicates as a post-process step:

dups=duplicated(index(x))
if(any(dups)){
    duplicate_timestamps=index(x)[dups]
    to_delete=x[ (index(x) %in% duplicate_timestamps) & x$Volume<=1]
    if(nrow(to_delete)>0){
        #Next line says all lines that are not in the duplicate_timestamp group
        #     OR are in the duplicate timestamps, but have a volume greater than 1.
        print("Will delete the volume=1 entry")
        x=x[ !(index(x) %in% duplicate_timestamps) | x$Volume>1]
        }else{
        stop("Duplicate timestamps, and we cannot easily remove them just based on low volume.")
        }
    }
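A self-contained check of that dedup logic on toy data (the xts object below is constructed for illustration, not read from the files above):

```r
library(xts)

# Two bars share the 21:00 timestamp; the Volume==1 bar is the dirty one
idx <- as.POSIXct(c("2011-09-20 20:00:00", "2011-09-20 21:00:00",
                    "2011-09-20 21:00:00", "2011-09-20 22:00:00"), tz = "UTC")
x <- xts(matrix(c(1.36, 1.37, 1.37, 1.38,    # Close
                  100,  1,    2230, 150),    # Volume
                ncol = 2, dimnames = list(NULL, c("Close", "Volume"))), idx)

dups <- duplicated(index(x))
duplicate_timestamps <- index(x)[dups]
# Keep rows that are either outside the duplicated timestamps,
# or inside them but with Volume > 1
x <- x[ !(index(x) %in% duplicate_timestamps) | x$Volume > 1 ]
nrow(x)   # 3: the Volume==1 bar at 21:00 is gone
```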

3 Answers


If z1 and z2 are your zoo objects then to rbind while removing any duplicates in z2:

rbind( z1, z2[ ! time(z2) %in% time(z1) ] )

Regarding deleting points in a zoo object having specified times, the above already illustrates this but in general if tt is a vector of times to delete:

z[ ! time(z) %in% tt ]

or, if we knew there was a single element in tt, z[ time(z) != tt ].
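A runnable sketch of both ideas, using two small zoo objects built here for illustration (z1/z2 and their timestamps are made up, not the question's data):

```r
library(zoo)

t1 <- as.POSIXct(c("2011-09-20 19:00:00", "2011-09-20 20:00:00",
                   "2011-09-20 21:00:00"), tz = "UTC")
t2 <- as.POSIXct(c("2011-09-20 21:00:00", "2011-09-20 22:00:00"), tz = "UTC")

z1 <- zoo(matrix(1:6,  ncol = 2, dimnames = list(NULL, c("Close", "Volume"))), t1)
z2 <- zoo(matrix(7:10, ncol = 2, dimnames = list(NULL, c("Close", "Volume"))), t2)

# Drop z2 rows whose time already appears in z1, then rbind cleanly
merged <- rbind(z1, z2[!time(z2) %in% time(z1)])
nrow(merged)   # 4: the 21:00 bar is taken from z1 only

# Deleting rows at given times from a single zoo object
tt <- t1[3]
nrow(z1[!time(z1) %in% tt])   # 2
```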

answered 2012-08-14T03:05:27

rbind.xts will allow duplicate index values, so it could work if you convert to xts first.

x <- lapply(z, as.xts)
y <- do.call(rbind, x)
# keep last value of any duplicates
y <- y[!duplicated(index(y),fromLast=TRUE),]
answered 2012-08-14T02:19:40

I think you'll have better luck if you convert to xts first.

a <- structure(c(1.370105, 1.370105, 1.370105, 1.370105, 1), .Dim = c(1L, 
5L), index = structure(1316570400, tzone = "", tclass = c("POSIXct", 
"POSIXt")), .indexCLASS = c("POSIXct", "POSIXt"), tclass = c("POSIXct", 
"POSIXt"), .indexTZ = "", tzone = "", .Dimnames = list(NULL, 
    c("Open", "High", "Low", "Close", "Volume")), class = c("xts", 
"zoo"))

b <- structure(c(1.370105, 1.371045, 1.369685, 1.3702, 2230), .Dim = c(1L, 
5L), index = structure(1316570400, tzone = "", tclass = c("POSIXct", 
"POSIXt")), .indexCLASS = c("POSIXct", "POSIXt"), tclass = c("POSIXct", 
"POSIXt"), .indexTZ = "", tzone = "", .Dimnames = list(NULL, 
    c("Open", "High", "Low", "Close", "Volume")), class = c("xts", 
"zoo"))


(comb <- rbind(a, b))
#                        Open     High      Low    Close Volume
#2011-09-20 21:00:00 1.370105 1.370105 1.370105 1.370105      1
#2011-09-20 21:00:00 1.370105 1.371045 1.369685 1.370200   2230

dupidx <- index(comb)[duplicated(index(comb))] # indexes of duplicates
tail(comb[dupidx], 1) #last duplicate
# now rbind the last duplicated row with all non-duplicated data
rbind(comb[!index(comb) %in% dupidx], tail(comb[dupidx], 1)) 
answered 2012-08-14T02:14:21