My question is about automatically filtering measurement data, because I have several hundred files to process. The file structure looks like this:
test1 <- read.table("~/test1.txt",sep="\t",dec=".",skip=17,header=TRUE)
Number Time.s Potential.V Current.A
1 0.0000 0.060 -0.7653
2 0.0285 0.060 -0.7597
3 0.0855 0.060 -0.7549
.....
17 0.8835 0.060 -0.7045
18 0.9405 0.060 -0.5983
19 0.9975 0.061 -0.1370
20 1.0545 0.062 0.1295
21 1.1115 0.063 0.2680
......
8013 456.6555 0.066 -1.1070
8014 456.7125 0.065 -1.1850
8015 456.7695 0.063 -1.2610
8016 456.8265 0.062 -1.3460
8017 456.8835 0.061 -1.4380
8018 456.9405 0.060 -1.4350
8019 456.9975 0.060 -1.0720
8020 457.0545 0.060 -0.8823
8021 457.1115 0.060 -0.7917
8022 457.1685 0.060 -0.7481
I need to get rid of the extra lines at the beginning and end where Potential.V == 0.06. My problem is that the number of such lines at the beginning and end varies from file to file.
A further restriction is that each file contains several measurements one after another, so I can't simply remove all lines with 0.06 from the data.frame.
At the moment I do the cutting manually; not very elegant, but I don't know of a better solution:
test_b1 <- test1[18:8018, ]  # keep rows 18 to 8018, all four columns
I tried using iterations like
for (counter in 2:nrow(test1)) {
  # consecutive Number values mean the row belongs to the same block
  if ((test1$Number[counter] - test1$Number[counter - 1]) == 1) {
    cat("Skip\n")
  }
}
but I didn't get a working solution, for lack of R skill on my side :/ .
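What I imagine is a helper that detects the leading and trailing runs of the resting potential automatically. Here is a rough sketch of the idea using rle() — the name trim_resting, the resting value 0.060 and the tolerance are my own guesses, and I haven't tested this against all my files:

trim_resting <- function(df, rest = 0.060, tol = 1e-6) {
  # TRUE wherever the potential sits at the resting value
  flat <- abs(df$Potential.V - rest) < tol
  r <- rle(flat)
  n <- nrow(df)
  k <- length(r$values)
  # drop the first run if it is flat, and likewise the last one
  start <- if (r$values[1]) r$lengths[1] + 1L else 1L
  end   <- if (r$values[k]) n - r$lengths[k] else n
  df[start:end, ]
}

test_b1 <- trim_resting(test1)

Only the very first and very last flat runs would be cut, so the 0.06 stretches between the measurements inside a file should survive.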
Is there a package on CRAN or a more elegant way to solve such problems?
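For the several hundred files, I would then wrap such a helper in a loop roughly like this (the directory ~/measurements and the .txt pattern are placeholders, not my real paths):

files <- list.files("~/measurements", pattern = "\\.txt$", full.names = TRUE)
trimmed <- lapply(files, function(f) {
  d <- read.table(f, sep = "\t", dec = ".", skip = 17, header = TRUE)
  trim_resting(d)  # sketch from above
})
names(trimmed) <- basename(files)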
Best regards