My normal search foo is failing me. I'm trying to find an R function that returns ALL of the factors of an integer. There are at least 2 packages with a factorize() function: gmp and conf.design, however these functions return only the prime factors. I'd like a function that returns all of the factors.
Obviously searching for this is made difficult because R has a construct called factors, which puts a lot of noise in the search.
To follow up on my comment (thanks to @Ramnath for catching my typo), the brute-force method seems to work pretty well here on my 64-bit, 8-gig machine:
FUN <- function(x) {
  x <- as.integer(x)
  div <- seq_len(abs(x))
  factors <- div[x %% div == 0L]
  factors <- list(neg = -factors, pos = factors)
  return(factors)
}
A couple of examples:
> FUN(100)
$neg
[1] -1 -2 -4 -5 -10 -20 -25 -50 -100
$pos
[1] 1 2 4 5 10 20 25 50 100
> FUN(-42)
$neg
[1] -1 -2 -3 -6 -7 -14 -21 -42
$pos
[1] 1 2 3 6 7 14 21 42
#and big number
> system.time(FUN(1e8))
user system elapsed
1.95 0.18 2.14
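A side note of my own (not part of the original answer): the brute force above checks every integer up to abs(x). Checking candidates only up to sqrt(abs(x)) and deriving each co-divisor gives the same result noticeably faster for large inputs; a minimal sketch (the name FUN2 is just illustrative):
FUN2 <- function(x) {
  x <- abs(as.integer(x))
  small <- seq_len(floor(sqrt(x)))
  small <- small[x %% small == 0L]            # divisors up to sqrt(x)
  pos <- sort(unique(c(small, x %/% small)))  # pair each divisor d with x/d
  list(neg = -pos, pos = pos)
}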
You can get all of the factors from the prime factors. gmp calculates the prime factors very quickly.
library(gmp)
library(plyr)
get_all_factors <- function(n)
{
  prime_factor_tables <- lapply(
    setNames(n, n),
    function(i)
    {
      if(i == 1) return(data.frame(x = 1L, freq = 1L))
      plyr::count(as.integer(gmp::factorize(i)))
    }
  )
  lapply(
    prime_factor_tables,
    function(pft)
    {
      powers <- plyr::alply(pft, 1, function(row) row$x ^ seq.int(0L, row$freq))
      power_grid <- do.call(expand.grid, powers)
      sort(unique(apply(power_grid, 1, prod)))
    }
  )
}
get_all_factors(c(1, 7, 60, 663, 2520, 75600, 15876000, 174636000, 403409160000))
This is now implemented in the RcppBigIntAlgos package. See this answer for more details.
The algorithm has been completely updated and now implements multiple polynomials as well as some clever sieving techniques that eliminate millions of checks. In addition to the original links, this paper along with this post from primo were very helpful for the final stage (many kudos to primo). Primo explains the guts of the QS very well in a relatively short space and also wrote a pretty amazing algorithm (it factors the number at the bottom, 38!+1, in 2 seconds! Insane!!).
As promised, below is my humble R implementation of the Quadratic Sieve. I have been working on this algorithm sporadically since I promised it in late January. I won't try to explain it fully (unless requested... also, the links below do a very good job), as it is very complicated, and hopefully my function names speak for themselves. This has proven to be one of the most challenging algorithms I have ever attempted to implement, as it is demanding both from a programmer's point of view and from a mathematical point of view. I read countless papers, and ultimately I found these five to be the most helpful (QSieve1, QSieve2, QSieve3, QSieve4, QSieve5).
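For readers new to the method, the identity the sieve is built around can be seen on a toy number (my own illustration, not part of the implementation below): find x and y with x^2 ≡ y^2 (mod n) and x not congruent to ±y; then gcd(x - y, n) is a nontrivial factor.
library(gmp)
n <- as.bigz(8051)
x <- as.bigz(90)      # 90^2 - 8051 = 49 = 7^2, so 90^2 ≡ 7^2 (mod 8051)
y <- as.bigz(7)
gcd.bigz(x - y, n)    # 83
gcd.bigz(x + y, n)    # 97, and 83 * 97 = 8051
The sieve's work goes into manufacturing such pairs from values of (M + floor(sqrt(n)))^2 - n that factor completely over the factor base.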
N.B. This algorithm, as it stands, does not work well as a general prime-factorization algorithm. If it were optimized further, it would need to be accompanied by a section of code that factors out the smaller primes (i.e. those less than 10^5, as this post suggests), then calls QuadSieveAll, checks whether the returned values are primes, and if not, calls QuadSieveAll on both of those factors, and so on until you are left with all primes (none of these steps are that difficult). However, the main point of this post is to highlight the heart of the Quadratic Sieve, so the examples below are all semiprimes (even though it will factor most odd numbers not containing a square... also, I have yet to see an example of QS that demonstrated a non-semiprime). I know the OP was looking for a method to return all factors, not the prime factorization, but this algorithm (if optimized further), coupled with one of the algorithms above, would be a force to be reckoned with as a general factoring algorithm (especially given that the OP needed something for Project Euler, which usually requires much more than brute-force methods). By the way, the MyIntToBit function is a variation of this answer, and PrimeSieve is from a post @Dontas made a while back (kudos on that as well).
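To make the recursive idea in the note above concrete, a wrapper along these lines could be used (a rough sketch under my own assumptions only: it leans on gmp::isprime, falls back to gmp::factorize below 2^70 instead of the small-prime trial division suggested above, and assumes the QuadSieveMultiPolysAll function defined below always returns two nontrivial factors):
FactorRecurse <- function(n) {
  n <- as.bigz(n)
  if (isprime(n) > 0) return(n)                  # gmp::isprime: > 0 means (probably) prime
  if (n < pow.bigz(2, 70)) return(factorize(n))  # Pollard-Rho is adequate in this range
  twoFacs <- QuadSieveMultiPolysAll(n)           # splits n into two (possibly composite) factors
  c(FactorRecurse(twoFacs[1]), FactorRecurse(twoFacs[2]))
}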
QuadSieveMultiPolysAll <- function(MyN, fudge1=0L, fudge2=0L, LenB=0L) {
### 'MyN' is the number to be factored; 'fudge1' is an arbitrary number
### that is used to determine the size of your prime base for sieving;
### 'fudge2' is used to set a threshold for sieving;
### 'LenB' is a the size of the sieving interval. The last three
### arguments are optional (they are determined based off of the
### size of MyN if left blank)
### The first 8 functions are helper functions
PrimeSieve <- function(n) {
n <- as.integer(n)
if (n > 1e9) stop("n too large")
primes <- rep(TRUE, n)
primes[1] <- FALSE
last.prime <- 2L
fsqr <- floor(sqrt(n))
while (last.prime <= fsqr) {
primes[seq.int(last.prime^2, n, last.prime)] <- FALSE
sel <- which(primes[(last.prime + 1):(fsqr + 1)])
if (any(sel)) {
last.prime <- last.prime + min(sel)
} else {
last.prime <- fsqr + 1
}
}
MyPs <- which(primes)
rm(primes)
gc()
MyPs
}
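### MyIntToBit: converts the integer x to a 0/1 vector of length dig (its binary digits, most significant first)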
MyIntToBit <- function(x, dig) {
i <- 0L
string <- numeric(dig)
while (x > 0) {
string[dig - i] <- x %% 2L
x <- x %/% 2L
i <- i + 1L
}
string
}
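### ExpBySquaringBig: modular exponentiation by squaring, i.e. x^n mod p, for bigz inputs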
ExpBySquaringBig <- function(x, n, p) {
if (n == 1) {
MyAns <- mod.bigz(x,p)
} else if (mod.bigz(n,2)==0) {
MyAns <- ExpBySquaringBig(mod.bigz(pow.bigz(x,2),p),div.bigz(n,2),p)
} else {
MyAns <- mod.bigz(mul.bigz(x,ExpBySquaringBig(mod.bigz(
pow.bigz(x,2),p), div.bigz(sub.bigz(n,1),2),p)),p)
}
MyAns
}
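### TonelliShanks: returns the two square roots of a modulo the odd prime p (solves t^2 = a mod p)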
TonelliShanks <- function(a,p) {
P1 <- sub.bigz(p,1); j <- 0L; s <- P1
while (mod.bigz(s,2)==0L) {s <- s/2; j <- j+1L}
if (j==1L) {
MyAns1 <- ExpBySquaringBig(a,(p+1L)/4,p)
MyAns2 <- mod.bigz(-1 * ExpBySquaringBig(a,(p+1L)/4,p),p)
} else {
n <- 2L
Legendre2 <- ExpBySquaringBig(n,P1/2,p)
while (Legendre2==1L) {n <- n+1L; Legendre2 <- ExpBySquaringBig(n,P1/2,p)}
x <- ExpBySquaringBig(a,(s+1L)/2,p)
b <- ExpBySquaringBig(a,s,p)
g <- ExpBySquaringBig(n,s,p)
r <- j; m <- 1L
Test <- mod.bigz(b,p)
while (!(Test==1L) && !(m==0L)) {
m <- 0L
Test <- mod.bigz(b,p)
while (!(Test==1L)) {m <- m+1L; Test <- ExpBySquaringBig(b,pow.bigz(2,m),p)}
if (!m==0) {
x <- mod.bigz(x * ExpBySquaringBig(g,pow.bigz(2,r-m-1L),p),p)
g <- ExpBySquaringBig(g,pow.bigz(2,r-m),p)
b <- mod.bigz(b*g,p); r <- m
}; Test <- 0L
}; MyAns1 <- x; MyAns2 <- mod.bigz(p-x,p)
}
c(MyAns1, MyAns2)
}
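### SieveLists: for each factor base prime, precomputes the positions in the sieving interval hit by its two roots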
SieveLists <- function(facLim, FBase, vecLen, sieveD, MInt) {
vLen <- ceiling(vecLen/2); SecondHalf <- (vLen+1L):vecLen
MInt1 <- MInt[1:vLen]; MInt2 <- MInt[SecondHalf]
tl <- vector("list",length=facLim)
for (m in 3:facLim) {
st1 <- mod.bigz(MInt1[1],FBase[m])
m1 <- 1L+as.integer(mod.bigz(sieveD[[m]][1] - st1,FBase[m]))
m2 <- 1L+as.integer(mod.bigz(sieveD[[m]][2] - st1,FBase[m]))
sl1 <- seq.int(m1,vLen,FBase[m])
sl2 <- seq.int(m2,vLen,FBase[m])
tl1 <- list(sl1,sl2)
st2 <- mod.bigz(MInt2[1],FBase[m])
m3 <- vLen+1L+as.integer(mod.bigz(sieveD[[m]][1] - st2,FBase[m]))
m4 <- vLen+1L+as.integer(mod.bigz(sieveD[[m]][2] - st2,FBase[m]))
sl3 <- seq.int(m3,vecLen,FBase[m])
sl4 <- seq.int(m4,vecLen,FBase[m])
tl2 <- list(sl3,sl4)
tl[[m]] <- list(tl1,tl2)
}
tl
}
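### SieverMod: accumulates prime logs over the sieve positions, keeps candidates whose log sum exceeds 'Lim',
### and trial-divides them over the factor base to build the exponent matrix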
SieverMod <- function(facLim, FBase, vecLen, SD, MInt, FList, LogFB, Lim, myCol) {
MyLogs <- rep(0,nrow(SD))
for (m in 3:facLim) {
MyBool <- rep(FALSE,vecLen)
MyBool[c(FList[[m]][[1]][[1]],FList[[m]][[2]][[1]])] <- TRUE
MyBool[c(FList[[m]][[1]][[2]],FList[[m]][[2]][[2]])] <- TRUE
temp <- which(MyBool)
MyLogs[temp] <- MyLogs[temp] + LogFB[m]
}
MySieve <- which(MyLogs > Lim)
MInt <- MInt[MySieve]; NewSD <- SD[MySieve,]
newLen <- length(MySieve); GoForIT <- FALSE
MyMat <- matrix(integer(0),nrow=newLen,ncol=myCol)
MyMat[which(NewSD[,1L] < 0),1L] <- 1L; MyMat[which(NewSD[,1L] > 0),1L] <- 0L
if ((myCol-1L) - (facLim+1L) > 0L) {MyMat[,((facLim+2L):(myCol-1L))] <- 0L}
if (newLen==1L) {MyMat <- matrix(MyMat,nrow=1,byrow=TRUE)}
if (newLen > 0L) {
GoForIT <- TRUE
for (m in 1:facLim) {
vec <- rep(0L,newLen)
temp <- which((NewSD[,1L]%%FBase[m])==0L)
NewSD[temp,] <- NewSD[temp,]/FBase[m]; vec[temp] <- 1L
test <- temp[which((NewSD[temp,]%%FBase[m])==0L)]
while (length(test)>0L) {
NewSD[test,] <- NewSD[test,]/FBase[m]
vec[test] <- (vec[test]+1L)
test <- test[which((NewSD[test,]%%FBase[m])==0L)]
}
MyMat[,m+1L] <- vec
}
}
list(MyMat,NewSD,MInt,GoForIT)
}
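### reduceMatrix: Gaussian elimination over GF(2); returns the null-space relations along with the free variables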
reduceMatrix <- function(mat) {
tempMin <- 0L; n1 <- ncol(mat); n2 <- nrow(mat)
mymax <- 1L
for (i in 1:n1) {
temp <- which(mat[,i]==1L)
t <- which(temp >= mymax)
if (length(temp)>0L && length(t)>0L) {
MyMin <- min(temp[t])
if (!(MyMin==mymax)) {
vec <- mat[MyMin,]
mat[MyMin,] <- mat[mymax,]
mat[mymax,] <- vec
}
t <- t[-1]; temp <- temp[t]
for (j in temp) {mat[j,] <- (mat[j,]+mat[mymax,])%%2L}
mymax <- mymax+1L
}
}
if (mymax<n2) {simpMat <- mat[-(mymax:n2),]} else {simpMat <- mat}
lenSimp <- nrow(simpMat)
if (is.null(lenSimp)) {lenSimp <- 0L}
mycols <- 1:n1
if (lenSimp>1L) {
## "Diagonalizing" Matrix
for (i in 1:lenSimp) {
if (all(simpMat[i,]==0L)) {simpMat <- simpMat[-i,]; next}
if (!simpMat[i,i]==1L) {
t <- min(which(simpMat[i,]==1L))
vec <- simpMat[,i]; tempCol <- mycols[i]
simpMat[,i] <- simpMat[,t]; mycols[i] <- mycols[t]
simpMat[,t] <- vec; mycols[t] <- tempCol
}
}
lenSimp <- nrow(simpMat); MyList <- vector("list",length=n1)
MyFree <- mycols[which((1:n1)>lenSimp)]; for (i in MyFree) {MyList[[i]] <- i}
if (is.null(lenSimp)) {lenSimp <- 0L}
if (lenSimp>1L) {
for (i in lenSimp:1L) {
t <- which(simpMat[i,]==1L)
if (length(t)==1L) {
simpMat[ ,t] <- 0L
MyList[[mycols[i]]] <- 0L
} else {
t1 <- t[t>i]
if (all(t1 > lenSimp)) {
MyList[[mycols[i]]] <- MyList[[mycols[t1[1]]]]
if (length(t1)>1) {
for (j in 2:length(t1)) {MyList[[mycols[i]]] <- c(MyList[[mycols[i]]], MyList[[mycols[t1[j]]]])}
}
}
else {
for (j in t1) {
if (length(MyList[[mycols[i]]])==0L) {MyList[[mycols[i]]] <- MyList[[mycols[j]]]}
else {
e1 <- which(MyList[[mycols[i]]]%in%MyList[[mycols[j]]])
if (length(e1)==0) {
MyList[[mycols[i]]] <- c(MyList[[mycols[i]]],MyList[[mycols[j]]])
} else {
e2 <- which(!MyList[[mycols[j]]]%in%MyList[[mycols[i]]])
MyList[[mycols[i]]] <- MyList[[mycols[i]]][-e1]
if (length(e2)>0L) {MyList[[mycols[i]]] <- c(MyList[[mycols[i]]], MyList[[mycols[j]]][e2])}
}
}
}
}
}
}
TheList <- lapply(MyList, function(x) {if (length(x)==0L) {0} else {x}})
list(TheList,MyFree)
} else {
list(NULL,NULL)
}
} else {
list(NULL,NULL)
}
}
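### GetFacs: with x = prod(vec1) mod n and y = prod(vec2) mod n, returns gcd(x - y, n) and gcd(x + y, n)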
GetFacs <- function(vec1, vec2, n) {
x <- mod.bigz(prod.bigz(vec1),n)
y <- mod.bigz(prod.bigz(vec2),n)
MyAns <- c(gcd.bigz(x-y,n),gcd.bigz(x+y,n))
MyAns[sort.list(asNumeric(MyAns))]
}
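### SolutionSearch: steps through combinations of the free variables looking for a set of rows whose exponent
### sums are all even, then extracts a nontrivial factorization via GetFacs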
SolutionSearch <- function(mymat, M2, n, FB) {
colTest <- which(apply(mymat, 2, sum) == 0)
if (length(colTest) > 0) {solmat <- mymat[ ,-colTest]} else {solmat <- mymat}
if (length(nrow(solmat)) > 0) {
nullMat <- reduceMatrix(t(solmat %% 2L))
listSol <- nullMat[[1]]; freeVar <- nullMat[[2]]; LF <- length(freeVar)
} else {LF <- 0L}
if (LF > 0L) {
for (i in 2:min(10^8,(2^LF + 1L))) {
PosAns <- MyIntToBit(i, LF)
posVec <- sapply(listSol, function(x) {
t <- which(freeVar %in% x)
if (length(t)==0L) {
0
} else {
sum(PosAns[t])%%2L
}
})
ansVec <- which(posVec==1L)
if (length(ansVec)>0) {
if (length(ansVec) > 1L) {
myY <- apply(mymat[ansVec,],2,sum)
} else {
myY <- mymat[ansVec,]
}
if (sum(myY %% 2) < 1) {
myY <- as.integer(myY/2)
myY <- pow.bigz(FB,myY[-1])
temp <- GetFacs(M2[ansVec], myY, n)
if (!(1==temp[1]) && !(1==temp[2])) {
return(temp)
}
}
}
}
}
}
### Below is the main portion of the Quadratic Sieve
BegTime <- Sys.time(); MyNum <- as.bigz(MyN); DigCount <- nchar(as.character(MyN))
P <- PrimeSieve(10^5)
SqrtInt <- .mpfr2bigz(trunc(sqrt(mpfr(MyNum,sizeinbase(MyNum,b=2)+5L))))
if (DigCount < 24) {
DigSize <- c(4,10,15,20,23)
f_Pos <- c(0.5,0.25,0.15,0.1,0.05)
MSize <- c(5000,7000,10000,12500,15000)
if (fudge1==0L) {
LM1 <- lm(f_Pos ~ DigSize)
m1 <- summary(LM1)$coefficients[2,1]
b1 <- summary(LM1)$coefficients[1,1]
fudge1 <- DigCount*m1 + b1
}
if (LenB==0L) {
LM2 <- lm(MSize ~ DigSize)
m2 <- summary(LM2)$coefficients[2,1]
b2 <- summary(LM2)$coefficients[1,1]
LenB <- ceiling(DigCount*m2 + b2)
}
LimB <- trunc(exp((.5+fudge1)*sqrt(log(MyNum)*log(log(MyNum)))))
B <- P[P<=LimB]; B <- B[-1]
facBase <- P[which(sapply(B, function(x) ExpBySquaringBig(MyNum,(x-1)/2,x)==1L))+1L]
LenFBase <- length(facBase)+1L
} else if (DigCount < 67) {
## These values were obtained from "The Multiple Polynomial
## Quadratic Sieve" by Robert D. Silverman
DigSize <- c(24,30,36,42,48,54,60,66)
FBSize <- c(100,200,400,900,1200,2000,3000,4500)
MSize <- c(5,25,25,50,100,250,350,500)
LM1 <- loess(FBSize ~ DigSize)
LM2 <- loess(MSize ~ DigSize)
if (fudge1==0L) {
fudge1 <- -0.4
LimB <- trunc(exp((.5+fudge1)*sqrt(log(MyNum)*log(log(MyNum)))))
myTarget <- ceiling(predict(LM1, DigCount))
while (LimB < myTarget) {
LimB <- trunc(exp((.5+fudge1)*sqrt(log(MyNum)*log(log(MyNum)))))
fudge1 <- fudge1+0.001
}
B <- P[P<=LimB]; B <- B[-1]
facBase <- P[which(sapply(B, function(x) ExpBySquaringBig(MyNum,(x-1)/2,x)==1L))+1L]
LenFBase <- length(facBase)+1L
while (LenFBase < myTarget) {
fudge1 <- fudge1+0.005
LimB <- trunc(exp((.5+fudge1)*sqrt(log(MyNum)*log(log(MyNum)))))
myind <- which(P==max(B))+1L
myset <- tempP <- P[myind]
while (tempP < LimB) {
myind <- myind + 1L
tempP <- P[myind]
myset <- c(myset, tempP)
}
for (p in myset) {
t <- ExpBySquaringBig(MyNum,(p-1)/2,p)==1L
if (t) {facBase <- c(facBase,p)}
}
B <- c(B, myset)
LenFBase <- length(facBase)+1L
}
} else {
LimB <- trunc(exp((.5+fudge1)*sqrt(log(MyNum)*log(log(MyNum)))))
B <- P[P<=LimB]; B <- B[-1]
facBase <- P[which(sapply(B, function(x) ExpBySquaringBig(MyNum,(x-1)/2,x)==1L))+1L]
LenFBase <- length(facBase)+1L
}
if (LenB==0L) {LenB <- 1000*ceiling(predict(LM2, DigCount))}
} else {
return("The number you've entered is currently too big for this algorithm!!")
}
SieveDist <- lapply(facBase, function(x) TonelliShanks(MyNum,x))
SieveDist <- c(1L,SieveDist); SieveDist[[1]] <- c(SieveDist[[1]],1L); facBase <- c(2L,facBase)
Lower <- -LenB; Upper <- LenB; LenB2 <- 2*LenB+1L; MyInterval <- Lower:Upper
M <- MyInterval + SqrtInt ## Set that will be tested
SqrDiff <- matrix(sub.bigz(pow.bigz(M,2),MyNum),nrow=length(M),ncol=1L)
maxM <- max(MyInterval)
LnFB <- log(facBase)
## N.B. primo uses 0.735, as his siever
## is more efficient than the one employed here
if (fudge2==0L) {
if (DigCount < 8) {
fudge2 <- 0
} else if (DigCount < 12) {
fudge2 <- .7
} else if (DigCount < 20) {
fudge2 <- 1.3
} else {
fudge2 <- 1.6
}
}
TheCut <- log10(maxM*sqrt(2*asNumeric(MyNum)))*fudge2
myPrimes <- as.bigz(facBase)
CoolList <- SieveLists(LenFBase, facBase, LenB2, SieveDist, MyInterval)
GetMatrix <- SieverMod(LenFBase, facBase, LenB2, SqrDiff, M, CoolList, LnFB, TheCut, LenFBase+1L)
if (GetMatrix[[4]]) {
newmat <- GetMatrix[[1]]; NewSD <- GetMatrix[[2]]; M <- GetMatrix[[3]]
NonSplitFacs <- which(abs(NewSD[,1L])>1L)
newmat <- newmat[-NonSplitFacs, ]
M <- M[-NonSplitFacs]
lenM <- length(M)
if (class(newmat) == "matrix") {
if (nrow(newmat) > 0) {
PosAns <- SolutionSearch(newmat,M,MyNum,myPrimes)
} else {
PosAns <- vector()
}
} else {
newmat <- matrix(newmat, nrow = 1)
PosAns <- vector()
}
} else {
newmat <- matrix(integer(0),ncol=(LenFBase+1L))
PosAns <- vector()
}
Atemp <- .mpfr2bigz(trunc(sqrt(sqrt(mpfr(2*MyNum))/maxM)))
if (Atemp < max(facBase)) {Atemp <- max(facBase)}; myPoly <- 0L
while (length(PosAns)==0L) {LegTest <- TRUE
while (LegTest) {
Atemp <- nextprime(Atemp)
Legendre <- asNumeric(ExpBySquaringBig(MyNum,(Atemp-1L)/2,Atemp))
if (Legendre == 1) {LegTest <- FALSE}
}
A <- Atemp^2
Btemp <- max(TonelliShanks(MyNum, Atemp))
B2 <- (Btemp + (MyNum - Btemp^2) * inv.bigz(2*Btemp,Atemp))%%A
C <- as.bigz((B2^2 - MyNum)/A)
myPoly <- myPoly + 1L
polySieveD <- lapply(1:LenFBase, function(x) {
AInv <- inv.bigz(A,facBase[x])
asNumeric(c(((SieveDist[[x]][1]-B2)*AInv)%%facBase[x],
((SieveDist[[x]][2]-B2)*AInv)%%facBase[x]))
})
M1 <- A*MyInterval + B2
SqrDiff <- matrix(A*pow.bigz(MyInterval,2) + 2*B2*MyInterval + C,nrow=length(M1),ncol=1L)
CoolList <- SieveLists(LenFBase, facBase, LenB2, polySieveD, MyInterval)
myPrimes <- c(myPrimes,Atemp)
LenP <- length(myPrimes)
GetMatrix <- SieverMod(LenFBase, facBase, LenB2, SqrDiff, M1, CoolList, LnFB, TheCut, LenP+1L)
if (GetMatrix[[4]]) {
n2mat <- GetMatrix[[1]]; N2SD <- GetMatrix[[2]]; M1 <- GetMatrix[[3]]
n2mat[,LenP+1L] <- rep(2L,nrow(N2SD))
if (length(N2SD) > 0) {NonSplitFacs <- which(abs(N2SD[,1L])>1L)} else {NonSplitFacs <- LenB2}
if (length(NonSplitFacs)<2*LenB) {
M1 <- M1[-NonSplitFacs]; lenM1 <- length(M1)
n2mat <- n2mat[-NonSplitFacs,]
if (lenM1==1L) {n2mat <- matrix(n2mat,nrow=1)}
if (ncol(newmat) < (LenP+1L)) {
numCol <- (LenP + 1L) - ncol(newmat)
newmat <- cbind(newmat,matrix(rep(0L,numCol*nrow(newmat)),ncol=numCol))
}
newmat <- rbind(newmat,n2mat); lenM <- lenM+lenM1; M <- c(M,M1)
if (class(newmat) == "matrix") {
if (nrow(newmat) > 0) {
PosAns <- SolutionSearch(newmat,M,MyNum,myPrimes)
}
}
}
}
}
EndTime <- Sys.time()
TotTime <- EndTime - BegTime
print(format(TotTime))
return(PosAns)
}
Using the old QS algorithm:
> library(gmp)
> library(Rmpfr)
> n3 <- prod(nextprime(urand.bigz(2, 40, 17)))
> system.time(t5 <- QuadSieveAll(n3,0.1,myps))
user system elapsed
164.72 0.77 165.63
> system.time(t6 <- factorize(n3))
user system elapsed
0.1 0.0 0.1
> all(t5[sort.list(asNumeric(t5))]==t6[sort.list(asNumeric(t6))])
[1] TRUE
Using the new Multiple Polynomial QS algorithm:
> QuadSieveMultiPolysAll(n3)
[1] "4.952 secs"
Big Integer ('bigz') object of length 2:
[1] 342086446909 483830424611
> n4 <- prod(nextprime(urand.bigz(2,50,5)))
> QuadSieveMultiPolysAll(n4) ## With old algo, it took over 4 hours
[1] "1.131717 mins"
Big Integer ('bigz') object of length 2:
[1] 166543958545561 880194119571287
> n5 <- as.bigz("94968915845307373740134800567566911") ## 35 digits
> QuadSieveMultiPolysAll(n5)
[1] "3.813167 mins"
Big Integer ('bigz') object of length 2:
[1] 216366620575959221 438925910071081891
> system.time(factorize(n5)) ## It appears we are reaching the limits of factorize
user system elapsed
131.97 0.00 131.98
Side note: the number n5 above is a very interesting number. Check it out here.
The breakthrough!!!!
> n6 <- factorialZ(38) + 1L ## 45 digits
> QuadSieveMultiPolysAll(n6)
[1] "22.79092 mins"
Big Integer ('bigz') object of length 2:
[1] 14029308060317546154181 37280713718589679646221
> system.time(factorize(n6)) ## Shut it down after 2 days of running
Latest triumph (50-digit number)
> n9 <- prod(nextprime(urand.bigz(2,82,42)))
> QuadSieveMultiPolysAll(n9)
[1] "12.9297 hours"
Big Integer ('bigz') object of length 2:
[1] 2128750292720207278230259 4721136619794898059404993
## Based off of some crude test, factorize(n9) would take more than a year.
It should be noted that the QS generally doesn't perform as well as Pollard's rho algorithm on smaller numbers; the power of the QS only starts to become apparent as the numbers get larger.
Below is my latest R factoring algorithm. It is much faster and pays homage to the rle function.
Algorithm 3 (Updated)
library(gmp)
MyFactors <- function(MyN) {
  myRle <- function (x1) {
    n1 <- length(x1)
    y1 <- x1[-1L] != x1[-n1]
    i <- c(which(y1), n1)
    list(lengths = diff(c(0L, i)), values = x1[i], uni = sum(y1)+1L)
  }
  if (MyN==1L) return(MyN)
  else {
    pfacs <- myRle(factorize(MyN))
    unip <- pfacs$values
    pv <- pfacs$lengths
    n <- pfacs$uni
    myf <- unip[1L]^(0L:pv[1L])
    if (n > 1L) {
      for (j in 2L:n) {
        myf <- c(myf, do.call(c,lapply(unip[j]^(1L:pv[j]), function(x) x*myf)))
      }
    }
  }
  myf[order(asNumeric(myf))] ## 'order' is faster than 'sort.list'
}
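A quick illustrative call of my own (the divisors of 100 are listed just for reference); MyFactors returns the divisors as a sorted bigz vector:
MyFactors(as.bigz(100))
Big Integer ('bigz') object of length 9:
[1] 1 2 4 5 10 20 25 50 100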
Below are the new benchmarks (as Dirk Eddelbuettel says, "Can't argue with empirics"):
Case 1 (large prime factors)
set.seed(100)
myList <- lapply(1:10^3, function(x) sample(10^6, 10^5))
benchmark(SortList=lapply(myList, function(x) sort.list(x)),
OrderFun=lapply(myList, function(x) order(x)),
replications=3,
columns = c("test", "replications", "elapsed", "relative"))
test replications elapsed relative
2 OrderFun 3 59.41 1.000
1 SortList 3 61.52 1.036
## The times are limited by "gmp::factorize" and since it relies on
## pseudo-random numbers, the times can vary (i.e. one pseudo random
## number may lead to a factorization faster than others). With this
## in mind, any differences less than a half of second
## (or so) should be viewed as the same.
x <- pow.bigz(2,256)+1
system.time(z1 <- MyFactors(x))
user system elapsed
14.94 0.00 14.94
system.time(z2 <- all_divisors(x)) ## system.time(factorize(x))
user system elapsed ## user system elapsed
14.94 0.00 14.96 ## 14.94 0.00 14.94
all(z1==z2)
[1] TRUE
x <- as.bigz("12345678987654321321")
system.time(x1 <- MyFactors(x^2))
user system elapsed
20.66 0.02 20.71
system.time(x2 <- all_divisors(x^2)) ## system.time(factorize(x^2))
user system elapsed ## user system elapsed
20.69 0.00 20.69 ## 20.67 0.00 20.67
all(x1==x2)
[1] TRUE
Case 2 (smaller numbers)
set.seed(199)
samp <- sample(10^9, 10^5)
benchmark(JosephDivs=sapply(samp, MyFactors),
DontasDivs=sapply(samp, all_divisors),
OldDontas=sapply(samp, Oldall_divisors),
replications=10,
columns = c("test", "replications", "elapsed", "relative"),
order = "relative")
test replications elapsed relative
1 JosephDivs 10 470.31 1.000
2 DontasDivs 10 567.10 1.206 ## with vapply(..., USE.NAMES = FALSE)
3 OldDontas 10 626.19 1.331 ## with sapply
Case 3 (for complete thoroughness)
set.seed(97)
samp <- sample(10^6, 10^4)
benchmark(JosephDivs=sapply(samp, MyFactors),
DontasDivs=sapply(samp, all_divisors),
CottonDivs=sapply(samp, get_all_factors),
ChaseDivs=sapply(samp, FUN),
replications=5,
columns = c("test", "replications", "elapsed", "relative"),
order = "relative")
test replications elapsed relative
1 JosephDivs 5 22.68 1.000
2 DontasDivs 5 27.66 1.220
3 CottonDivs 5 126.66 5.585
4 ChaseDivs 5 554.25 24.438
@RichieCotton's algorithm is a very nice R implementation. The brute-force method will only get you so far and fails with large numbers. I have provided three algorithms that meet different needs. The first one (the original algorithm I posted on Jan 15, since updated slightly) is a stand-alone factoring algorithm offering a combinatorial approach that is efficient, accurate, and easily translated into other languages. The second algorithm is more of a sieve: very fast and extremely useful when you need the factorizations of thousands of numbers quickly. The third is a short (posted above) yet powerful stand-alone algorithm that is superb for any number less than 2^70 (I scrapped almost everything from my original code). I drew inspiration from Richie Cotton's use of the plyr::count function (it inspired me to write my own rle function, whose return value is very similar to plyr::count), George Dontas's clean way of handling the trivial case (i.e. if (n==1) return(1)), and the solution @Zelazny7 provided for an issue I was having with bigz vectors.
Algorithm 1 (Original)
library(gmp)
factor2 <- function(MyN) {
  if (MyN == 1) return(1L)
  else {
    max_p_div <- factorize(MyN)
    prime_vec <- max_p_div <- max_p_div[sort.list(asNumeric(max_p_div))]
    my_factors <- powers <- as.bigz(vector())
    uni_p <- unique(prime_vec); maxp <- max(prime_vec)
    for (i in 1:length(uni_p)) {
      temp_size <- length(which(prime_vec == uni_p[i]))
      powers <- c(powers, pow.bigz(uni_p[i], 1:temp_size))
    }
    my_factors <- c(as.bigz(1L), my_factors, powers)
    temp_facs <- powers; r <- 2L
    temp_facs2 <- max_p_div2 <- as.bigz(vector())
    while (r <= length(uni_p)) {
      for (i in 1:length(temp_facs)) {
        a <- which(prime_vec > max_p_div[i])
        temp <- mul.bigz(temp_facs[i], powers[a])
        temp_facs2 <- c(temp_facs2, temp)
        max_p_div2 <- c(max_p_div2, prime_vec[a])
      }
      my_sort <- sort.list(asNumeric(max_p_div2))
      temp_facs <- temp_facs2[my_sort]
      max_p_div <- max_p_div2[my_sort]
      my_factors <- c(my_factors, temp_facs)
      temp_facs2 <- max_p_div2 <- as.bigz(vector()); r <- r+1L
    }
  }
  my_factors[sort.list(asNumeric(my_factors))]
}
Algorithm 2 (Sieve)
EfficientFactorList <- function(n) {
  MyFactsList <- lapply(1:n, function(x) 1)
  for (j in 2:n) {
    for (r in seq.int(j, n, j)) {MyFactsList[[r]] <- c(MyFactsList[[r]], j)}
  }
  MyFactsList
}
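A quick illustrative check of my own: the r-th element of the returned list holds all of the divisors of r.
facs <- EfficientFactorList(20)
facs[[12]]
[1]  1  2  3  4  6 12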
It factors every number between 1 and 100,000 in under 2 seconds. To give you an idea of how efficient this algorithm is, factoring 1 - 100,000 with the brute-force method takes nearly 3 minutes.
system.time(t1 <- EfficientFactorList(10^5))
user system elapsed
1.04 0.00 1.05
system.time(t2 <- sapply(1:10^5, MyFactors))
user system elapsed
39.21 0.00 39.23
system.time(t3 <- sapply(1:10^5, all_divisors))
user system elapsed
49.03 0.02 49.05
TheTest <- sapply(1:10^5, function(x) all(t2[[x]]==t3[[x]]) && all(asNumeric(t2[[x]])==t1[[x]]) && all(asNumeric(t3[[x]])==t1[[x]]))
all(TheTest)
[1] TRUE
Closing thoughts
@Dontas's original comment about factoring large numbers got me thinking, what about really, really big numbers... like numbers greater than 2^200? You will see that whichever algorithm you choose on this page, they will all take a very long time, because most of them rely on gmp::factorize, which uses the Pollard-Rho algorithm. From this question, that algorithm is only reasonable for numbers less than 2^70. I am currently working on my own factoring algorithm that implements the Quadratic Sieve, which should take all of these algorithms to the next level.
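For reference, here is a minimal sketch of the Pollard-Rho idea that gmp::factorize relies on (my own illustration, not the gmp implementation, and missing the Brent-style refinements and retry logic a production version needs):
library(gmp)
PollardRho <- function(n, c = 1) {
  n <- as.bigz(n)
  f <- function(z) mod.bigz(z * z + c, n)   # pseudo-random iteration x -> x^2 + c mod n
  x <- as.bigz(2); y <- as.bigz(2); d <- as.bigz(1)
  while (d == 1) {
    x <- f(x)                               # tortoise: one step
    y <- f(f(y))                            # hare: two steps
    d <- gcd.bigz(abs(x - y), n)
  }
  if (d == n) NULL else d                   # NULL signals failure; retry with another 'c'
}
PollardRho(8051)   # returns 97 (8051 = 83 * 97)
Its running time grows roughly like the square root of the smallest prime factor, which is why it stalls on the large semiprimes that the Quadratic Sieve above is aimed at.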
A lot has changed in the R language since this question was originally asked. In version 0.6-3 of the numbers package, the function divisors was included, and it is very useful for getting all of the factors of a number. It will meet the needs of most users, but if you are looking for raw speed or you are working with larger numbers, you will need an alternative method. I have authored two new packages (partially inspired by this question, I might add) that contain highly optimized functions aimed at problems just like this. The first one is RcppAlgos and the other is RcppBigIntAlgos (formerly called bigIntegerAlgos).
RcppAlgos
RcppAlgos contains two functions for obtaining divisors of numbers less than 2^53 - 1: divisorsRcpp (a vectorized function for quickly obtaining the complete factorization of many numbers) and divisorsSieve (quickly generates the complete factorization over a range). First, we factor many random numbers using divisorsRcpp:
library(gmp) ## for all_divisors by @GeorgeDontas
library(RcppAlgos)
library(numbers)
options(scipen = 999)
set.seed(42)
testSamp <- sample(10^10, 10)
## vectorized so you can pass the entire vector as an argument
testRcpp <- divisorsRcpp(testSamp)
testDontas <- lapply(testSamp, all_divisors)
identical(lapply(testDontas, as.numeric), testRcpp)
[1] TRUE
Now, we factor many numbers over a range using divisorsSieve:
system.time(testSieve <- divisorsSieve(10^13, 10^13 + 10^5))
user system elapsed
0.242 0.006 0.247
system.time(testDontasSieve <- lapply((10^13):(10^13 + 10^5), all_divisors))
user system elapsed
47.880 0.132 47.922
identical(lapply(testDontasSieve, asNumeric), testSieve)
[1] TRUE
Both divisorsRcpp and divisorsSieve are nice, flexible, and efficient functions, however they are limited to 2^53 - 1.
RcppBigIntAlgos
The RcppBigIntAlgos package (formerly called bigIntegerAlgos prior to version 0.2.0) links directly to the C library gmp and features divisorsBig, which is designed for very large numbers.
library(RcppBigIntAlgos)
## testSamp is defined above... N.B. divisorsBig is not quite as
## efficient as divisorsRcpp. This is so because divisorsRcpp
## can take advantage of more efficient data types.
testBig <- divisorsBig(testSamp)
identical(testDontas, testBig)
[1] TRUE
Here are the benchmarks defined in my original post (N.B. MyFactors is replaced by divisorsRcpp and divisorsBig).
## Case 2
library(rbenchmark)
set.seed(199)
samp <- sample(10^9, 10^5)
benchmark(RcppAlgos=divisorsRcpp(samp),
RcppBigIntAlgos=divisorsBig(samp),
DontasDivs=lapply(samp, all_divisors),
replications=10,
columns = c("test", "replications", "elapsed", "relative"),
order = "relative")
test replications elapsed relative
1 RcppAlgos 10 5.236 1.000
2 RcppBigIntAlgos 10 12.846 2.453
3 DontasDivs 10 383.742 73.289
## Case 3
set.seed(97)
samp <- sample(10^6, 10^4)
benchmark(RcppAlgos=divisorsRcpp(samp),
RcppBigIntAlgos=divisorsBig(samp),
numbers=lapply(samp, divisors), ## From the numbers package
DontasDivs=lapply(samp, all_divisors),
CottonDivs=lapply(samp, get_all_factors),
ChaseDivs=lapply(samp, FUN),
replications=5,
columns = c("test", "replications", "elapsed", "relative"),
order = "relative")
test replications elapsed relative
1 RcppAlgos 5 0.083 1.000
2 RcppBigIntAlgos 5 0.265 3.193
3 numbers 5 12.913 155.578
4 DontasDivs 5 15.813 190.518
5 CottonDivs 5 60.745 731.867
6 ChaseDivs 5 299.520 3608.675
The next benchmark demonstrates the true power of the underlying algorithm in divisorsBig. The number being factored is a power of 10, so the prime factoring step can almost be completely ignored (e.g. system.time(factorize(pow.bigz(10,30))) registers 0 on my machine). Thus, the difference in timing is due entirely to how quickly the prime factors can be combined to produce all of the factors.
library(microbenchmark)
powTen <- pow.bigz(10, 30)
microbenchmark(divisorsBig(powTen), all_divisors(powTen), unit = "relative")
Unit: relative
expr min lq mean median uq max neval
divisorsBig(powTen) 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 100
all_divisors(powTen) 21.49849 21.27973 21.13085 20.63345 21.18834 20.38772 100
## Negative numbers show an even greater increase in efficiency
negPowTen <- powTen * -1
microbenchmark(divisorsBig(negPowTen), all_divisors(negPowTen), unit = "relative")
Unit: relative
expr min lq mean median uq max neval
divisorsBig(negPowTen) 1.00000 1.0000 1.0000 1.00000 1.00000 1.00000 100
all_divisors(negPowTen) 28.75275 28.1864 27.9335 27.57434 27.91376 30.16962 100
With divisorsBig, obtaining the complete factorization with very large inputs is no problem. The algorithm dynamically adjusts based on the input and applies different algorithms in different situations. We can also take advantage of multithreading if Lenstra's Elliptic Curve Method or the Quadratic Sieve is utilized.
Here are some examples using n5 and n9 as defined in this answer.
n5 <- as.bigz("94968915845307373740134800567566911")
system.time(print(divisorsBig(n5)))
Big Integer ('bigz') object of length 4:
[1] 1 216366620575959221 438925910071081891
[4] 94968915845307373740134800567566911
user system elapsed
0.162 0.003 0.164
n9 <- prod(nextprime(urand.bigz(2, 82, 42)))
system.time(print(divisorsBig(n9, nThreads=4)))
Big Integer ('bigz') object of length 4:
[1] 1
[2] 2128750292720207278230259
[3] 4721136619794898059404993
[4] 10050120961360479179164300841596861740399588283187
user system elapsed
1.776 0.011 0.757
And here is an example provided by @Dontas, with one large prime and one smaller prime:
x <- pow.bigz(2, 256) + 1
divisorsBig(x, showStats=TRUE, nThreads=8)
Summary Statistics for Factoring:
115792089237316195423570985008687907853269984665640564039457584007913129639937
| Pollard Rho Time |
|--------------------|
| 479ms |
| Lenstra ECM Time | Number of Curves |
|--------------------|--------------------|
| 1s 870ms | 2584 |
| Total Time |
|--------------------|
| 2s 402ms |
Big Integer ('bigz') object of length 4:
[1] 1
[2] 1238926361552897
[3] 93461639715357977769163558199606896584051237541638188580280321
[4] 115792089237316195423570985008687907853269984665640564039457584007913129639937
Compare this to finding the prime factorization with gmp::factorize:
system.time(factorize(x))
user system elapsed
9.199 0.036 9.248
Lastly, here is an example with a large semiprime (N.B. since we know it is a semiprime, we skip the extended Pollard's rho algorithm as well as Lenstra's Elliptic Curve Method).
## https://members.loria.fr/PZimmermann/records/rsa.html
rsa79 <- as.bigz("7293469445285646172092483905177589838606665884410340391954917800303813280275279")
divisorsBig(rsa79, nThreads=8, showStats=TRUE, skipPolRho=T, skipECM=T)
Summary Statistics for Factoring:
7293469445285646172092483905177589838606665884410340391954917800303813280275279
| MPQS Time | Complete | Polynomials | Smooths | Partials |
|--------------------|----------|-------------|------------|------------|
| 2m 49s 174ms | 100 | 91221 | 5651 | 7096 |
| Mat Algebra Time | Mat Dimension |
|--------------------|--------------------|
| 14s 863ms | 12625 x 12747 |
| Total Time |
|--------------------|
| 3m 4s 754ms |
Big Integer ('bigz') object of length 4:
[1] 1
[2] 848184382919488993608481009313734808977
[3] 8598919753958678882400042972133646037727
[4] 7293469445285646172092483905177589838606665884410340391954917800303813280275279
The following approach delivers correct results even in the case of very big numbers (which should be passed as strings), and it is really fast.
# TEST
# x <- as.bigz("12345678987654321")
# all_divisors(x)
# all_divisors(x*x)
# x <- pow.bigz(2,89)-1
# all_divisors(x)
library(gmp)
options(scipen =30)
sort_listz <- function(z) {
  #==========================
  z <- z[order(as.numeric(z))] # sort(z)
} # function sort_listz
mult_listz <- function(x,y) {
  do.call('c', lapply(y, function(i) i*x))
}
all_divisors <- function(x) {
  #==========================
  if (abs(x)<=1) return(x)
  else {
    factorsz <- as.bigz(factorize(as.bigz(x))) # factorize returns the prime factors,
    # e.g. x= 12345678987654321 factors: 3 3 3 3 37 37 333667 333667
    factorsz <- sort_listz(factorsz) # vector of primes, sorted
    prime_factorsz <- unique(factorsz)
    #prime_ekt <- sapply(prime_factorsz, function(i) length( factorsz [factorsz==i]))
    prime_ekt <- vapply(prime_factorsz, function(i) sum(factorsz==i), integer(1), USE.NAMES=FALSE)
    spz <- vector() # keep all divisors
    all <- 1
    n <- length(prime_factorsz)
    for (i in 1:n) {
      pr <- prime_factorsz[i]
      pe <- prime_ekt[i]
      all <- all*(pe+1) # counts all divisors
      prz <- as.bigz(pr)
      pse <- vector(mode="raw",length=pe+1)
      pse <- c( as.bigz(1), prz)
      if (pe>1) {
        for (k in 2:pe) {
          prz <- prz*pr
          pse[k+1] <- prz
        } # for k
      } # if pe>1
      if (i>1) {
        spz <- mult_listz (spz, pse)
      } else {
        spz <- pse;
      } # if i>1
    } # for n
    spz <- sort_listz (spz)
    return (spz)
  }
} # function all_divisors
#====================================
Refined version: it is very fast, and the code is kept simple, readable, and clean.
Tests
#Test 4 (big prime factor)
x <- pow.bigz(2,256)+1 # = 1238926361552897 * 93461639715357977769163558199606896584051237541638188580280321
system.time(z2 <- all_divisors(x))
# user system elapsed
# 19.27 1.27 20.56
#Test 5 (big prime factor)
x <- as.bigz("12345678987654321321") # = 3 * 19 * 216590859432531953
system.time(x2 <- all_divisors(x^2))
#user system elapsed
#25.65 0.00 25.67