I am interested in checking whether a sample, say A (n = 25), is uniformly distributed. Here is how I'd check that in Python:
import scipy.stats as ss
A=[9,9,9,4,9,6,7,8,9,4,5,2,4,9,6,7,3,4,2,4,5,6,8,9,9]
ss.kstest(A,'uniform', args=(min(A),max(A)), N=25)
This returns (0.22222222222222221, 0.14499771178796239), that is, with a p-value of ~0.15 the test can't reject that the sample A comes from a uniform distribution.
And here is how I calculate the same thing in R:
A=c(9,9,9,4,9,6,7,8,9,4,5,2,4,9,6,7,3,4,2,4,5,6,8,9,9)
ks.test(A,punif,min(A),max(A))
The result: D = 0.32, p-value = 0.01195. With R, one would reject the null hypothesis at the usual 0.05 significance level (!!!)
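To see where the D values come from, here is a minimal manual computation of the two-sided KS statistic against a Uniform(min(A), max(A)) CDF (a sketch; it assumes the standard definition D = max(D+, D-), comparing the empirical CDF just after and just before each sorted sample point):

```python
# Manual two-sided KS statistic for sample A against Uniform(min(A), max(A)).
A = [9,9,9,4,9,6,7,8,9,4,5,2,4,9,6,7,3,4,2,4,5,6,8,9,9]
n = len(A)
xs = sorted(A)
lo, hi = min(A), max(A)

def cdf(x):
    # Uniform CDF on [lo, hi], i.e. what R's punif(x, min(A), max(A)) computes
    return (x - lo) / (hi - lo)

# D+ compares the ECDF just after each point; D- just before it.
d_plus  = max((i + 1) / n - cdf(x) for i, x in enumerate(xs))
d_minus = max(cdf(x) - i / n       for i, x in enumerate(xs))
D = max(d_plus, d_minus)
print(D)  # ~0.32, matching R's reported D
```

This manual D matches R's 0.32, not Python's 0.222, so the two calls are apparently not testing against the same reference distribution.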
If I read the documentation correctly, both functions perform a two-sided test by default. I also understand that the KS test is mainly intended for continuous distributions, but can that explain the contrasting results from Python and R? Or am I making some flagrant mistake in the syntax?
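One candidate syntax issue I am wondering about: scipy's `uniform` is parameterized by `loc` and `scale`, with support `[loc, loc + scale]`, whereas R's `punif` takes `min` and `max` directly. If that is the mismatch, then matching R would require passing the range, not the maximum, as the second argument (a sketch, assuming that reading of the parameterization):

```python
import scipy.stats as ss

A = [9,9,9,4,9,6,7,8,9,4,5,2,4,9,6,7,3,4,2,4,5,6,8,9,9]
# scipy's uniform(loc, scale) has support [loc, loc + scale], so matching
# R's punif(x, min(A), max(A)) would need scale = max(A) - min(A):
stat, p = ss.kstest(A, 'uniform', args=(min(A), max(A) - min(A)))
print(stat, p)
```

With `args=(min(A), max(A))` as in my original call, the reference distribution would instead be uniform on [2, 11], which might account for the smaller D reported by Python.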