I have a program that runs for over 1,000,000 iterations to simulate server load. The arrival rate of requests is a variable. For example, if the arrival rate is 2, it means that in every 2 iterations, 1 request should come in, which would generate "about" 500,000 requests by the end of the simulation, and so on. I can't just introduce a new request at every n-th iteration based on the arrival rate; there has to be an element of chance.
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

/* Returns a uniformly distributed random number in [min_num, max_num]. */
int random_number(int min_num, int max_num){
    int result = 0, low_num = 0, hi_num = 0;
    if(min_num < max_num){
        low_num = min_num;
        hi_num = max_num + 1; /* +1 so that max_num is included in the output */
    }else{
        low_num = max_num + 1; /* +1 so that max_num is included in the output */
        hi_num = min_num;
    }
    result = (rand() % (hi_num - low_num)) + low_num;
    return result;
}

int main(){
    srand(time(NULL));
    unsigned int arrivalRate = 2;
    unsigned int noOfRequests = 0;
    unsigned int timer;
    for(timer = 0; timer < 1000000; timer++){
        /* gives a random number between 0 and arrivalRate, inclusive */
        int x = random_number(0, arrivalRate);
        /* there is a new request */
        if(x <= 1){
            noOfRequests++;
        }
    }
    printf("No of requests: %u\n", noOfRequests);
    return 0;
}
So if I run this code with an arrival rate of 2, it generates roughly 600,000 requests when it should only be about 500,000 (±1,000 is tolerable). It produces more requests than expected. How can I improve my code to produce a more reasonable result?
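A likely explanation (my reading of the code above, not something stated in the original post): when arrivalRate is 2, random_number(0, arrivalRate) returns 0, 1, or 2 with equal probability, and the check x <= 1 accepts two of those three outcomes, so each iteration fires with probability 2/3 rather than 1/2, which inflates the count well above 500,000. Below is a minimal sketch of a Bernoulli-trial loop in which each iteration fires with probability exactly 1/arrivalRate (rand() % arrivalRate carries a slight modulo bias, negligible for small rates):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void){
    srand(time(NULL));
    unsigned int arrivalRate = 2;
    unsigned int noOfRequests = 0;
    unsigned int timer;
    for(timer = 0; timer < 1000000; timer++){
        /* rand() % arrivalRate is (nearly) uniform over 0 .. arrivalRate-1,
           so exactly one outcome out of arrivalRate triggers a request:
           probability 1/arrivalRate per iteration */
        if(rand() % arrivalRate == 0){
            noOfRequests++;
        }
    }
    printf("No of requests: %u\n", noOfRequests);
    return 0;
}

With arrivalRate = 2 the count is Binomial(1,000,000, 1/2), whose standard deviation is 500, so a run should land within ±1,000 of 500,000 roughly 95% of the time, matching the stated tolerance.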