I am writing a class to model big integers. I store the digits as unsigned ints in a vector that a member pointer called data points to. The function below adds n to the current big integer. What seems to be hurting my performance is having to use long long values: I currently have to make sum a long long or else the addition will overflow. Does anyone know a way around this?
void Integer::u_add(const Integer & n)
{
    std::vector<unsigned int> & tRef = *data;
    const std::vector<unsigned int> & nRef = *n.data;
    const int thisSize = tRef.size();
    const int nSize = nRef.size();
    int carry = 0;
    for(int i = 0; i < nSize || carry; ++i)
    {
        bool readThis = i < thisSize;
        // sum has to be wider than unsigned int, otherwise the digit addition can wrap
        long long sum = (readThis ? (long long)tRef[i] + carry : (long long)carry)
                      + (i < nSize ? nRef[i] : 0);
        if(readThis)
            tRef[i] = sum % BASE; // BASE is 2^32
        else
            tRef.push_back(sum % BASE); // grow this number when n has more digits
        carry = (sum >= BASE ? 1 : 0);
    }
}
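To show concretely what I mean by the overflow, here is a small standalone sketch (independent of my Integer class, and assuming unsigned int is 32 bits like my digits are): adding two 32-bit digits plus a carry can exceed 2^32 - 1, and a plain unsigned int sum would silently wrap and lose the carry.

#include <iostream>

int main()
{
    unsigned int a = 0xFFFFFFFFu; // largest possible 32-bit digit
    unsigned int b = 0xFFFFFFFFu;

    unsigned int narrow = a + b;       // wraps modulo 2^32, the carry is lost
    long long wide = (long long)a + b; // holds the full result, the carry survives

    std::cout << narrow << "\n";       // prints 4294967294
    std::cout << wide << "\n";         // prints 8589934590
    std::cout << (wide >> 32) << "\n"; // prints 1, the carry digit
}

That lost carry is exactly why I widen sum to long long in u_add above, which is the part I'd like to avoid.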
Also, I'm just wondering if there is any benefit to binding a reference to the vector the pointer points to, rather than using the pointer directly. That is, should I access the data as tRef[i] or as (*data)[i]?
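For context, this is roughly the pattern I mean, simplified and without the rest of my Integer class: the reference is just another name for the vector that data points to, so both expressions name the same element.

#include <vector>
#include <iostream>

int main()
{
    // Simplified stand-in for my member: a vector owned through a pointer.
    std::vector<unsigned int>* data = new std::vector<unsigned int>{1u, 2u, 3u};

    // The reference is simply an alias for *data.
    std::vector<unsigned int>& tRef = *data;

    std::cout << tRef[1] << " " << (*data)[1] << "\n"; // both print 2

    delete data;
}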