It is a rather broad question, but I replicated an Excel model in R. R produces almost the same results as Excel, but there is always a slight percentage deviation on the order of 10^-8. I assume this is due to numerical precision. The model uses only basic arithmetic operations.
My question would be: is there a simple way to force R to use the same numerical precision as Excel?
Probably not. R always uses double-precision (i.e., 8-byte, 64-bit IEEE 754) floating point, and it seems that's also what Excel uses.
It is indeed mildly surprising that the results are not identical if you're only doing simple arithmetic, but note that even changing the sequence of arithmetic operations can change the results. Floating-point arithmetic is not associative:
`(a+b)+c != a+(b+c)` in general, so if arithmetic is done left-to-right,
`a+b+c` and `c+b+a` can differ. It's probably impossible to know exactly what Excel is doing under the hood, although this (which may only apply to Excel 2003) lists points at which Excel deviates from the IEEE 754 standard used by R and most other numerical software.
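You can see the non-associativity directly in R (the particular values here are arbitrary, just chosen to expose the rounding):

```r
# Floating-point addition is not associative: grouping changes the result.
a <- 0.1; b <- 0.2; c <- 0.3
left  <- (a + b) + c
right <- a + (b + c)
left == right   # FALSE
left - right    # on the order of 1e-16 (one ulp at this magnitude)
```

So two computations that are algebraically identical can legitimately differ in the last bit or two, and those differences can accumulate over many operations to the 10^-8 level you're seeing.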
You might be able to get exact equality by working up from simple to more complex examples, seeing where the divergence first occurs, and rearranging your computations to avoid the problem. But unless exact equality is extremely important to you (or worth investing a lot of effort), you may be better off testing for approximate equality.
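In R, the standard tool for approximate comparison is `all.equal()`, whose default tolerance of about 1.5e-8 (the square root of the machine epsilon) happens to be the same order as the deviation you describe:

```r
x <- 1.0
y <- 1.0 + 1e-10   # differs from x by far less than the default tolerance

x == y                                       # FALSE: exact comparison fails
isTRUE(all.equal(x, y))                      # TRUE: within default tolerance
isTRUE(all.equal(x, y, tolerance = 1e-12))   # FALSE: stricter tolerance
```

Wrapping `all.equal()` in `isTRUE()` is the recommended idiom, because on a mismatch `all.equal()` returns a descriptive character string rather than `FALSE`.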
It's conceivable that export to CSV is losing precision (e.g. see here).
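One way this happens (if I remember the default behavior correctly): R's default character conversion for doubles uses 15 significant digits, which is not quite enough to round-trip a 64-bit double exactly (that needs up to 17), so values written out via that path can lose the last bit or two:

```r
x <- 0.1 + 0.2
x == 0.3              # FALSE: x is not exactly 0.3 ...
as.character(x)       # "0.3": the 15-digit default hides the difference
sprintf("%.17g", x)   # 17 significant digits show the true stored value
```

If you suspect the CSV export, compare against a full-precision export (e.g. formatting with `%.17g`) or a binary format instead.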