I wrote a simple benchmark that tests the performance of multiplying doubles vs. BigDecimal. Is my method correct? I use randomized values because the compiler constant-folds products of constants (e.g. Math.PI * Math.E).
But:
- I don't know whether generating random numbers inside the timed loop distorts the result.
- The same goes for creating new BigDecimal objects inside the timed loop.
I want to measure only the performance of the multiplication itself (not the time spent constructing the objects). How can I do that?
import java.math.*;
import java.util.*;

public class DoubleVsBigDecimal
{
    public static void main(String[] args)
    {
        Random rnd = new Random();
        long t1, t2, t3;
        double t;

        // Time 1,000,000 double multiplications.
        t1 = System.nanoTime();
        for (int i = 0; i < 1000000; i++)
        {
            double d1 = rnd.nextDouble();
            double d2 = rnd.nextDouble();
            t = d1 * d2;
        }
        t2 = System.nanoTime();

        // Time 1,000,000 BigDecimal multiplications.
        for (int i = 0; i < 1000000; i++)
        {
            BigDecimal bd1 = BigDecimal.valueOf(rnd.nextDouble());
            BigDecimal bd2 = BigDecimal.valueOf(rnd.nextDouble());
            bd1.multiply(bd2);
        }
        t3 = System.nanoTime();

        System.out.println(String.format("%f", (t2 - t1) / 1e9)); // double loop, seconds
        System.out.println(String.format("%f", (t3 - t2) / 1e9)); // BigDecimal loop, seconds
        System.out.println(String.format("%f", (double)(t3 - t2) / (double)(t2 - t1))); // ratio
    }
}
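One common way to keep random-number generation and BigDecimal construction out of the measurement (a sketch of the idea, not the only approach) is to pre-fill arrays before starting the clock, and to accumulate the products into a checksum that gets printed, so the JIT cannot discard the multiplications as dead code. The class name, array size, and helper-method names below are my own choices, not anything from the original code:

```java
import java.math.BigDecimal;
import java.util.Random;

public class MultiplyOnlyBenchmark {
    // Multiply element-wise and accumulate, so the work is observably used
    // and cannot be eliminated by the JIT.
    static double sumDoubleProducts(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
        return sum;
    }

    static BigDecimal sumBigDecimalProducts(BigDecimal[] a, BigDecimal[] b) {
        BigDecimal sum = BigDecimal.ZERO;
        for (int i = 0; i < a.length; i++) sum = sum.add(a[i].multiply(b[i]));
        return sum;
    }

    public static void main(String[] args) {
        final int n = 1_000_000;
        Random rnd = new Random(42); // fixed seed: same inputs on every run

        // All setup happens outside the timed region:
        // random values AND the BigDecimal objects built from them.
        double[] d1 = new double[n], d2 = new double[n];
        BigDecimal[] b1 = new BigDecimal[n], b2 = new BigDecimal[n];
        for (int i = 0; i < n; i++) {
            d1[i] = rnd.nextDouble();
            d2[i] = rnd.nextDouble();
            b1[i] = BigDecimal.valueOf(d1[i]);
            b2[i] = BigDecimal.valueOf(d2[i]);
        }

        long t1 = System.nanoTime();
        double dSum = sumDoubleProducts(d1, d2);
        long t2 = System.nanoTime();
        BigDecimal bSum = sumBigDecimalProducts(b1, b2);
        long t3 = System.nanoTime();

        // Printing the checksums forces the results to be live.
        System.out.println("double:     " + (t2 - t1) / 1e9 + " s (sum=" + dSum + ")");
        System.out.println("BigDecimal: " + (t3 - t2) / 1e9 + " s (sum=" + bSum + ")");
    }
}
```

Note the caveats: the BigDecimal loop still includes the `sum.add(...)` call, so it measures multiply-plus-add rather than multiply alone, and a hand-rolled timer like this does no JVM warm-up. For serious measurements a harness such as JMH handles warm-up iterations and dead-code elimination for you.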