Here is some sample code:
public class TestIO {
    public static void main(String[] str) {
        TestIO t = new TestIO();
        t.fOne();
        t.fTwo();
        t.fOne();
        t.fTwo();
    }

    public void fOne() {
        long t1, t2;
        t1 = System.nanoTime();
        int i = 10;
        int j = 10;
        int k = j * i;
        System.out.println(k);
        t2 = System.nanoTime();
        System.out.println("Time taken by 'fOne' ... " + (t2 - t1));
    }

    public void fTwo() {
        long t1, t2;
        t1 = System.nanoTime();
        int i = 10;
        int j = 10;
        int k = j * i;
        System.out.println(k);
        t2 = System.nanoTime();
        System.out.println("Time taken by 'fTwo' ... " + (t2 - t1));
    }
}
This gives the following output:
100
Time taken by 'fOne' ... 390273
100
Time taken by 'fTwo' ... 118451
100
Time taken by 'fOne' ... 53359
100
Time taken by 'fTwo' ... 115936
Press any key to continue . . .
Why does it take significantly more time to execute the same method the first time than on subsequent calls?
I tried giving -XX:CompileThreshold=1000000 to the command line, but there was no difference.
There are several reasons. The JIT (just-in-time) compiler may not have compiled the method yet, so the first calls run interpreted; you can watch compilation happen by running with -XX:+PrintCompilation. Class loading and initialization also happen on first use. The JVM can apply different optimizations between invocations. You're measuring wall-clock time, so something other than your Java process may be running on the machine. Finally, the processor and memory caches are probably "warm" on subsequent invocations.
You really need to make many invocations (in the thousands) and average them to get a meaningful per-method execution time.
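A minimal sketch of that approach: warm the method up with many untimed calls so the JIT has a chance to compile it, then time a large batch and report the average. The class and method names here are illustrative, not from your code; the work being timed is the same arithmetic as in fOne/fTwo, with the println removed so you measure the computation rather than console I/O.

```java
// Warm-up-then-average micro-benchmark sketch (names are illustrative).
public class WarmupBench {

    // The work under test: same arithmetic as fOne/fTwo, without the println.
    static int work() {
        int i = 10;
        int j = 10;
        return j * i;
    }

    // Average nanoseconds per call over n invocations.
    static double averageNanos(int n) {
        long start = System.nanoTime();
        int sink = 0;
        for (int c = 0; c < n; c++) {
            sink += work(); // accumulate the result so the JIT cannot discard the calls
        }
        long elapsed = System.nanoTime() - start;
        if (sink == Integer.MIN_VALUE) {
            System.out.println(sink); // keeps 'sink' observably live; never actually prints
        }
        return (double) elapsed / n;
    }

    public static void main(String[] args) {
        averageNanos(1_000_000); // warm-up: discard the result, let the JIT compile work()
        System.out.printf("avg ns/call: %.2f%n", averageNanos(1_000_000));
    }
}
```

Even this is only a rough measurement; for anything serious, a harness that handles warm-up, dead-code elimination, and statistical variance for you (such as JMH) is the better tool.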