Runtime vs. compile time (JIT vs. AOT) optimizations in Java and C++
Ionuț Baloșin is a Software Architect at Luxoft with 10+ years of experience in a wide variety of business applications. He is particularly interested in software architecture and performance tuning topics. He is a regular speaker at conferences (e.g. GeeCon, JokerConf, XP Days, Voxxed, Bucharest Java User Group, Logeek, SoftLabs, DevTalks, Agile Tour) and an occasional technical writer (InfoQ, DZone, etc.).
During this talk I present my research study on several runtime optimizations performed by the Just-In-Time (JIT) compiler inside HotSpot/OpenJDK versus similar optimizations triggered in an ahead-of-time manner by LLVM Clang for C++.
The talk reveals how the Just-In-Time compiler (i.e. JIT C2) from HotSpot/OpenJDK internally manages runtime optimizations for hot methods, in comparison to the ahead-of-time approach taken by LLVM Clang on equivalent C++ source code, emphasizing the internals and strategies each compiler uses to achieve better performance. For each optimization there is equivalent Java and C++ source code together with the corresponding generated assembly, in order to show what really happens under the hood. Each test is covered by a dedicated benchmark in each language, followed by conclusions.

Main topics of the agenda:
– Different sequential sums (e.g. N-element array, N integers, two arrays, etc.)
– Loop unrolling, loop peeling
– Object field layout
– Null checks
– Uncommon traps
– Lock coarsening
– Lock elision
– Virtual calls
– Scalar replacement
– Concurrency implications (e.g. memory access optimizations that are prevented when inlining does not happen)
– …

The tools used during our research study are JITWatch, the Java Microbenchmark Harness (JMH), C++ Google Benchmark, and perf. All test scenarios are launched against the latest official Java release (9.0.1) and a recent LLVM Clang version (5.0.0).
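To give a flavor of the kind of pattern the talk examines, here is a minimal Java sketch (illustrative names of my own, not taken from the talk material) of a scalar-replacement candidate: a short-lived object that never escapes its method, so HotSpot's C2 compiler may eliminate the allocation entirely after escape analysis and keep the fields in registers.

```java
public class ScalarReplacementDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // The Point allocated here is only read locally and never escapes,
    // so with escape analysis (on by default in HotSpot) C2 can replace
    // the object with its scalar fields x and y, removing the allocation.
    static long distanceSquared(int x, int y) {
        Point p = new Point(x, y);
        return (long) p.x * p.x + (long) p.y * p.y;
    }

    public static void main(String[] args) {
        long sum = 0;
        // A hot loop like this is what triggers C2 compilation; inside the
        // compiled code the per-iteration Point allocation may vanish.
        for (int i = 0; i < 1_000_000; i++) {
            sum += distanceSquared(i, i + 1);
        }
        System.out.println(sum);
    }
}
```

Whether the allocation is actually eliminated can be verified with the tools the talk uses (e.g. inspecting the generated assembly via JITWatch, or measuring allocation rates in a JMH benchmark).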