Which is very interesting because:
- Someone had at least heard of our efforts, so that's a win.
- But they had come away with the impression that it was still in an early phase.
When people talk about a 'JIT' colloquially, especially in the context of dynamic languages, they usually mean a system that has both dynamic specialization functionality and machine-code emitting functionality. MoarVM has had support for both since 2014. Historically we've called the specializer 'spesh' and the machine code emitter 'jit'. For communicating with the rest of the world, it is perhaps better to call both parts 'the JIT', with spesh as the frontend and the machine code emitter as the backend.
Without further ado, let me list the history of the MoarVM JIT compiler:
- The April 2014 release of MoarVM introduced the dynamic specialization framework spesh (the 'frontend').
- In June 2014, I started working in earnest on the machine code emitter (the part that we call the JIT), and had compiled the first function halfway through that month.
- In July 2014 we saw the introduction of inlining and on-stack-replacement in spesh.
- July 2014 also saw the introduction of the 'invokish' mechanism, by which control flow can be returned to the interpreter.
- In August 2014 the JIT backend was completed and merged into MoarVM master.
- In June 2015, I started work on a new JIT compiler backend, first by patching the assembler library we use (DynASM), then proceeded with a new intermediate representation and instruction selection.
- After that progress slowed until in March 2017 I completed the register allocator. (I guess register allocation was harder than I thought it would be). A little later, the register allocator would support call argument placement.
- The new 'expression' JIT backend was merged in August 2017. Since then, many contributors have stepped up to develop expression JIT templates (that are used to generate machine code for MoarVM opcodes).
- In August 2017, Jonathan started working on reorganizing spesh, starting with moving specialization to a separate thread, central optimization planning, improving optimizations and installing argument guards (which makes the process of selecting the correct specialized variant more efficient).
- Somewhere after this - I'm not exactly sure when - nine implemented inlining of the 'NativeCall' foreign function interface into JIT-compiled code.
- Most recently, Jonathan has started work on specializer plugins - a way for Rakudo to inform MoarVM how to cache the results of method lookups, which should help increase the effectiveness of optimization for Perl 6.
- Around the same time, I reworked the way the interpreter handles control flow changes for JIT-compiled code (e.g. in exception handling).
- At YAPC::EU 2014 and FOSDEM 2015, Jonathan gave a presentation on the MoarVM dynamic specialization system.
- At YAPC::EU 2015 (Granada) I also gave a presentation. Sadly I can no longer find the presentation online or offline.
- At the Swiss Perl Workshop in 2017 Jonathan gave a presentation on deoptimization and how it is used for supporting speculative optimizations.
- At The Perl Conference in Amsterdam (2017) I gave a presentation on how to implement JIT expression templates in MoarVM.
PS: You might read this and be reasonably surprised that Rakudo Perl 6 is not, after all this, very fast yet. I have a - not entirely serious - explanation for that:
- All problems in computer science can be solved with a layer of indirection.
- Many layers of indirection make programs slow.
- Perl 6 solves many computer science problems for you ;-)
PPS: It is also important to note that many of the practical speed improvements that Rakudo users have come to enjoy did not come from VM improvements per se, but from better use of the VM by core library routines, for which many volunteers are responsible.