JIT compilation in MathInterpreter? #10

Open
mspraggs opened this issue May 27, 2016 · 3 comments

@mspraggs (Contributor)

The math interpreter is obviously very powerful. Performance is currently (and understandably) worse than native machine code. So my question: how hard would it be to add JIT compilation to speed up the interpreted code? I mean I don't expect this to be a trivial task, but are we talking weeks, months or years of time? I wouldn't expect a wide variety of instruction sets to be supported, but x86 and perhaps x86_64 might be a nice feature.

@mspraggs (Contributor, Author) commented May 28, 2016

I've been doing some reading on this, and it looks like this might be a bit simpler to achieve if the interpreted language were manipulated using LLVM.

EDIT: see http://llvm.org/docs/tutorial/LangImpl1.html, though by the looks of it you've already done the first couple of chapters' worth (lexer and AST generation).

EDIT 2: There's also this: http://luajit.org/dynasm.html, though I'm not sure it'd work quite as well as LLVM.

@aportelli self-assigned this May 31, 2016
@aportelli (Owner)

In principle we are probably talking about a week of work; as you said, the AST generation is already implemented. One just has to rewrite the different virtual instructions to map them to the corresponding instruction-generating functions in the JIT framework (see the sketch below).
When I wrote the interpreter I considered using LLVM rather than programming my own virtual machine, but LLVM is a rather big dependency to have and is not straightforward to compile yourself. Another issue is that while everything would be fine on x86, there would probably be unsolvable compatibility issues on other architectures, such as IBM's.
Although it is doable, it would need quite some thinking and testing, and I am not sure there is a strong requirement for it now. For fitting, the interpreter is useful to quickly test different forms; I think it is reasonably fast, and it is really straightforward to hardcode the model if necessary. What could be more concerning would be intensive use of latan_sample_combine for things like AMA, but there too I don't remember seeing anything that needed a dramatic improvement.
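
For reference, a minimal sketch (not LatAnalyze code) of what "mapping virtual instructions to instruction-generating functions" could look like with LLVM's C++ IRBuilder API; the function name and the plain `char` opcode are illustrative assumptions:

```cpp
// Minimal sketch (not LatAnalyze code): lowering a binary operation from the
// interpreter's AST to LLVM IR using the C++ IRBuilder API. The function name
// and the use of a plain char opcode are illustrative assumptions.
#include "llvm/IR/IRBuilder.h"

llvm::Value *emitBinOp(llvm::IRBuilder<> &builder, char op,
                       llvm::Value *lhs, llvm::Value *rhs)
{
    switch (op)
    {
    case '+': return builder.CreateFAdd(lhs, rhs, "addtmp");
    case '-': return builder.CreateFSub(lhs, rhs, "subtmp");
    case '*': return builder.CreateFMul(lhs, rhs, "multmp");
    case '/': return builder.CreateFDiv(lhs, rhs, "divtmp");
    default:  return nullptr; // unsupported operator
    }
}
```

Once each AST node emits IR this way, LLVM's JIT infrastructure (MCJIT or ORC) can compile the resulting function to native machine code at run time.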

@mspraggs (Contributor, Author)

All good points. I envisaged some sort of preprocessor/autoconf/template magic that would detect which hardware platform the library was being compiled on and declare the correct instructions for the current platform, or even disable the JIT compiler if the platform is unknown. This would of course mean more work, but it would at least overcome the problem of cross-platform compatibility.
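
A minimal sketch of the kind of compile-time guard that could be meant here; the LATAN_JIT_* macro names are hypothetical and not taken from the repository:

```cpp
// Hypothetical sketch: compile-time architecture detection so the JIT back end
// is only built on supported platforms. The LATAN_JIT_* macro names are
// illustrative and not taken from the repository.
#if defined(__x86_64__) || defined(_M_X64)
#  define LATAN_JIT_ARCH_X86_64 1
#elif defined(__i386__) || defined(_M_IX86)
#  define LATAN_JIT_ARCH_X86 1
#else
#  define LATAN_JIT_DISABLED 1   /* fall back to the existing interpreter */
#endif
```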
