Appreciate the comment, actually! It's good feedback -- we weren't sure if it was mixing work/memes too much, and keeping our materials clean like engineering docs is probably the way to go. We may edit it out of the post.
>> The engine is closed source. You cannot see how fft or ode45 are implemented under the hood. For high-stakes engineering, not being able to audit your tools is a risk.

This is just a lie. Open MATLAB and you can inspect all the implementation details behind ode45. It is not a black box.
How do I see the .c files / trace how `ode45` will execute on my machine? Can I see the JIT's source code?
--
You're entitled to your view, but there's clearly a difference of opinion here. From the open/closed source perspective -- maybe for you it qualifies as open source, but I can't follow that chain of logic, so to me MATLAB is not open source.
I explicitly pointed out what the article was lying about.
"You cannot see how fft or ode45 are implemented under the hood." is a totally false statement. You absolutely can do exactly that; this is not a matter of opinion. Right-click the function and open it, and you can view it like any other MATLAB function.
> From perspective of open / closed source -- maybe for you it qualifies as open source
MATLAB is obviously not open source. Who said anything about that? The article claims you cannot audit ode45. That is false, and it seems pretty embarrassing for someone speaking authoritatively about MATLAB to make such a basic claim, one that every MATLAB user can disprove with two clicks. Every single MATLAB user has the ability to view exactly how ode45 is implemented and to audit that function. This is not a matter of opinion; this is a matter of being honest about what MATLAB offers.
Yep! Makes sense. Though I think the cost of writing these toolboxes is lim --> 0.
Will have a really solid rust inspired package manager soon, and a single #macro to expose a rust function in the RunMat script's namespace (= easy to bring any aspects of the rust ecosystem to RunMat).
I wouldn't be so sure that writing those toolboxes is cheap. You need an aerospace engineer to write the aero toolbox, or you are going to miss subtleties. I assume you need a biologist to write the biology toolboxes. All of these domain experts are really expensive, and I would not trust a toolbox that hadn't been reviewed by them.
Even then... the reason we use the aero toolbox is because everybody in the aero industry trusts that MATLAB's results are accurate. I don't need to prove that the ECEF<->Keplerian conversions are correct, I can just show that I'm using the toolbox function and people assume it's correct. The aero toolbox is trusted.
When I've had to write similar code in Python, it's a massive pain to "prove" that my conversion code is correct. Often I've resorted to using MATLAB's trusted functions to generate "truth" data and then feeding that to Python to verify it gets the same results.
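That cross-checking workflow looks roughly like this (a sketch: the truth table below stands in for numbers you'd export from MATLAB, and `sph2cart_py` is a hypothetical Python port under test, following MATLAB's sph2cart convention):

```python
import numpy as np

def sph2cart_py(az, el, r):
    """Python port under test: spherical -> Cartesian,
    following MATLAB's sph2cart convention."""
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return x, y, z

# "Truth" table: inputs plus outputs previously generated with
# MATLAB's trusted sph2cart (hardcoded here for the sketch).
truth = [
    # (az, el, r) -> (x, y, z)
    ((0.0,        0.0,        1.0), (1.0, 0.0, 0.0)),
    ((np.pi / 2,  0.0,        2.0), (0.0, 2.0, 0.0)),
    ((0.0,        np.pi / 2,  3.0), (0.0, 0.0, 3.0)),
]

for (az, el, r), expected in truth:
    got = sph2cart_py(az, el, r)
    assert np.allclose(got, expected, atol=1e-12), (got, expected)
```

The point is that the only "proof" the Python version gets is agreement with MATLAB's output, which is exactly the trust asymmetry described above.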
Obviously this is more work than just using the premade stuff that comes with the toolbox.
Any MATLAB alternative faces the same trust issue. Until it reaches enough mindshare that people assume it's too popular to have incorrect math (which might not be a good assumption, but it is one people make about MATLAB), it doesn't actually replicate the main benefit of MATLAB, which is that I don't need to check its work.
In Julia, you still explicitly need to reason about and select GPU drivers and manage residency of tensors; in RunMat we abstract that away and just do it for you. You just write math, and we do the equivalent of a JIT to figure out when to run it on the GPU for you.
Our goal is to make a runtime that lets people stay at the math layer as much as possible, and run the math as fast as possible.
> The first thing they teach about performant Matlab code is that simple for-loops will tank performance.
Yes! Since in RunMat we're building a computation graph and fusing operations into GPU kernels, we built the foundations to extend this to loop fusion.
That should allow RunMat to take loops as written and unroll the matrix math in the computation graph into single GPU programs, effectively letting loop-written math run super fast too.
Will share more on this soon as we finish loop fusion, but see `docs/fusion/INTERNAL_NOTE_FLOOPS_VM_OPS.md` in the repo if curious (we're also creating VM ops for math idioms where they're advantageous).
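For anyone unfamiliar with what fusion buys you, here's a toy CPU sketch of the idea (not RunMat's actual codegen): the unfused version makes three passes over the data and materializes two temporaries, while the fused version does one pass with no intermediates -- the shape a single fused GPU kernel would take.

```python
import math

x = [0.1 * i for i in range(1000)]

# Unfused: three separate passes, two temporary arrays.
t1 = [math.sin(v) for v in x]
t2 = [v * 2.0 for v in t1]
unfused = [v + 1.0 for v in t2]

# Fused: one pass over x, one output, no temporaries.
fused = [math.sin(v) * 2.0 + 1.0 for v in x]

assert all(abs(a - b) < 1e-12 for a, b in zip(unfused, fused))
```

On a GPU the difference is even starker, since each unfused pass is a separate kernel launch plus a round trip through memory.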
> Would love to see something with the convenient math syntax of Matlab, but with broader ease of use of something like JS.
What does "convenient math syntax of Matlab, but with broader ease of use of something like JS" look like to you? What do you wish you could do with Matlab but can't / it doesn't do well with?
Piggybacking on this comment to say, I bet a lot of people's first question will be, why aren't you contributing to Octave instead of starting a new project? After reading this declaration of the RunMat vision, the first thing I did was ctrl-f Octave to make sure I hadn't missed it.
Honest question: Octave is an old project that never gained as much traction as Julia or NumPy, so I'm sure it has problems, and I wouldn't be surprised if you have excellent reasons for starting fresh. I'm just curious to hear what they are, and I suspect you'll save yourself some time fielding the same question over and over if you add a few sentences about it. I did find [1] on the site, and read it, but I'm still not clear on whether you considered e.g. adding a JIT to Octave.
Fair question, and agreed we should make this clearer on the site.
We like Octave a lot, but the reason we started fresh is architectural: RunMat is a new runtime written in Rust with a design centered on aggressive fusion and CPU/GPU execution. That’s not a small feature you bolt onto an older interpreter; it changes the core execution model, dataflow, and how you represent/optimize array programs.
Could you add a JIT to Octave? Maybe in theory, but in practice you’d still be fighting the existing stack and end up with a very long, risky rewrite inside a mature codebase. Starting clean let us move fast (first release in August, Fusion landed last month, ~250 built-ins already) and build toward things that depend on the new engine.
This isn’t a knock on Octave, it’s just a different goal: Octave prioritizes broad compatibility and maturity; we’re prioritizing a modern, high-performance runtime for math workloads.
Piggybacking also to say that I hope you succeed, as your work aligns closely with the type of runtime that I had hoped to write someday when I first used MATLAB in the early 2000s (now mostly GNU Octave for small hobby projects).
The loop fusion idea sounds amazing. Another point of friction which I ran into is that MATLAB uses 1-based offsets instead of 0-based offsets for matrices/arrays, which can make porting code examples from other languages tricky. I wish there was a way to specify the offset base with something like a C #define or compiler directive. Or a way to rewrite code in-place to use the other base, a bit like running Go's gofmt to format code. Apologies if something like this exists and I'm just too out of the loop.
I'd like to point out one last thing, which is that working at the fringe outside of corporate sponsorship causes good ideas to take 10 or 20 years to mature. We all suffer poor tooling because the people that win the internet lottery pull up the ladder behind them.
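To make the index-base idea concrete, here's roughly what a switchable base could look like as a library shim rather than a compiler directive (a hypothetical `OneBased` wrapper sketched in Python; not a feature of MATLAB, RunMat, or any existing library):

```python
class OneBased:
    """Thin read-only view that exposes 1-based indexing over a sequence."""

    def __init__(self, data):
        self._data = list(data)

    def __getitem__(self, i):
        if i < 1:
            raise IndexError("1-based view: indices start at 1")
        return self._data[i - 1]

    def __len__(self):
        return len(self._data)

a = [10, 20, 30]
b = OneBased(a)
assert a[0] == b[1] == 10
assert a[2] == b[3] == 30
```

A per-file directive would be cleaner than a wrapper type, but the wrapper shows why mixing bases in one program gets confusing fast.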
The experience with this in Julia (via OffsetArrays.jl, loaded below) has been quite mixed, creating a new surface for bugs to appear. Used well, it can be very convenient for the reasons you state.
julia> using OffsetArrays
julia> A = collect(1:5)
5-element Vector{Int64}:
1
2
3
4
5
julia> B = OffsetArray(A, -1)
5-element OffsetArray(::Vector{Int64}, 0:4) with eltype Int64 with indices 0:4:
1
2
3
4
5
julia> A[1]
1
julia> B[0]
1
Unfortunately, MathWorks is quite a litigious company. I guess you are aware of MathWorks versus AccelerEyes (now the makers of ArrayFire) or Comsol.
For our department, we mostly stopped using MATLAB about 7 years ago, migrating to Python, R, or Julia. Julia fits the "executable math" role quite well for me.
Check out PythonCall.jl and juliacall (on the Python side). Not to mention that you can now literally write Python wrappers of Julia-compiled libraries like you would C++ ones.
> you can literally write python wrappers of Julia compiled libraries like you would c++ ones
Yes, please. What do I google? Why can't julia compile down to a module easily?
No offense, but once you learn to mentally translate between whiteboard math and numpy... it's really not that hard. And if you were used to MATLAB before MathWorks added a JIT, you were doing the same translation to vectorized operations, because loops are dog slow in MATLAB (coincidentally, Octave is so much better than MATLAB syntax-wise).
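For a concrete instance of that whiteboard-to-numpy translation (a toy illustration): the elementwise formula y_i = 3*x_i + sin(x_i) becomes one vectorized expression instead of a loop.

```python
import math
import numpy as np

x = np.linspace(0.0, 1.0, 5)

# Loop version: slow in pre-JIT MATLAB, and slow in pure Python too.
y_loop = np.array([3.0 * v + math.sin(v) for v in x])

# Vectorized translation of y_i = 3*x_i + sin(x_i): one expression,
# no explicit loop, work pushed into numpy's C internals.
y_vec = 3.0 * x + np.sin(x)

assert np.allclose(y_loop, y_vec)
```

The translation is mechanical for elementwise math; it's the stateful, loop-carried algorithms where the mental mapping gets genuinely painful.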
And again python has numba and maybe mojo, etc. Because julia refused to fill the gap. I don't understand why there's so much friction between julia and python. You should be able to trivially throw a numpy array at julia and get a result back. I don't think the python side of this is holding things back. At least back in the day there was a very anti-python vibe from julia and the insistence that all the things should be re-implemented in julia (webservers etc) because julia was out to prove it was more than a numerical language. I don't know if that's changed but I doubt it. Holy wars don't build communities well.
>> you can literally write python wrappers of Julia compiled libraries like you would c++ ones.
> Yes, please. What do I google? Why can't julia compile down to a module easily?
Look up PackageCompiler.jl. That said, Julia's original design focused on just-in-time rather than ahead-of-time compilation, so the AOT process is still rough.
> I don't understand why there's so much friction between julia and python. You should be able to trivially throw a numpy array at julia and get a result back.
Thanks!! It was originally for Octave users whose scripts were running painfully slow.
The goal was to keep the familiar MATLAB syntax for math-intent capture, but run it fast.
When we dug into why people were still using Octave, it was because it let them focus on their math and was easier for them to read, which was especially important for people who aren't programmers, e.g. scientists and engineers.
I suppose this is also why we write in higher level languages than assembly.
The goal of this project is now: let’s make the fastest runtime in the world to run math.
Turns out, the MATLAB syntax offers a large amount of compile-time hinting (it is meant for math-intent capture, after all).
We've found as we built this that if we take a domain-specific approach (i.e. make every optimization for what's best for people wanting to focus on the math part), we can outperform general-purpose languages like Python by a wide margin on the math part.
For example, internals like keeping tensor shapes and broadcasting intent in the AST, and having the computation graph available for profitable GPU/CPU threshold detection, aren't things that make practical sense to build into a general-purpose runtime like Python, but they let RunMat speed up elementwise math by orders of magnitude (e.g. 1B points going through 5-6 elementwise ops like sin/cos/+/- run ~80x faster on my MBP vs Python/PyTorch).
So, tl;dr: it started as a tool for Octave users, and the goal now is to build the fastest runtime for math, for those looking to use computers to do math.
Obligatory disclosure because we’re engineers: you can still get faster by writing your own CUDA / GPU code. We’re betting 99% of the people that are trying to run math using computers don’t want to do that (ML community notwithstanding).
A solid core, not the whole of MATLAB (they conflate the language, the compiler/runtime, an IDE, and a bunch of other things under the single name/product that is MATLAB).
This is a solid compiler + minimal runtime, with an architecture designed to scale.