In Defense of Matlab Code (runmat.org)
137 points by finbarr1987 1 day ago | 155 comments




As an engineer, I use Matlab (or rather, Octave, the free equivalent) all the time. It's really great for numerical computing and plotting. Most things 'just work', there's a sizeable collection of packages, and I personally like how flexible the function inputs are.

The biggest drawback, though, is that it's over-optimized for matrix math: it forces you to think about everything as matrices, even if that's not how your data naturally lies. The first thing they teach about performant Matlab code is that simple for-loops will tank performance. And you feel it pretty quickly; I once saw an image-processing case get a 1000x speedup just from Matlab-optimized (vectorized) syntax.
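Not MATLAB, but the same effect is easy to demonstrate in NumPy; a minimal sketch (the threshold operation and function names are made up for illustration):

```python
import numpy as np

# Loop version: one interpreted operation per pixel.
def threshold_loop(a, t):
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = 1.0 if a[i, j] > t else 0.0
    return out

# Vectorized version: one call over the whole array at once,
# the "Matlab-optimized syntax" equivalent.
def threshold_vec(a, t):
    return (a > t).astype(a.dtype)

img = np.random.rand(512, 512)
assert np.array_equal(threshold_loop(img, 0.5), threshold_vec(img, 0.5))
```

On arrays of this size the vectorized version is typically orders of magnitude faster in interpreted MATLAB or CPython, for the reasons described above.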

Other issues I've run into are string handling (painful) and OOP, which generally feels unnatural. I'd love to see something with the convenient math syntax of Matlab but the broader ease of use of something like JS.


@mNovak -- super helpful note! Thank you!

Author of RunMat (this project) here --

> The first thing they teach about performant Matlab code is that simple for-loops will tank performance.

Yes! Since in RunMat we're building a computation graph and fusing operations into GPU kernels, we built the foundations to extend this to loop fusion.

That should allow RunMat to take loops as written and unwrap the matrix math in the computation graph into single GPU programs -- effectively letting loop-written math run super fast too.

Will share more on this soon as we finish loop fusion, but see `docs/fusion/INTERNAL_NOTE_FLOOPS_VM_OPS.md` in the repo if curious (we're also creating VM ops for math idioms where they're advantageous).

> Would love to see something with the convenient math syntax of Matlab, but with broader ease of use of something like JS.

What does "convenient math syntax of Matlab, but with broader ease of use of something like JS" look like to you? What do you wish you could do with Matlab but can't / it doesn't do well with?


Piggybacking on this comment to say, I bet a lot of people's first question will be, why aren't you contributing to Octave instead of starting a new project? After reading this declaration of the RunMat vision, the first thing I did was ctrl-f Octave to make sure I hadn't missed it.

Honest question: Octave is an old project that never gained as much traction as Julia or NumPy, so I'm sure it has problems, and I wouldn't be surprised if you have excellent reasons for starting fresh. I'm just curious to hear what they are, and I suspect you'll save yourself time fielding the same question over and over if you add a few sentences about it. I did find [1] on the site and read it, but I'm still not clear on whether you considered, e.g., adding a JIT to Octave.

[1] https://runmat.org/blog/matlab-alternatives


Fair question, and agreed we should make this clearer on the site.

We like Octave a lot, but the reason we started fresh is architectural: RunMat is a new runtime written in Rust with a design centered on aggressive fusion and CPU/GPU execution. That’s not a small feature you bolt onto an older interpreter; it changes the core execution model, dataflow, and how you represent/optimize array programs.

Could you add a JIT to Octave? Maybe in theory, but in practice you’d still be fighting the existing stack and end up with a very long, risky rewrite inside a mature codebase. Starting clean let us move fast (first release in August, Fusion landed last month, ~250 built-ins already) and build toward things that depend on the new engine.

This isn’t a knock on Octave, it’s just a different goal: Octave prioritizes broad compatibility and maturity; we’re prioritizing a modern, high-performance runtime for math workloads.


Piggybacking also to say that I hope you succeed, as your work aligns closely with the type of runtime that I had hoped to write someday when I first used MATLAB in the early 2000s (now mostly GNU Octave for small hobby projects).

The loop fusion idea sounds amazing. Another point of friction which I ran into is that MATLAB uses 1-based offsets instead of 0-based offsets for matrices/arrays, which can make porting code examples from other languages tricky. I wish there was a way to specify the offset base with something like a C #define or compiler directive. Or a way to rewrite code in-place to use the other base, a bit like running Go's gofmt to format code. Apologies if something like this exists and I'm just too out of the loop.
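As far as I know no such directive exists in MATLAB itself, but the mechanical translation rule is small enough to sketch; a hypothetical Python helper (the name `at` is invented here, not a real tool):

```python
# Hypothetical helper for porting 1-based (MATLAB-style) indices
# to Python's 0-based indexing. Just the rule, not a real library.
def at(seq, one_based_index):
    """seq(i) in MATLAB terms is seq[i - 1] in Python."""
    if one_based_index < 1:
        raise IndexError("MATLAB-style indices start at 1")
    return seq[one_based_index - 1]

v = [10, 20, 30]
assert at(v, 1) == 10  # MATLAB v(1)
assert at(v, 3) == 30  # MATLAB v(end), with end == 3
```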

I'd like to point out one last thing, which is that working at the fringe outside of corporate sponsorship causes good ideas to take 10 or 20 years to mature. We all suffer poor tooling because the people that win the internet lottery pull up the ladder behind them.


I wish you all the best luck with your product!

Unfortunately, MathWorks is quite a litigious company. I guess you are aware of MathWorks versus AccelerEyes (now makers of ArrayFire) or Comsol.

For our department, we mostly stopped using MATLAB about 7 years ago, migrating to Python, R, or Julia. Julia fits the "executable math" niche quite well for me.


All anyone really needs is seamless integration of Julia with python. Instead everyone seems to be rewriting python into rust.

Check out PythonCall.jl and juliacall (on the Python side). Not to mention that now you can literally write Python wrappers for Julia-compiled libraries like you would for C++ ones.

I will, thanks.

> you can literally write python wrappers of Julia compiled libraries like you would c++ ones

Yes, please. What do I google? Why can't julia compile down to a module easily?

No offense, but once you learn to mentally translate between whiteboard math and numpy... it's really not that hard. And if you were used to Matlab before Mathworks added a JIT, you were doing the same translation to vectorized operations, because loops are dog slow in Matlab (coincidentally, Octave is much better than Matlab syntax-wise).

And again, python has numba and maybe mojo, etc., because Julia refused to fill the gap. I don't understand why there's so much friction between Julia and python. You should be able to trivially throw a numpy array at Julia and get a result back. I don't think the python side of this is holding things back. At least back in the day there was a very anti-python vibe from Julia, and an insistence that all the things should be re-implemented in Julia (webservers etc.) because Julia was out to prove it was more than a numerical language. I don't know if that's changed, but I doubt it. Holy wars don't build communities well.


> Biggest drawback though is that it's over-optimized for matrix math ...

I think this is what inspired the creation of Julia -- they wanted a Matlab clone where for loops were fast because some problems don't fit the matrix mindset.


Yes, strings appear like an afterthought, and sadly the Octave version has slight incompatibilities which may be a PITA for any non trivial script which aims to be compatible.

It's one of those languages that outgrew its original purpose, as did Python IMHO. So non-matrix operations like string processing and manipulation of data structures like tables (surprisingly, graphs are not bad) become unwieldy in MATLAB - much like Python's syntax becomes unwieldy in array calculations, as illustrated in the original post.

There's also Julia.

Earlier in my career, I found that my employers would often not buy Matlab licenses, or would make everyone share even when it was a resource needed daily by everyone. Not having access to the closed-source, proprietary tool hurt my ability to be effective. So I started doing my "whiteboard coding" in Julia and still do.


Precisely; today Julia already solves many of those problems.

It also removes many of Matlab's footguns like `[1,2,3] + [4;5;6]`, or also `diag(rand(m,n))` doing two different things depending on whether m or n are 1.
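For what it's worth, NumPy shares the first footgun: adding a shape-(3,) array to a shape-(3,1) array silently broadcasts to a 3x3 result instead of raising an error, which is exactly the kind of surprise being described (a sketch):

```python
import numpy as np

row = np.array([1, 2, 3])          # shape (3,)
col = np.array([[4], [5], [6]])    # shape (3, 1)

# No error: broadcasting silently produces a 3x3 matrix.
result = row + col
assert result.shape == (3, 3)
assert result[0, 0] == 5  # 1 + 4
```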


An understated advantage of Julia over MATLAB is the use of brackets over parentheses for array slicing, which improves readability even further.

The most cogent argument for the use of parentheses for array slicing (which derives from Fortran, another language that I love) is that it can be thought of as a lookup table, but in practice it's useful to immediately identify if you are calling a function or slicing an array.


I don't think Julia really solves any problems that aren't already solved by Python. Python is sometimes slower (hot loops), but for that you have Numba. And if something is truly performance critical, it should be written or rewritten in C++ anyway.

But Julia also introduces new problems, such as JIT warmup (so it's not really suitable for scripting) and is still not considered trustworthy:

https://yuri.is/not-julia/


> Python is sometimes slower (hot loops), but for that you have Numba

This is a huge understatement. At the hedge fund I work at, I learned Julia by porting a heavily optimized Python pipeline. Hundreds of hours had gone into the Python version – it was essentially entirely glue code over C.

In about two weeks of learning Julia, I ported the pipeline and got it 14x faster. This was worth multiple senior FTE salaries. With the same amount of effort, my coworkers – who are much better engineers than I am – had not managed to get any significant part of the pipeline onto Numba.

> And if something is truly performance critical, it should be written or rewritten in C++ anyway.

Part of our interview process is a take-home where we ask candidates to build the fastest version of a pipeline they possibly can. People usually use C++ or Julia. All of the fastest answers are in Julia.


> People usually use C++ or Julia. All of the fastest answers are in Julia

That's surprising to me and piques my interest. What sort of pipeline is this that's faster in Julia than C++? Does Julia automatically use something like SIMD or other array magic that C++ doesn't?


I use Rust instead of C++, but I also see my Julia code being faster than my Rust code.

In my view, it's not that Julia itself is faster than Rust - on the contrary, Rust as a language is faster than Julia. However, Julia's prototyping, iteration speed, benchmarking, profiling and observability is better. By the time I would have written the first working Rust version, I would have written it in Julia, profiled it, maybe changed part of the algorithm, and optimised it. Also, Julia makes more heavy use of generics than Rust, which often leads to better code specialization.

There are some ways in which Julia produces better machine code than Rust, but they're usually not decisive, and there are more ways in which Rust produces better machine code than Julia. Also, the performance ceiling for Rust is higher, because Rust allows you to do more advanced, low-level optimisations than Julia.


This is pretty much it – when we had follow up interviews with the C++ devs, they had usually only had time to try one or two high-level approaches, and then do a bit of profiling & iteration. The Julia devs had time to try several approaches and do much more detailed profiling.

The main thing is just that Julia has a standard library that works with you rather than against you. The built-in sort will use radix sort where appropriate and a highly optimized quicksort otherwise. You get built-in matrices and higher-dimensional arrays with optimized BLAS/LAPACK configured for you (and CSC + structured sparse matrices). You get complex and rational numbers, and a calling convention (pass by sharing) which is the fast one by default 90% of the time, instead of being slow (copying) 90% of the time. You have a built-in package manager that doesn't require special configuration, and that also lets you install GPU libraries making it trivial to run generic code on all sorts of accelerators.

Everything you can do in Julia you can do in C++, but lots of projects that would take a week in C++ can be done in an hour in Julia.


To be clear, the fastest theoretically possible C++ is probably faster than the fastest theoretically possible Julia. But the fastest C++ that Senior Data Engineer candidates would write in ~2 hours was slower than the fastest Julia (though still pretty fast! The benchmark for this problem was 10 ms, the fastest C++ answer was 3 ms, and the top two Julia answers were 2.3 ms and 0.21 ms).

The pipeline was pretty heavily focused on mathematical calculations – something like, given a large set of trading signals, calculate a bunch of stats for those signals. All the best Julia and C++ answers used SIMD.


> Part of our interview process is a take-home where we ask candidates to build the fastest version of a pipeline they possibly can. People usually use C++ or Julia. All of the fastest answers are in Julia.

It would be fun if you could share a similar pipeline problem to your take-home (I know you can't share what's in your interview). I started off in scientific Python in 2003 and like noodling around with new programming languages, and it's great to have challenges like this to work through. I enjoyed the 1BRC problem in 2024.


The closest publicly available problem I can think of is the 1 billion rows challenge. It's got a bigger dataset, but with somewhat simpler statistics – though the core engineering challenges are very similar.

https://github.com/gunnarmorling/1brc


The C++ devs at your firm must be absolutely terrible if a newcomer using a scripting language can write faster software, or you are not telling the whole story. All of NumPy, Julia, MATLAB, R, and similar domain-specific, user-friendly libraries and platforms use BLAS and LAPACK for numerical calculations under the hood with some overhead depending on the implementation, so a reasonably optimized native implementation should always be faster. By the looks of it the C++ code wasn't compiled with -O3 if it can be trivially beaten by Julia.

Are you aware that Julia is a compiled language with a heavy focus on performance? It is not in the same category as NumPy/MATLAB/R

As your comment already hints at, using Python often ends up as a hodgepodge of libraries and tools glued together that work for their limited scope but show their shaky foundations any time your work falls outside those parts. Having worked with researchers and engineers for years on their codebases, there is already too much "throw shit at the wall and see what sticks" temptation in this type of code (because they'd much rather be working on their research than on the code), and the Python way of doing things actively encourages that. Julia's type hierarchies, integrated and easy package management, and many elements of its design make writing better code easier, and even the smoother path.

> I don't think Julia really solves any problems that aren't already solved by Python.

I don't really need proper furniture, the cardboard boxes and books setup I had previously "solved" the same problems, but I feel less worried about random parts of it suddenly buckling, and it is much more ergonomic in practice too.


> using Python often ends up a hodgepodge of libraries and tools glued together

At least it has those tools and libraries, which cannot be said about Julia.


What tools/libraries you miss from Julia? Have you used the language or merely speculating?

> What tools/libraries you miss from Julia?

My experience with this website is that it would be rather pointless to enumerate, because you will then point me to some poorly documented, buggy Julia "alternatives" that support a fraction of the features of Python packages or APIs developed and maintained by well-resourced organizations.

The same goes for tooling: an unstable, buggy Julia plugin for VSCode is not the same as having products like PyCharm and official Python plugins made by Microsoft for VS and VSCode.

Now, I will admit that Julia also has some niceties that would be hard to find in Python ecosystem (mainly SciML packages), but it is not enough.

> Have you used the language or merely speculating?

I just saw the logo in Google Images.


> I don't think Julia really solves any problems that aren't already solved by Python.

But isn't the whole point of this article that Matlab is more readable than Python (i.e. solves the readability problem)? The Matlab and Julia code for the provided example are equivalent[1], which means Julia has more readable math than Python.

[1]: Technically, the article's code will not work in Julia because Julia gives semantic meaning to commas in brackets, while Matlab does not. It is perfectly valid to use spaces as separators in Matlab, meaning that the following Julia code is also valid Matlab which is equivalent to the Matlab code block provided in the article.

    X = [ 1 2 3 ];
    Y = [ 1 2 3;
          4 5 6;
          7 8 9 ];
    Z = Y * X';
    W = [ Z Z ];

This snippet is also cleaner than the one in the article, and more in its spirit. Also, the image next to the whiteboard has a no-commas example.

Yes, Python code is indeed fast if you write it in C++... what a bizarre argument. The whole selling point of Julia is that I can BOTH have a dynamic language with a REPL, where I can redefine methods etc, AND that it runs so fast there is no need to go to another language.

It's wild what people get used to. Rustaceans adapt to excruciating compile times and borrowchecker nonsense, and apparently Pythonistas think it's a great argument in favor of Python that all performance sensitive Python libraries must be rewritten in another language.

In fairness, we Julians have to adapt to a script having a 10 second JIT latency before even starting...


> Pythonistas think it's a great argument in favor of Python that all performance sensitive Python libraries must be rewritten in another language.

It is, because usually someone already did it for them.


>I don't think Julia really solves any problems that aren't already solved by Python.

Did you read the article that compares MATLAB to Python? It's saying that MATLAB, although it has some issues, is still relevant because it's math-like. GP points out that Julia is also math-like, without those issues.


Sometimes slower? No, always slower. And no one wants to deal with the mess that is creating an interface with C or C++. And I wouldn’t want to code in that either, way too much time, effort, headache.

In Julia, you explicitly need to still reason about and select GPU drivers + manage residency of tensors; in RunMat we abstract that away, and just do it for you. You just write math, and we do an equivalent of a JIT to just figure out when to run it on GPU for you.

Our goal is to make a runtime that lets people stay at the math layer as much as possible, and run the math as fast as possible.


Why is the `[1,2,3] + [4;5;6]` syntax a footgun? It is a very concise, comprehensible and easy way to create matrices in many cases. Eg if you have a timeseries S, then `S - S'` gives all the distances/differences between all its elements. Or you have 2 string arrays and you want all combinations between the two.

The diag is admittedly unfortunate and has confused me myself; it should really be 2 different functions (which are sort of the reverse of each other, weirdly making it sort of an involution).


What does it even mean to add a 1x3 matrix to a 3x1 matrix?

This is about how array operations in matlab work. In matlab, you can write things such as

    >> [1 2 3] + 1
    ans = [2 3 4]
In this case, the operation `+ 1` is applied to all columns of the array. In the same manner, when you add a (1 x m) row and an (n x 1) column vector, you add the column to each row element (or you can view it the other way around). The result is as if you repeated your (n x 1) column m times horizontally, giving you an (n x m) matrix, did the same for the row vertically n times, giving you another (n x m) matrix, and then added these two matrices. So adding a row and a column is essentially a shortcut for adding these two repeated (n x m) matrices (and runs faster than actually creating them). This gives a matrix where each column is the old column plus the row element for that column index. For example

    >> [1 2 3] + [1; 2; 3]
    ans = [2 3 4
           3 4 5
           4 5 6]
A very practical example is, as I mentioned, getting all differences between the elements of a time series by writing `S - S'`. Another example, `(1:6)+(1:6)'` gives you the sums for all possible combinations when rolling 2 6-sided dice.

This works not only with addition and subtraction, but with element-wise multiplication (.*) and other functions as well. You can do this across arbitrary dimensions, as long as each dimension of your inputs either matches or is 1.
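The same idioms carry over to NumPy, for anyone following along outside MATLAB; a sketch using explicit broadcasting:

```python
import numpy as np

# All pairwise differences of a series, the `S - S'` trick:
S = np.array([1.0, 3.0, 6.0])
diffs = S[:, None] - S[None, :]   # shape (3, 3)
assert diffs[2, 0] == 5.0         # S[2] - S[0]

# All sums of two six-sided dice, the `(1:6)+(1:6)'` trick:
faces = np.arange(1, 7)
sums = faces[:, None] + faces[None, :]
assert sums.min() == 2 and sums.max() == 12
```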


It means the same thing in MATLAB and numpy:

    import numpy as np

    Z = np.array([[1, 2, 3]])
    W = Z + Z.T
    print(W)
Gives:

   [[2 3 4]
    [3 4 5]
    [4 5 6]]
It's called broadcasting [1]. I'm not a fan of MATLAB, but this is an odd criticism.

[1] https://numpy.org/devdocs/user/basics.broadcasting.html#gene...


One of the really nice things Julia does is make broadcasting explicit. The way you would write this in Julia is

    Z = [1,2,3]

    W = Z .+ Z'  # note the . before the + that makes this a broadcasted operation
This has 2 big advantages. Firstly, it means that users get errors when the shapes of things aren't what they expected. A DimensionMismatch error is a lot easier to debug than a silently wrong result. Secondly, it means that Julia can use `exp(M)` to mean the matrix exponential, while the element-wise exponential is `exp.(M)`. This allows a lot of code to work naturally and generically over both arrays and scalars (e.g. exp of a complex number will work correctly if the number is written as a 2x2 matrix).

I remember the pitch for Julia early on being matlab-like syntax, C-like performance. When I've heard Julia mentioned more recently, the main feature that gets highlighted is multiple-dispatch.

https://www.youtube.com/watch?v=kc9HwsxE1OY

I think it seems pretty interesting.


Julia is actually faster than C for some things.

Simulink is the Matlab moat, not just general math expression.

Pictorus is a simulink alternative https://www.pictor.us/simulink-alternative

julia is still clunky for these purposes! you can't even plot two things at the same time without it being weird and there's still a ton of textual noise when expressing linear algebra in it. (in fact, i'd argue the type system makes it worse!)

matlab is like what it would look like to put the math in an ascii email just like how python is what it would look like to write pseudocode and in both cases it is a good thing.


Julia competes with the scientific computing aspect of Matlab, which is easily the worst part of Matlab and the one which is easiest to replace.

Companies do not buy Matlab to do scientific computing. They buy Matlab because it is the only software package in the world where you can get basically everything you would ever want to do with software from a single vendor.


In addition: Simulink, the documentation (which is superb), and support from a field application engineer is essentially a support contract and phone call away.

I say this as someone who’d be quite happy never seeing Matlab code again: Mathworks puts a lot of effort into support and engineering applications.


It is hard to explain that to people here.

Matlab is a great tool, if you can afford it.

It was a very unpleasant feeling when I graduated from my PhD and realized that most, if not all, of the Matlab scripts I had used for my research would now be useless to me unless I joined a company or national laboratory that paid for licenses with the specific toolboxes I had used.

I'm glad that a significant portion of tools in my current field are in open source languages such as Python and Julia. It widens access to other researchers who can then build upon it.

(And yes, I'm aware of Octave. It does not have the capabilities of Matlab in the areas that I worked in, and was not able to run all of my PhD scripts. I have not tried RunMat yet, but am looking forward to experimenting with it.)


> I graduated from my PhD and realized that most, if not all, of the Matlab scripts I had used for my research would now be useless

And this is why you should write free software and, as a scientist, develop algorithms that do not rely on the facilities of a specific language or platform. Nothing is more annoying than reading a scientific paper and finding out that 90% of the "implementation" is calling a third party library treated as a blackbox.


> And yes, I'm aware of Octave. It does not have the capabilities of Matlab in the areas that I worked in

Was there a specific reason for that? Or was it simply nobody wrote the code?


Octave has not implemented all of Matlab's functionality. You can see a list of Matlab functions that have not yet been implemented in Octave at the link below. It's a long list.

https://hg.savannah.gnu.org/hgweb/octave/file/tip/scripts/he...

EDIT: If the original link above isn't working, here's a fairly recent archived version:

https://web.archive.org/web/20250123192851/https://hg.savann...


You could say that no one wrote that code. But Matlab has serious packages in numerous engineering fields and it’s not anywhere close to easily replicable.

It’s like how open source will never replace Excel but probably worse because it’s multiple fields and it’s way harder to replicate it.


On the contrary, I think that well-designed general-purpose languages beat domain-specific languages. Even in the example given, in NumPy you can use np.array, but to make a fair comparison, use np.matrix.

  import numpy as np
  
  X = np.matrix([1, 2, 3])
  Y = np.matrix([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])
  
  Z = Y * X.T
  W = np.hstack([Z, Z])
That way, we can extend our languages. If np.matrix is "too many keystrokes", it can be imported as M, or similar.

X.T is as readable as X' - but on top of that, also extensible. If we want to add other operations, we can do so. Especially since transpose is a very limited operation: it only makes sense for vectors and matrices. In much of numerics (quantum physics, deep learning, etc.), we often work with tensors. For example, within matrix notation, I would expect [Z, Z] to create a tensor, not concatenate matrices.

To make it clear, I agree with the main premise that it is important to make math readable, and thus easy to read and debug. Otherwise, it is one of the worst places for errors (numbers in, numbers out).

When it comes to matrix notation, I prefer PyTorch over NumPy, as it makes it easy to go from math on a whiteboard to executable code (see https://github.com/stared/thinking-in-tensors-writing-in-pyt...).

Also, for rather advanced custom numerics in quantum computing, I used Rust. To my surprise, not only was it fast, but thanks to macros, it was also succinct.


In Julia:

  X = [1 2 3]
  Y = [1 2 3;
       4 5 6;
       7 8 9]

  Z = Y * X'
  W = hcat(Z, Z)

It's been a long time since I've heard about Julia. It seems it has a hard time picking up steam... Any news? (yeah, I check the releases and the juliabloggers site)

Yeah, it didn't have the explosive success that Rust had, most probably due to a mixture of factors: the niche/academic background, not really being a language to look at if you didn't do numeric computing (at the beginning) and therefore out of the mouths of many developers on the internet, and some aspects of the language being a bit undercooked. But, there's a but: it is nonetheless growing. As you probably know from reading the releases, the new big thing is the introduction of AOT compilation. And there's even more stuff cooking now: a strict mode for harder static guarantees at compile time, language syntax evolution mechanisms (think Rust editions), cancellation, and more stuff I can't recall at the moment. Julia is an incredibly ambitious project (one might say too ambitious), and it shows both in the fact that the polish is still not there after all this time and in that it is starting to flex its muscles. The speed is real, and the development cycle is something that has really spoiled me at this point.

The problem with MATLAB is that idiomatic MATLAB style (every operation returns a fresh matrix) can easily become very inefficient: it leads to countless heap memory allocations of new matrices, resulting in low data-access locality, i.e. your data is needlessly copied around in slow DRAM all the time, rather than being kept in the fastest CPU cache.

Julia's MATLAB-inspired syntax is at least as nice, but the language was designed from the ground up to let you write high-performance code. I have seen numerous cases where code ported from MATLAB or NumPy to Julia performed well over an order of magnitude faster, while often also becoming more readable at the same time. Julia's array-broadcast facilities, unparalleled in MATLAB, are just one reason for that. The ubiquitous availability of in-place variants of standard library methods (recognizable by a ! suffix) is another.
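NumPy can opt into the in-place style too, though call by call via `out=` rather than through a language-wide `!` convention; a sketch of the allocation difference:

```python
import numpy as np

a = np.ones(1_000_000)
b = np.full(1_000_000, 2.0)

# Allocating style: `a + b` creates a fresh array on every evaluation.
c = a + b

# In-place style: write the result into an existing buffer,
# skipping the intermediate allocation (akin to Julia's `!` methods).
out = np.empty_like(a)
np.add(a, b, out=out)

assert np.array_equal(c, out)
```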

In our group, nobody has been using MATLAB for nearly a decade, and NumPy is well on its way out, too. Julia simply has become so much more productive and pleasant to work with.

https://julialang.org/

https://docs.julialang.org/en/v1/manual/noteworthy-differenc...


I want to come out and say that a long time ago, at a startup, we needed to generate a very particular type of analysis graph for a human operator to review in our SaaS.

And I just straight up installed GNU Octave on the server and called out to it from Python, using the exact code the mathematician had devised.


These days, however, with all the AI coding tools that are available, it probably makes more sense to just ask Claude to port the Matlab/Octave script to Python and directly integrate it into your program. Numpy/Scipy often provide drop-in replacements for Matlab functions; even the names are the same in some cases.

I have gone further and asked AI to port working but somewhat slow numerical scripts to C++ and it's completely effortless and very low risk when you have the original implementation as test.


See, the benefit of just using what the original mathematician wrote is that if they had a problem with the way the graph was rendering, or they wanted to tweak it, they just had to edit the code; no translation layer needed. It shipped like any other component of the product at the time.

But then you lose the readability that is the core defense of MATLAB/Octave. Ok so NumPy is readable, but less pleasantly so.

Yeah, this is a pretty common pattern: use a domain-specific tool where it fits (Octave for the math), and a general language for the product glue (Python). Same idea as infra work — lots of teams would rather express intent in Terraform than build it in Rust, because a DSL can be a cleaner fit for the job.

For my thesis I did something similar: bash scripts to extract raw data from a Subversion repository, to be preprocessed with PHP scripts (now I would prefer Python but had more experience with PHP) for text extraction and csv output, and finally Octave did the math magic, generating tables and saving graphics in png format, ready for import into my Lyx document.

The most sensible thing I've heard this year.

Of the things Matlab has going for it, looking just like the math is pretty far down the list. NumPy is a bit more verbose but still 1-to-1 with the whiteboard. The last big pain point was solved (https://peps.python.org/pep-0465/) with the dedicated matmul operator in Python 3.5.
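For anyone who hasn't seen PEP 465 in action, it brings the article's example close to whiteboard form in plain NumPy (a sketch):

```python
import numpy as np

X = np.array([1, 2, 3])
Y = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

Z = Y @ X                    # matrix-vector product via the `@` operator
W = np.column_stack([Z, Z])  # side-by-side copies, like MATLAB's [Z Z]

assert Z.tolist() == [14, 32, 50]
assert W.shape == (3, 2)
```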

Real advantages of matlab:

* Simulink

* Autocoding straight to embedded

* Reproducible & easily versioned environment

* Single-source dependency easier to get security to sign off on

* Plotting still better than anything else

Big disadvantages of matlab:

* Cost

* Lock-in

* Bad namespaces

* Bad typing

* 1-indexing

* Small package ecosystem

* Low interoperability & support in 3rd party toolchains


> Big disadvantages of matlab:

I will add to that:

* it does not support true 1d arrays; you have to artificially choose them to be row or column vectors.

Ironically, the snippet in the article shows that MATLAB has forced them into this awkward mindset; as soon as they get a 1d vector they feel the need to artificially make it into a 2d column. (BTW (Y @ X)[:,np.newaxis] would be more idiomatic for that than Y @ X.reshape(3, 1) but I acknowledge it's not exactly compact.)

They cleverly chose column concatenation as the last operation, hardly the most common matrix operation, to make it seem like it's very natural to want to choose row or column vectors. In my experience, writing matrix maths in numpy is much easier thanks to not having to make this arbitrary distinction. "Is this 1D array a row or a column?" is just one less thing to worry about in numpy. And I learned MATLAB first, so I don't think I'm saying that just because it's what I'm used to.
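To make this concrete, here's the snippet's math with a true 1D array (values assumed from the article's 3x3 example):

```python
import numpy as np

X = np.array([1, 2, 3])             # true 1D array, shape (3,)
Y = np.arange(1, 10).reshape(3, 3)  # 3x3 matrix

Z = Y @ X                           # just works; result has shape (3,)

# Only when a 2D column is genuinely needed (e.g. for column
# concatenation) do we have to add the axis back:
Zcol = (Y @ X)[:, np.newaxis]       # shape (3, 1)
```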


* it does not support true 1d arrays; you have to artificially choose them to be row or column vectors.

I despise Matlab, but I don't think this is a valid criticism at all. It simply isn't possible to do serious math with vectors that are ambiguously column vs. row, and this is in fact a constant annoyance with NumPy that one has to solve by checking the docs and/or running test lines on a REPL or in a debugger. The fact that you have developed arcane invocations of "[:,np.newaxis]" and regular .reshape calls I think is a clear indication that the NumPy approach is basically bad in this domain.

You do actually need to make a decision on how to handle 0 or 1-dimensional vectors, and I do not think that NumPy (or PyTorch, or TensorFlow, or any Python lib I've encountered) is particularly consistent about this, unless you ingrain certain habits to always call e.g. .ravel or .flatten or [:, :, None] arcana, followed by subsequent .reshape calls to avoid these issues. As much as I hated Matlab, this shaping issue was not one I ran into as immediately as I did with NumPy and Python Tensor libs.

EDIT: This is also a constant issue working with scikit-learn, and if you regularly read through the source there, you see why. And, frankly, if you have gone through proper math texts, they are all extremely clear about column vs row vectors and notation too, and all make it clear whether column vs. row vector is the default notation, and use superscript transpose accordingly. It's not that you can't figure it out from context, it is that having to figure it out and check seriously damages fluent reading and wastes a huge amount of time and mental resources, and terrible shaping documentation and consistency is a major sore point for almost all popular Python tensor and array libraries.


> It simply isn't possible to do serious math with vectors that are ambiguously column vs. row ... if you have gone through proper math texts

(There is unhelpful subtext here that I can't possibly have done serious math, but putting that aside...) On the contrary, most actual linear algebra is easier when you have real 1D arrays. Compare an inner product form in Matlab:

   x' * A * y
vs numpy:

   x @ A @ y
OK, that saving of one character isn't life changing, but the point is that you don't need to form row and column vectors first (x[None,:] @ A @ y[:,None] - which BTW would give you a 1x1 matrix rather than the 0D scalar you actually want). You can just shed that extra layer of complexity from your mind (and your formulae). It's actually Matlab where you have to worry more - what if x and y were passed in as row vectors? They probably won't be but it's a non-issue in numpy.
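Spelling the shapes out (small made-up values):

```python
import numpy as np

x = np.array([1.0, 2.0])
A = np.array([[1.0, 0.0],
              [0.0, 3.0]])
y = np.array([4.0, 5.0])

s = x @ A @ y                      # 0-d scalar, no transposes needed

# The 2D "column vector" route gives a 1x1 matrix instead:
m = x[None, :] @ A @ y[:, None]    # shape (1, 1)
```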

> math texts ... are all extremely clear about column vs row vectors and notation too, and all make it clear whether column vs. row vector is the default notation, and use superscript transpose accordingly.

That's because they use the blunt tool of matrix multiplication for composing their tensors. If they had an equivalent of the @ operator then there would be no need, as in the above formula. (It does mean that, conversely, numpy needs a special notation for the outer product, whereas if you only ever use matrix multiplication and column vectors then you can do x * y', but I don't think that's a big deal.)

> This is also a constant issue working with scikit-learn, and if you regularly read through the source there, you see why.

I don't often use scikit-learn but I tried to look for 1D/2D agreement issues in the source as you suggested. I found a couple, and maybe they weren't representative, but they were for functions that could operate on a single 1D vector or could be passed as a 2D numpy array but, philosophically, with a meaning more like "list of vectors to operate on in parallel" rather than an actual matrix. So if you only care about 1d arrays then you can just pass it in (there's a np.newaxis in the implementation, but you as the user don't need to care). If you do want to take advantage of passing multiple vectors then, yes, you would need to care about whether those are treated column-wise or row-wise but that's no different from having to check the same thing in Matlab.

Notably, this fuss is precisely not because you're doing "real linear algebra" - again, those formulae are (usually) easiest with real 1D arrays. It when you want to do software-ish things, like vectorise operations as part of a library function, that you might start to worry about axes.

> unless you ingrain certain habits to always call e.g. .ravel or .flatten or [:, :, None] arcana

You shouldn't have to call .ravel or .flatten if you want a 1D array - you should already have one! Unless you needlessly went to the extra effort of turning it into a 2D row/column vector. (Or unless you want to flatten an actual multidimensional array to 1D, which does happen; but that's the same as doing A(:) in Matlab.)

Writing foo[:, None] vs foo[None, :] is no different from deciding whether to make a column or row vector (respectively) in MATLAB. I will admit it's a bit harder to remember - I can never remember which index is which (but I also couldn't remember without checking back when I used Matlab either). But the numpy notation is just a special case of a more general and flexible indexing system (e.g. it works for higher dimensions too). Plus, as I've said, you should rarely need it in practice.
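A mnemonic-by-example for the which-index-is-which problem:

```python
import numpy as np

v = np.array([1, 2, 3])   # shape (3,)

col = v[:, None]          # column vector, shape (3, 1)
row = v[None, :]          # row vector,    shape (1, 3)

# Outer and inner products then fall out of broadcasting / matmul:
outer = col * row         # shape (3, 3), same as np.outer(v, v)
inner = v @ v             # plain scalar
```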


> Autocoding straight to embedded

I used this twenty-something years ago. It worked, but I would not have wanted to use it for anything serious. Admittedly, at the time, C on embedded platforms was a truly awful experience, but the C (and Rust, etc) toolchain situation is massively improved these days.

> Plotting still better than anything else

Is it? IIRC one could fairly easily get a plot displayed on a screen, but if you wanted nice vector output suitable for use in a PDF, the experience was not enjoyable.


> The issue was never the syntax—it was the runtime. Why readable math still matters in a world aided by LLM-assisted code generation

I’m going to stop you right there. Matlab has 5 issues:

1. The license

2. Most users don’t understand what makes Matlab special and they write for loops over their arrays.

3. The other license

4. The other license

5. The license server

Mathworks seems to have set up licensing to maximize how much revenue they can extract with no thought given to how deeply annoying it is to use.
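On point 2, the fix is the same in MATLAB and NumPy: replace the element-wise loop with an array expression. A NumPy sketch (made-up workload):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_000)

# The loop pattern that tanks performance in MATLAB (and in NumPy):
def slow(x):
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[i] ** 2 + 1.0
    return out

# The vectorized version: one array expression, no interpreter-level loop:
def fast(x):
    return x ** 2 + 1.0
```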


For me the friction of dealing with licenses would make it hard to fully integrate a commercial package into my routine. Commercial developers have to decide how they expect a product to be used, so they can allocate finite resources. This invariably imposes limits on users.

In my case, trivial uses are as important as high-visibility projects. I can spin up a complete Python installation to do something like log data from some sensors in the lab, while I do something in another lab, and have something going at my desk, and at home. I use hobby projects to learn new skills. I've played with CircuitPython to create little gadgets that my less technically inclined colleagues can work with. I encouraged my kids to learn Python. I write little apps and give them to colleagues. I probably have a dozen Python installations running here and there at any moment.

This isn't a slam on Matlab, since I know it has a loyal following. And I'm unaware of an alternative to Simulink, if that's your bag. And Matlab might be doing the right thing for their business. My impression is that most "engineering software" is geared towards the engineer sitting at a stationary workstation all day, like a CAD operator. And this may be the main way that software is used. Maybe I'm the freak.


That's precisely the point(s), the runtime's issues (closed source, cost, etc) are what is helping with the declining popularity of the language when really the language can be handy to people who work in math-heavy industries.

Thankfully, there are fast open source alternatives out there now. Hint hint, RunMat ;)


For me, the main appeal to MATLAB was the REPL experience, allowing me to experiment with ideas and inspect results in the UI. Python notebook have bridged the gap a bit, but always require a combination of libraries (with different docs & design choices) to write a single script.

Yup, for both Matlab and Maple the big feature was the Jupyter notebook experience right out the box. On top of that, at the time it had a pretty high number of math functions implemented.

For MATLAB, there exist many high-quality free and/or open source toolboxes from the community and academia.

Also there are high quality free and/or open source alternatives.

GNU Octave https://octave.org and Octave online https://octave-online.net/

Freemat https://freemat.sourceforge.net/ (sadly no ongoing development)

Scilab https://www.scilab.org/ and Scilab online https://cloud.scilab.in/


Indeed, there are many high-quality alternatives (sometimes described as "MATLAB clones" back in the day) that never gained bigger traction.

Among modern alternatives that don't strictly follow MATLAB syntax, Julia has the biggest mindshare now?

GNU Octave, as a superset of the MATLAB language, was (is) most capable of running existing MATLAB code. While Octave implemented some solvers better than MATLAB, the former just could not replicate a large enough portion of the latter's functionality, so many scientists/engineers were unable to fully commit to it. I wonder whether runmat.org would run up against this same problem.

The other killer app of MATLAB is Simulink, which to my knowledge is not replicated in any other open source ecosystem.


Shameless plug for RunMat (we wrote this blog article, also an open source alternative for MATLAB):

https://runmat.org


glad I have your ear

as much as I love the meme in your post, it's the reason I won't be able to share it with work colleagues who use matlab every day

just something to consider


Appreciate the comment actually! It's good feedback -- we weren't sure if it's mixing work/memes too much, and keeping our materials clean like engineering docs are probably a good way to go. We may edit it out of the post.

Cheers for the comment!


Personally I haven't used Matlab since college (EE), and I haven't really done those EE things since then. But man, Matlab was indispensable (control loop design, system identification, all sorts of signal processing and E&M stuff).

One of the best things about Matlab was that it had an absolutely enormous library of tools, I could reasonably do everything I wanted, and more importantly, all the notation and convention matched what was used in EE, so I could easily translate whitepapers, and my own and others' calculations into code.

In scientific computing, each discipline has its own preferred way of writing things, so a system identification problem might be stated completely differently by a power engineer, a communications engineer, and a physicist, and deciphering each other's formulas and notations is often just as difficult as understanding the core point of the paper.

That's why a lot of math packages, which were written by academic physicists for other physicists were essentially impossible to use for EEs, and Matlab actually adopting the EE conventions was a godsend, even if it was proprietary.

As a programming language, I didn't like/hate it that much, I guess if I tried to develop an application suite in it like many others did, I'd have had an awful time, but for the simple stuff (in terms of programming) I did with it, it was fine.


Many people use Octave https://octave.org/ which is compatible (generally) with Matlab, supports this simple syntax, and is open source software. Indeed, I've taken at least one class where the instructor asked people use Octave for these kinds of calculations.

Yep -- Octave was very helpful for me in school.

Octave is not particularly fast.

RunMat is very fast (orders of magnitude -- see benchmarks).


One small piece of feedback for the dev, since I see you've been replying to comments here.

I had to jump like 3 links and 4 pages down to figure out what runmat actually "is" / "does".

As someone who's done their whole thesis using Octave this looks interesting.

I love Octave, it's one of my favourite languages. And, for reasons I don't understand even myself, I don't like matlab that much (though I admit their documentation is excellent).

How would you "sell" runmat to someone like me?


Thanks for digging in ;) We just released RunMat in August as an open-source, fast MATLAB runtime. The goal is to make it the fastest way to run math, period.

Coming from Octave, you'll notice significant speedup advantages, you can see some of our benchmarks with it here https://runmat.org/blog/introducing-runmat

Last month, we put out 250+ built-in functions and Accelerate, which fuses operations and routes between CPU/GPU without any extra code/memory management, i.e. no GPUarray.

We're still fleshing out the plotting functions, but we'll have updates to share around that and a browser version very soon.


Why do you focus exclusively on matlab as your competitor posterchild?

I feel like you should be saying Matlab / Octave wherever possible; especially since your target audience is far more likely to be the one that wants a "faster Octave" rather than a "cheaper Matlab".

PS: Don't trust github language stats; half of that code is octave specific, but still gets labelled as Matlab.


What's the business model?

An underrated aspect of Matlab is its call-by-value semantics. Function arguments are copied by default. Python+NumPy is call-by-reference; mutations to array arguments are visible to the caller. This creates a big class of bugs that is hard for non-programmers to understand.
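The bug class in its simplest form (hypothetical function names):

```python
import numpy as np

def normalize(v):
    v /= v.sum()        # in-place: silently mutates the caller's array
    return v

def normalize_safe(v):
    return v / v.sum()  # new array: the caller's data is untouched

a = np.array([1.0, 3.0])
normalize(a)            # a is now [0.25, 0.75] -- surprising if you
                        # expected MATLAB's copy-on-pass semantics

b = np.array([1.0, 3.0])
normalize_safe(b)       # b is still [1.0, 3.0]
```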

Interesting... I wrote a similar post about MATLAB's syntax a while ago, and I still think MATLAB is one of the best calculators on the market.

RunMat is an interesting idea, but a lot of MATLAB's utility comes from the toolboxes, and unless RunMat supports every single toolbox I need, I'm going to be reaching for that expensive MATLAB license over and over again.


Yep! Makes sense. Though I think the cost of writing these toolboxes is lim --> 0.

Will have a really solid Rust-inspired package manager soon, and a single #macro to expose a Rust function in the RunMat script's namespace (= easy to bring any aspect of the Rust ecosystem to RunMat).


I wouldn't be so sure that writing those toolboxes is cheap. You need an aerospace engineer to write the aero toolbox, or you are going to miss subtleties. I assume you need a biologist to write the biology toolboxes. All of these domain experts are really expensive, and I would not trust a toolbox that hadn't been reviewed by them.

It's funny that you listed 1-based index as a strength, and another poster here lists it as a weakness. Goes to show there's really no agreement when it comes to indexing!


Glad to see this. MATLAB doesn't even have consistent/sensible syntax, or at least doesn't have consistent syntax relative to almost all other programming languages in existence.

In this particular case, there was no need at all for the reshaping, and the result could have been achieved with just:

    Z = Y @ X
    W = np.c_[Z, Z]

Defending Matlab code in 2025 is like defending Emacs: it's not that you don't have logically good points, in many cases, it is just that you are so completely out of touch with modern advances, communities, and requirements that it isn't even clear that you are speaking to anything more than what amounts to a rounding error.

EDIT: Specifically, it is extremely hard for me to think that anyone should be convinced to learn Matlab in 2025 - this seems to be a statistically useless and obviously soon-dying skill. Any logical arguments about what Matlab offers NOW seem to entirely ignore - what seems to me - this obvious practical reality.


> Defending Matlab code in 2025 is like defending Emacs

I feel attacked.


LOL. My friend, I sympathize deeply, and this was not the intention.

When I read "The honest truth: ...", the AI-generated alarm bells go off in my head. Whether the article is human written or not.

Seems like Claude to me.

> # We must reshape X to be a column vector (3,1)

> # or rely on broadcasting rules carefully.

> Z = Y @ X.reshape(3, 1)

Why not use X.transpose()?


Actually, I just tried Y @ X in Numpy and it works just fine.

It's because in Python 1-dimensional arrays are actually a thing, unlike in Matlab. That line of code is a non-example; it is easier to make it work in Python than in Matlab.


The result of `Y @ X` has shape (3,), so the next line (concatenate as columns) fails.

To make `Z` a column vector, we would need something like `Z = (Y @ X)[:,np.newaxis]`.
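Concretely, using the article's assumed 3x3 example:

```python
import numpy as np

X = np.array([1, 2, 3])
Y = np.arange(1, 10).reshape(3, 3)
Z = Y @ X                              # shape (3,), 1D

# np.concatenate([Z, Z], axis=1) raises here: axis 1 does not
# exist on a 1D array. Adding the axis back fixes it:
Zc = Z[:, np.newaxis]                  # shape (3, 1)
W = np.concatenate([Zc, Zc], axis=1)   # shape (3, 2)
```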

Although, I'm not sure why the author is using `concatenate` when the more idiomatic function would be stack, so the change you suggest works and is pretty clean:

   Z = Y @ X
   np.stack([Z, Z], axis=1)
   # array([[14, 14],
   #        [32, 32],
   #        [50, 50]])
with convention that vectors are shape (3,) instead of (3,1).

You do realize how many arbitrary words and concepts there are here that are nowhere to be found in "conventional" math, right?

I know what all of these do, but I just can’t shake the feeling that I’m constantly fighting with an actual python. Very aptly named.

I also think it’s more to do with the libraries than with the language, which I actually like the syntax of. But numpy tries to be this highly unopinionated tool that can do everything, and ends up annoying to use anywhere. Matplotlib is even worse, possibly the worst API I’ve ever used.


> To make `Z` a column vector, we would need something like `Z = (Y @ X)[:,np.newaxis]`.

Doesn't just (Y @ X)[None] work? None adding an extra dimension works in practice but I don't know if you're "supposed" to do that


It seems `(Y @ X)[None]` produces a row vector of shape (1,3),

   (Y @ X)[None]
   
   # array([[14, 32, 50]])
   
but `(Y @ X)[None].T` works as you described:

   (Y @ X)[None].T
   
   # array([[14],
   #        [32],
   #        [50]])

I don't know either RE supposed to or not, though I know np.newaxis is an alias for None.

Isn’t X.T still valid? I believe it’s been phased out in pandas but still around in numpy.

It doesn't seem to be the example here, but I know that X.transpose() does not work if X is a (3,) and not a (3,1), which is the common way to present vectors in numpy. MATLAB forcing everything to be at least rank-2 solves a bunch of these headaches.
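Right, and it's easy to check:

```python
import numpy as np

x = np.array([1, 2, 3])      # shape (3,)
xt = x.T                     # .T is a no-op on 1D arrays: still (3,)

x2 = x[:, np.newaxis]        # shape (3, 1)
x2t = x2.T                   # transpose only does something once 2D: (1, 3)
```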

I thought so too, but it doesn't seem to work since X has shape (3,).

This seems to work,

   Z = Y @ X[:,np.newaxis]
though it is arguably more complicated than calling the `.reshape(3,1)` method.

> Why not use X.transpose()?

Or just X.T, the shorthand alias for that


I dislike Matlab's licensing and spaghetti-style coding practices as much as anyone, but there's another reason Python and other modern replacements are scorned. There are some algorithm/theorem implementations that will produce different results based on the platform and version you are using. With Matlab the chance of that happening is much, much lower, if not zero.

I went to college a few miles from Mathworks's global headquarters.

They came to speak at my school and described open source alternatives (Python in particular) as the biggest threat to MATLAB.

I think if they open-sourced the MATLAB runtime and embraced a model similar to Canonical or Red Hat where users paid for support or integrations, they'd make more money. But it's hard to get there from where they are now.


Alas, AI-written. Makes it much harder to trust.

I remember my first encounter with Matlab. Some YouTuber was building a toy rocket and he was simulating it in Matlab (Simulink). He just put in the weight of the rocket and it gave him the trajectory, apogee, flight time etc. It was like magic to a beginner like me.

You can do the same thing in other languages but it won't be built in like that.


Its been a while since I worked with MATLAB and others. Whats up with GNU Octave these days? IIRC thats what folks were championing 10 years ago when anyone was talking about the problems with MATLAB.

I sometimes run my MATLAB code under Octave (generally needs some minor tweaks to catch up with a few new features) and while I don't hit bugs, I find it is much slower.

I think the MATLAB JIT compiler is probably difficult to match.


There is also Nelson as a fairly new Octave and Matlab compatible numerical computing language.

https://github.com/nelson-lang/nelson


Okay, but what's your business model? We've all been down this road before.

We don’t have a finalized business model yet; right now the focus is getting the open-source runtime solid, useful, and very fast. If we add paid stuff later, it’ll be around optional services (not taking features away from the core runtime), and we’ll be clear about it up front.

Matlab is annoying for many reasons. But to me the main advantage of using Matlab over Python is the plotting.

Plotting in Python is still a pain. Notebooks help, but they still don’t let you add to a plot piece-by-piece; and 3D plots are possible but nowhere close to as simple as it is in Matlab.


Matlab code is fantastic for prototypes and for getting a "feeling" before doing the expensive optimal implementation.

If the rewrite is for performance, ideally the logic capture = the thing executing in production.

Cases where a JIT running would conflict with requirements notwithstanding (e.g. HIL with strict requirements and whatnot)...


Prototype in Matlab, production in C.

Simulink is great for Rapid Prototyping

So this is basically jax?

Every data scientist and statistician who joins our team is always happier moving to Python or R away from Matlab after using it for a bit. I guess it’s ok for academia but at the two large companies I have worked at, no one is using Matlab or complaining about it being gone.

Most, if not all, of the points made in the article seem to stem from a sense of "we need to optimize the code to be readable by mathematicians, not programmers", which is fair, depending on what you're doing. But it goes a bit overboard with the safety argument; we've seen much better ROI by focusing on memory safety rather than abstract mathematical proofs.

In fact, the entire safety argument is undermined by the author themselves:

> The engine is closed source. You cannot see how fft or ode45 are implemented under the hood. For high-stakes engineering, not being able to audit your tools is a risk.

What's the point of optimizing your code to be easy for physicists/mathematicians to read for safety, when you can't even verify what the compiler will produce?

I suppose it basically boils down to whether your org's engineering is run by academics or software engineers, but Matlab doesn't really do anything that python can't for free. And python is more accessible, has more use cases, and strong academic support already.


What about SciLab?

Excellent piece of technology. Now in 2026.0 version.

It gets better and better all the time.


What a terrible article. The author does not understand matlab at all and he is also either lying or totally clueless.

Matlab is successful because of precisely one thing, which nobody has replicated. It offers a complete software environment from one source.

Nowhere else can you get scientific computing, a GUI toolkit, a high level embedded software environment, a HiL/SiL toolkit, a model based simulation environment, a plotting and visualization toolkit and so much more in a single cohesive package. Nobody else has any offering that comes even close.

>The engine is closed source. You cannot see how fft or ode45 are implemented under the hood. For high-stakes engineering, not being able to audit your tools is a risk.

This is just a lie. Open matlab and you can inspect all the implementation details behind ode45. It is not a black box.

>The Cloud Gap: Modern engineering happens in CI/CD pipelines, Docker containers, and cloud clusters. Integrating a heavy, licensed desktop application into these lightweight, automated workflows is painful.

Another lie. See: https://de.mathworks.com/help/compiler/package-matlab-standa... Mathworks has already done all the hard work for you. I do not understand why the author feels the need to speak authoritatively on a subject he absolutely does not understand.


> Nowhere else can you get scientific computing, a GUI toolkit, a high level embedded software environment, a HiL/SiL toolkit, a model based simulation environment, a plotting and visualization toolkit and so much more in a single cohesive package. Nobody else has any offering that comes even close.

Mathematica does. Arguably Mathematica is even more cohesive because it's not split up into "feature sold separately" packages.


You're too generous here. This has all the hallmarks of an AI-generated article, and HN is once again duped into passionately arguing with something that took zero effort to produce.

>> The engine is closed source. You cannot see how fft or ode45 are implemented under the hood. For high-stakes engineering, not being able to audit your tools is a risk. This is just a lie. Open matlab and you can inspect all the implementation details behind ode45. It is not a black box.

How do I see the .c files / trace how `ode45` will execute on my machine? Can I see the JIT's source code?

--

Entitled to your view, but clearly a difference of opinion here. From the perspective of open / closed source -- maybe for you it qualifies as open source, but I can't follow the logic chain, so to me MATLAB is not open source.


I explicitly pointed out what the article was lying about.

"You cannot see how fft or ode45 are implemented under the hood." is a totally false statement. You absolutely can do exactly that. This is not a matter of opinion. Right click the function and open it, you can view it like any other matlab function.

> From perspective of open / closed source -- maybe for you it qualifies as open source

Matlab is obviously not open source. Who said anything about that? The article claims you can not audit ode45, that is false and it seems pretty embarrassing for someone speaking authoritatively about matlab to make such basic claims, which every matlab user can disprove with two clicks. Every single matlab user has the ability to view exactly how ode45 is implemented and has the ability to audit that function. This is not a matter of opinion, this is a matter about being honest about what matlab offers.


Seeing in MATLAB code how ode45 is implemented != how the thing is running on the machine. That's a very small top slice.

But okay -- as I mentioned, you're entitled to your views!


fft.m is the more obvious example of the closed source algorithm here. You open it and it just says

% Built-in function.

The algorithms written in C and compiled by mex are the "built-in" ones that are not viewable.


How about fft? If you open fft.m, you get just a commented file that ends with

% Built-in function.

If the algorithm is implemented as a compiled mex function, then you cannot inspect its details.


They’re trying to sell their product, which seems like a new language + runtime inspired by matlab. Reinventing Julia perhaps? It will be missing all the things that make matlab unique, as you point out.

It's open source / free. But yes, of course we want people to try it and get value from it.

At GKN Aerospace, I was hired predominantly to migrate the team off Matlab and rewrite everything in Python. There was pushback from OEMs who wanted their specs to be in Matlab but eventually everyone folded. Having to need 2 licenses to run on 2 cores was horrible UX IIRC. I'm glad I learnt from that experience.

No. Just no.

Terrible HPC integration.

Proprietary runtime.


I worked at three large universities where folks ran Matlab processes on HPC all the time.

"I have direct experience of universities doing horrifyingly wasteful computations" is not the ringing endorsement for Matlab you might think it to be...

Granted, I've seen Python horrors on university HPC clusters too, but at least there are libraries and clear documentation (e.g. Lightning, Ray, etc) for how to properly manage these things. Good luck finding that with Matlab.


Universities are not wasteful. University graduates earn more and face lower unemployment than high school graduates. More universities correlate with higher GDP per capita.

GP said "universities doing horrifyingly wasteful computations".

You claimed they asserted that "Universities are wasteful".

Put the goalposts back where they were.


To find the goalpost look at the parent comment of the one you first saw.

Sorry if I was unclear, but GP is correct, you are misreading quite deeply.

I am saying that because it is much harder to find good documentation on using MATLAB on HPCs, a lot of computations on HPCs that use MATLAB are highly wasteful compared to if they had been written using a language and/or tools that make it much easier to use HPC resources more efficiently. I was NOT in any way saying that "universities are wasteful".


Fair. My point though is that in scientific research, accuracy of the results comes first. It does not mean anything that research was performed efficiently if the results are incorrect.

Ehhh, as someone who did cognitive neuroscience in grad school and wrote non-stop Matlab 20 years ago, this is correct but insufficient. The toolbox and licensing situation sucked, but that's not why I hated it.

At the time, we had massive issues with using Matlab with large fMRI/EEG/MEG data sets, and attempts to write naive matrix-based versions of code would occasionally blow up memory consumption, and turn a 3-week analysis into a 50-year analysis.

So, yeah, I had to replace a decent amount of pretty matrix code with gnarly, but performant, for loops. Maybe the situation has improved since then, but I don't care to find out.

---

Want strings? You had your choice of cells or 2D char matrices. Who ever thought char matrices were a good idea? strfind() vs findstr()? Even after years of Matlab, I had to double-check the docs to recall which one I wanted.

---

Anything to encourage reliability or assist scientists in their workflows, like built-in version control? Nope. Or basic testing support for your ad hoc statistical functions? No.

I guarantee there's a ton of Matlab code that produced biased/wrong results, and nobody knows because it produced numbers in the expected range, and nobody ever thought to check it.

Mathworks was in a unique position to improve scientific code quality, and did nothing with it.

---

Matlab really excelled at only two things: matrix math and making pretty plots. As soon as you needed to do anything else, it was unbelievably painful, and that's where my personal dislike came from.


Have to agree, working with fMRI and MRI data. Compared to Python, Matlab is nearly impossible to debug, or even to find basic workarounds for problems in this domain, because of how closed and niche Matlab is: the lack of community and trustworthy documentation, and the inability to write sensible code with performant for loops.

In my experience, those arguing for the value of Matlab are mostly 50+ years old, or are in an extremely niche industry using something like e.g. Simulink or other highly-industry-specific tooling, in which case it seems the considerations are irrelevant to something like 99.5% of the modern population.

Matlab will clearly be dead and irrelevant otherwise, in a short amount of time and in almost all domains.

EDIT: And few things indicate an out-of-touch / cookie-cutter or almost-certainly p-hacked neuroscience paper like the use of MATLAB. It is a smell for incompetent legacy research in this domain.


> It is a smell for incompetent legacy research in this domain.

Wild to hear. At the time, almost everybody in the field used it. The then-dominant fMRI package (SPM) and EEG/MEG package (Fieldtrip) were both open-source Matlab. (I think I knew one prof who used BrainVoyager, and that's because he hired a former BV employee as an RA.)


"At the time" being key here, yes. SPM is a red flag now.

"then-dominant", are you sure?


