Great instructional video. First place I learned E natural minor with his scale fragments section.
Yes, not a new technique by any stretch of the imagination. AFAIK John Petrucci takes a less aggressive approach to raising BPM. Funnily enough, Shawn Lane went into a very similar methodology over 30 years ago[1].
Are we assuming that "testing" is limited to only exercising the single-threaded behavior of a function? I'm curious how others approach effective testing of multi-threaded behavior.
Sanitization of data is such a strange security practice to me. It feels like any vulnerability sensitive to data sanitization just boils down to a failure to properly encode or escape data for a target language that is susceptible to injection attacks, e.g. SQL, HTML, JavaScript. Is there a real-world scenario where data sanitization is required and proper data encoding/escaping is not the better solution?
There are tons of languages and frameworks made by developers who know what they are doing that do not treat everything blindly like strings.
For SQL in particular, you should never build queries directly from user input - any modern database supports bind variables or parameters, which completely eliminate any need for sanitizing input.
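A minimal JDBC sketch of what that looks like (the `users` table, column names, and `findEmail` helper are hypothetical, just for illustration): the SQL text is a fixed template, and user input travels as a bind parameter the driver sends separately from the query.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BindVariableExample {
    // The SQL text is a constant template; user input never becomes part of it.
    static final String SQL = "SELECT email FROM users WHERE name = ?";

    static String findEmail(Connection conn, String userInput) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            // The driver ships the value as a bind parameter, separate from
            // the SQL text, so even "'; DROP TABLE users; --" is pure data.
            ps.setString(1, userInput);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}
```

Note there is no escaping step anywhere: there is nothing to sanitize because the input can never be interpreted as SQL.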
I agree with you regarding sanitization, and I'd add that having to sanitize input for security purposes is a big code smell and a sign of insecure-by-design code.
I feel like answering this comment could start an argument, which I have no interest in doing.
I do, however, want to point out that anyone interested in comparing language design choices can conclude for themselves that this is likely a strong factor.
You can find references like the classic "PHP: a fractal of bad design"[1] which not only talks about the language itself but SQL injection, error handling and tons of other issues. It summarizes most of the important points.
I can also add a few issues like[2][3], which unfortunately are not isolated incidents: these are a reflection of core design decisions and how the language approaches software design as a whole.
I stand by my point, which I'll define more precisely as:
"A badly-designed language either makes it hard for developers to make good choices, or makes it easy for them to make bad ones."
PHP is not alone, but it is a prime example of this.
You can disagree with this assessment - and that's OK.
I have to disagree, because your assessment is outdated and somewhat shallow. My impression is that it doesn't rest on much real programming experience with PHP either.
To stay with the topic, these arguments are in essence a way of trying to hold PHP as a language accountable for functions it exposed in its original mysql extension, deprecated about a decade ago. These functions actually belong to the underlying C library developed by MySQL, and, as has been the custom with tons of functionality brought into PHP from elsewhere over the years, the entire library was passed through wholesale. The very same functions - e.g. escape_string(), the culprit "luring" users away from parameterization - are still available in Oracle's mysql C library, and are to some extent also available in, for example, the mysql Python connector through its C extension API.
At the time "a fractal of bad design" was published, a handful of its talking points were already out of date. It got tired and trope-y years ago, and PHP isn't what it was 15 years ago. Referencing the article today is about as valid as regurgitating "classic" 1950s health advice to Ironman triathletes.
As I said, I have no intention of starting an argument.
I would just like to point out a few issues:
A) I deliberately focused on the language itself in my claims.
The functions I cited earlier were meant to illustrate the side effects of a certain mindset of the core language.
Keep in mind: these functions are not from some random library in the ecosystem, but from the core library of the language, providing core functionality. And that hasn't changed, nor have the functions.
B) You've made a number of statements in response to my comments, but I don't see any supporting references.
The only justification you've given is your own opinion that "the article is too old and not relevant anymore".
Which takes us to point C.
C) I skimmed through the article again, along with the general documentation of the language, and I stand by this statement:
"Every major point in that article about the language is as relevant today as it was in 2012."
PHP might work fine for templating some web pages, but so does Jinja. As a general programming language, it falls short in too many ways to list here. You can revisit the original article I mentioned before for a more comprehensive list, in particular the "core language" section.
Well, at least that's my opinion. As I said, you're free to disagree - and that's OK.
--
Side note: The easiest approach during a disagreement in an online discussion is to write a lot of "opinion-based statements" as if they were facts, and leave everything else as an exercise for the reader.
If you want to be taken seriously, please don't do that.
And tons of such frameworks have been written in PHP; prepared statements with an adapter-agnostic database connection layer are first-class citizens in PHP.
>"Is there a real-world scenario where data sanitization is required where proper data encoding/escaping is not the better solution?"
In the context of SQL queries which accept variable input, the only correct approach is to parameterize the queries, never to string-encode the variables. So, yes. But perhaps you implied parameterization as well.
I think cognitive load has a lot more to do with the paradigm that the code is written in than any particular type of author's contribution to the code. For instance, the object-oriented paradigm by design increases cognitive load by encouraging breaking up otherwise straightforward logic into multiple interfaces, classes, and methods.
I've built a VST3 plug-in that simulates a Mesa Boogie Mark IIC+ preamp purely from the circuit.
The approach doesn't seem popular for professional plug-ins likely because it wasn't viable for real time until modern CPU enhancements became available. Performance scales with frequency of the input which is interesting and seems to be a consequence of using an iterative solver on a system of equations and using the previous sample's state vector as a guess for the current sample.
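To illustrate the warm-start idea (not the plug-in's actual solver - just a toy Newton iteration on a made-up nonlinearity, solving v + tanh(v) = x per sample): reusing the previous sample's solution as the initial guess means adjacent, slowly-varying samples converge in very few iterations, while higher-frequency input moves the guess further from the root each sample and costs more iterations.

```java
public class WarmStartSolver {
    // Solves v + tanh(v) = x for v by Newton iteration, starting from
    // `guess`; stores the root in vOut[0] and returns the iteration count.
    static int newton(double x, double[] vOut, double guess, double tol) {
        double v = guess;
        int iters = 0;
        while (true) {
            double t = Math.tanh(v);
            double f = v + t - x;
            if (Math.abs(f) < tol || iters > 100) break;
            double fp = 1.0 + (1.0 - t * t); // f'(v) >= 1, so the step is safe
            v -= f / fp;
            iters++;
        }
        vOut[0] = v;
        return iters;
    }

    public static void main(String[] args) {
        double[] v = {0.0};
        double prev = 0.0;
        int warmTotal = 0, coldTotal = 0;
        for (int n = 0; n < 1000; n++) {
            // Slowly varying input: adjacent samples are close, so the
            // previous solution is an excellent initial guess.
            double x = 3.0 * Math.sin(2 * Math.PI * n / 1000.0);
            warmTotal += newton(x, v, prev, 1e-9);
            prev = v[0];
            coldTotal += newton(x, v, 0.0, 1e-9); // cold start, for comparison
        }
        // Warm-starting typically needs far fewer total iterations,
        // and the gap shrinks as input frequency rises.
        System.out.println(warmTotal + " vs " + coldTotal);
    }
}
```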
On my MacBook M3 it requires between 50 and 70% of a single core to produce a 2x oversampled output at 48000Hz. This can be scaled back by relaxing the solution tolerance bounds, getting down to around 25% with minimal quality loss.
If the PC offsets are non-repeating, you would just create a table from a known start. Kinda like Ouroboros, or the Hamming distance between vertices of a cube without repeating.
The obvious solution to me is to implement streaming/buffered processing of the content while downloading it instead of downloading the entire content into memory to be processed in a single contiguous byte[].
Buffered IO would have the CPU processing acting as a natural backpressure mechanism to prevent downloading too much content and also prevent unbounded memory allocation. Each CPU worker only needs to allocate a single small buffer for what it processes and it can refill that same buffer with the next IO request. Your memory usage becomes entirely predictable and will only scale with how many concurrent threads you can actually execute at once.
Also, no matter how you artificially rate limit the virtual thread scheduling (e.g. via semaphore), if you still insist on downloading the entire content into memory before starting processing then obviously you cannot process any single piece of content larger than what can fit into available memory at any given time.
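A minimal sketch of the buffered approach (the download is faked with an in-memory stream, and summing bytes stands in for real processing - both are assumptions for illustration): one small reusable buffer per worker, with the blocking read acting as the backpressure described above.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class StreamingSum {
    // Processes a stream chunk-by-chunk with one small reusable buffer,
    // instead of loading the whole payload into a single contiguous byte[].
    static long sum(InputStream in, int bufSize) throws IOException {
        byte[] buf = new byte[bufSize]; // the only allocation, refilled per read
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) { // blocking read = natural backpressure
            for (int i = 0; i < n; i++) {
                total += buf[i] & 0xFF;    // "processing" the chunk
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[1 << 20]; // stands in for the remote content
        Arrays.fill(payload, (byte) 1);
        long total = sum(new ByteArrayInputStream(payload), 8 * 1024);
        System.out.println(total); // processed 1 MiB with only an 8 KiB buffer live
    }
}
```

Peak memory here is the buffer size times the worker count, regardless of content size, which is exactly the predictable scaling described above.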