Do any other programmers struggle with music theory because the exact same thing can be called many different names, depending on its position, function, etc.?
My brain kind of refuses to accept this, and I struggle with it.
I've been programming for 40 years and playing music for 50. My original background was classical and I play jazz today. I'm a fluent reader.
I think that historically, people were already familiar with "standard" notation and terminology before they learned theory, so it wasn't a major hurdle. Not only do theory students (i.e., at the college level) know how to read, but they are also required to learn keyboard. I've heard people say: Don't try to learn theory without a keyboard in front of you.
Music instrumentation and notation are technologies, and as such they are replete with historical baggage. I have an unorthodox view, which is that if someone is not already usefully reading standard music notation by adulthood, they have no reason to learn it. Explaining theory to non-readers would be better served by an invented notation that sidesteps the historical naming problems.
One such notation is the Nashville number system. It's not nearly universal, but for the purposes of just enjoying a wide swath of popular and folk music, it actually works. It's fun to see how many different songs boil down to a few basic patterns.
A computerized tutorial could show both notations. There is a lot of instructional material for guitar, that shows conventional notation in parallel with a notation based on a diagram of the fingerboard.
Programming would be just as bad if we were stuck with a 400 year old language. Fortunately we develop new languages, but that's because old programs just get thrown away, and it's easy to teach a computer to read a new language. We also teach programmers not only how to read, but how to create better notation themselves.
This is the first time I've heard of the Nashville number system. How does it differ from Roman numeral analysis? Is it essentially the same concept, but with Arabic numerals instead?
If you tell me you're going to make my life easier by teaching me "Roman numeral Analysis", I'm gonna run away. That sounds scary and vaguely reminds me of Latin class.
"Nashville number system" sounds easy to master. It's country, and country has a well-known self-imposed reputation as simple. (In truth, country can be just as complicated as anything else. But I'm talking about first impressions.)
I used to be a part of a congregation whose band spoke in 5ths and 7ths and I had no idea what they were going on about. And then I learned that part of joining the band was learning the Nashville system. It's just the simplest way to get everyone on the same page, and when you say "Nashville" musicians immediately relate to what you're saying.
Pretty much the same, adapted to its specific purpose. For instance, Nashville charts also include notation for the form of a song, such as Intro, Verse, Chorus, etc.
One reason for its usefulness was how recorded music was made. The session musicians had to be able to choose a key that accommodated the singer's range, on the spot, so a transposable format was ideal.
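To make "transposable" concrete, here's a minimal Python sketch (my own illustration, not any standard tool; the function name `transpose_chart` is made up). The same chart of numbers yields chord roots in whatever key the singer needs. A real Nashville chart also marks chord quality separately (e.g. "6-" for a minor six chord); this sketch only computes the roots, and it doesn't handle enharmonic spelling.

```python
# Semitone offsets of the major-scale degrees above the tonic.
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]
# One name per pitch class (flats chosen arbitrarily; enharmonics ignored).
NOTE_NAMES = ['C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B']
PC = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def transpose_chart(numbers, key):
    """Turn a Nashville chart like [1, 5, 6, 4] into chord roots in `key`."""
    tonic = (PC[key[0]] + key[1:].count('#') - key[1:].count('b')) % 12
    return [NOTE_NAMES[(tonic + MAJOR_SCALE_STEPS[n - 1]) % 12] for n in numbers]

transpose_chart([1, 5, 6, 4], 'G')  # → ['G', 'D', 'E', 'C']
transpose_chart([1, 5, 6, 4], 'C')  # → ['C', 'G', 'A', 'F']
```

Same chart, two keys: the band only ever has to remember "1 5 6 4".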
I think the industry in New York had a different scheme, which was to write for a "standard" male tenor voice, and rely on the musicians to handle exceptions.
Yes, but we have similar issues in programming. Is a list a hash? Is a hash a dictionary? Are these all arrays? Are arrays collections?
Of course, there is a right answer, and depending on the language, all of the above can be VERY different things. But they're also similar enough to be thoroughly confusing... their distinctions take practice to master.
Likewise, in music there is a right time to call a note a flat, a right time to call it a sharp, and a right time to talk about intervals instead. They can all technically refer to the same thing, yet there is a proper word to be used in any given context.
It's all very confusing, until you start using those terms in their proper contexts on a regular basis. Just like in programming.
Some other examples:
"=" vs. "==" vs. "===" vs. ":" vs. "=>" vs. "~>"
"function_name first_parameter" vs. "function_name(first_parameter)" vs. "hash_name[key]" vs. "object.property_or_method"
"MethodName" vs. "methodName" vs. "method_name"
"function" vs. "method"
...none of these are intuitive. But we use them, we get used to them, and then they seem obvious and we wonder how we could have ever written these things differently.
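To make the list/hash point concrete, here's a tiny Python sketch (the variable names are just illustrative): the same square-bracket syntax names two different operations, on two structures that go by different names in different languages.

```python
# A list ("array" in many languages): indexed by position.
scores = [90, 85, 70]
assert scores[1] == 85

# A dict ("hash", "map", "dictionary", "associative array"...): indexed by key.
scores_by_name = {'alice': 90, 'bob': 85}
assert scores_by_name['bob'] == 85

# Identical-looking syntax, different semantics — and which word fits the
# structure ("list", "hash", "array", "collection") depends on the language.
```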
I think the same goes for musical notations. I struggle with them heavily, but I'm far too casual of a guitar player to take the time and learn the language properly. It's tempting to say the problem is the complicated and confusing language of music, but I know the problem is my own unwillingness to put in the time.
It's all about thinking in thirds. If you want an A chord, it has to use the letter names A, C, E — stacked in thirds. A major is spelled A, C#, E, not A, Db, E, because Db breaks the stack of thirds.
Also, and most importantly, if you're playing an instrument like violin, C# and Db are not actually the same note. Since they happen in different contexts, and have different positions in whatever key they're in, they have different psychological roles and are actually played differently by the player.
If I'm not mistaken, a C# would be played slightly sharper, and a Db slightly flatter to fit the particular key.