Can anyone speak on how it is to move from an older assembly to a modern CPU? I asked to take an assembly class at a local public college, and was told they wouldn't hold the class because not enough students were interested. This was in 1998, and I truly couldn't believe my ears.
I feel like learning modern assembly would be more useful, but maybe 6502 assembly is far easier to pick up? The first language I learned was Atari BASIC around 1991, and it was enough to get me to keep moving on to better languages and more powerful CPUs, but I wonder where I would have ended up if I had learned assembly either before or after Atari BASIC. I try to keep up with technology, and have read quite a bit about different assembly dialects. I still haven't learned an assembly I can actually program in, and I suppose it's because there are so many other high-level languages now and I feel like I need to keep up with what is used in "industry" rather than what is more obscure (but might help in debugging).
I moved from 6502 assembly to PowerPC assembly. I found 6502 assembly easy to understand because of the limited number of registers, instructions and addressing modes. You didn't have to learn very many.
And as it turned out, I didn't have any problems adapting to the expanded PPC ISA. Being on the 6502 already taught me how to chop problems down into tiny bits and now the bits didn't have to be so tiny. I certainly wasn't trying to do 8-bit chained adds for bigger quantities anymore; it didn't make sense to when I had 32-bit registers and an FPU.
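For anyone who never had to do it, here is a minimal sketch (in C, purely illustrative and not how you would write it on a real 6502) of what "chained 8-bit adds" means: adding two 32-bit numbers one byte at a time while propagating a carry between the bytes, the way CLC plus four ADC instructions do it.

    #include <stdint.h>
    #include <stdio.h>

    /* Add two 32-bit values using only 8-bit additions, propagating a
       carry between the bytes, roughly what a 6502 does with CLC
       followed by four ADC instructions. */
    static uint32_t add32_bytewise(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        unsigned carry = 0;                         /* CLC: clear carry */
        for (int byte = 0; byte < 4; byte++) {
            unsigned byte_a = (a >> (8 * byte)) & 0xFF;
            unsigned byte_b = (b >> (8 * byte)) & 0xFF;
            unsigned sum = byte_a + byte_b + carry; /* ADC: add with carry */
            carry = sum >> 8;                       /* carry out of this byte */
            result |= (uint32_t)(sum & 0xFF) << (8 * byte);
        }
        return result;
    }

    int main(void)
    {
        printf("%u\n", add32_bytewise(70000u, 12345u)); /* prints 82345 */
        return 0;
    }

With 32-bit registers the whole thing collapses into a single add instruction, which is exactly the point.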
I started with BASIC on a VIC-20 in 1979 in a computer class for kids. Then I got my own Atari 400 at home, so I learned BASIC on that. Then around 1984 I got a Commodore 64 and it was amazing. I quickly got past the BASIC part of the C64 manual, and in the back of the manual they had a memory map, all the hardware registers for all the chips, and then all the 6502/6510 opcodes. There was even an entire schematic for the C64 in the back of the book as a fold-out poster. 14-year-old me was hooked.

I got into assembly when I was about 15 and never looked back to BASIC. I wrote a lot of assembly code for many years, creating "crack intros" for cracking groups and demos on the C64. I then moved on to Amiga and 68000 assembly, and I even got into SNES 65C816 (the 16-bit version of the 6502) assembly coding, using a 30th-generation photocopy of the pirated SNES developer manual and a floppy disk game copier, which also had a parallel port that I managed to connect to my Amiga, using the Amiga as the development system and uploading the assembled program to the SNES via the parallel port. I would make "crack intros" for SNES games being traded on pirate BBSs. It was a lot of fun.

Many years later, I'm still doing some 8-bit assembly code for microcontroller projects where I'm trying to use the smallest possible CPU to do interesting things. Everything I learned about assembly on the C64 still applies to the modern microcontrollers (PIC/AVR) that I use today.
Through all of those years I had never learned C. I went from assembly to scripting languages, and was a very early adopter of Javascript, starting the same month it first came out. Only in the last 10 years have I learned C, and it was so easy! After learning assembly and Javascript, C was right in the middle.
I've programmed in over a dozen languages through the years, but I'm really happy with Assembly, C, and Javascript as my stack depending on the project.
Thank you for this. I was born in 1979, but got introduced to the Atari 8-bit computers VERY early, and connected immediately. I have a Raspberry Pi 4, and it sounds like learning assembly on nearly anything right now will pay off. I don't have a problem learning a new language, but C-style languages are the easiest for me. I'm sure their being imperative is a huge plus when I'm learning new things about any imperative language. I actually find JavaScript quite satisfying to write code in, mostly because I can try out functional features more easily than by adopting an entirely different language to do it in.
I spent my teenage years grabbing cracks and texts from BBSs, and enjoyed the whole scene. It's really what sparked my interest in writing software, and I always wished I had learned some lower-level skills beyond running a debugger and poking at memory. I felt like if I knew assembly, I could get into deeper debugging (cracking was initially my "exciting" topic to cover, but now I feel like it would just be understanding memory better when I run into a strange error).
Was the 400 as much of a pain to type on as it looks? I was lucky to inherit an 800 when my uncle bought his first IBM clone, and those had nice keyboards.
Before I got the C64 I got an Atari 600XL because the keyboard was a real keyboard. I had really hacked a lot on the Atari 400 keyboard, so much that it was becoming pretty worn in places, so the better keyboard was really an upgrade.
I noted in a different comment in this thread that at my university we teach MIPS assembly to all first-year compsci and software engineering students. They may then specialise into other areas, security, operating systems, embedded, etc., and in those specialties may need assembly for more modern CPUs.
We have found that when needed, students pick up the newer/more advanced assembly languages (e.g. ARM, x86) fairly well, so we believe the early and universal introduction to MIPS does provide benefits.
ARM borrowed most of its design from 6502. So if you learn 6502, you've learned many of the tricky parts of ARM (such as how the carry flag works with subtraction).
Many of the instructions have similar mnemonics, such as the conditional branch instructions (BEQ, BNE, BCC, BCS, BMI, BPL, BVS, BVC), as well as the arithmetic instructions like ADC, SBC, EOR.
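For readers who haven't hit it yet, the tricky part being referred to is that on both the 6502 and ARM the carry flag acts as an inverted borrow during subtraction: SBC computes A - M - (1 - C), so you set carry before a subtraction rather than clearing it. A hedged C model of that behaviour (just the arithmetic, not real hardware):

    #include <stdint.h>
    #include <stdio.h>

    /* Model of 6502/ARM-style SBC: the carry flag is an inverted borrow.
       Before a single-byte subtraction you SET carry (SEC on the 6502);
       carry comes out clear only if a borrow was needed. */
    static uint8_t sbc8(uint8_t a, uint8_t b, unsigned *carry)
    {
        unsigned result = (unsigned)a - b - (1 - *carry);
        *carry = (result <= 0xFF) ? 1 : 0;  /* no borrow -> carry stays set */
        return (uint8_t)result;
    }

    int main(void)
    {
        unsigned c = 1;                              /* SEC before subtracting */
        printf("%u carry=%u\n", sbc8(5, 3, &c), c);  /* 2, carry=1 (no borrow) */
        c = 1;
        printf("%u carry=%u\n", sbc8(3, 5, &c), c);  /* 254, carry=0 (borrow) */
        return 0;
    }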
Not really (perhaps leaving mnemonics aside). The 6502 has little in common with the first ARM. The ARM's designers liked the simplicity and speed of the 6502 and it was their favourite 8-bit ISA. And a visit to the Western Design Center convinced them to design their own CPU.
However, the early RISC papers were the biggest influence on the design of the ARM. There is even a clue in the name 'Acorn RISC Machine'.
ARM designers liked and had extensive experience with 6502 - it WAS their bread and butter for a long time - and this might be why the mnemonics are so similar (and carry in subtraction - that might have been for making porting easy, but I won't risk assuming that without a primary source).
They obviously also studied the foundational papers on RISC and understood the possibility of having a simple design (such as the 6502, which was very powerful considering its small transistor count) applied to a simpler, more regular instruction set.
Assigning weight to these factors might be a futile exercise, as the designers themselves might not agree.
They were not blind to RISC and, therefore, it made sense to put it in the name of the architecture.
Completely agree with all these points, and that without the 6502 the ARM1 would not have looked like it did. One might say the ARM1 was inspired by the 6502 and that it prevented them from going down the CISCy 68k route.
But I don't think that 'ARM borrowed most of its design from 6502', or (to my mind at least) that it looks like a refreshed 6502. There are just too many fundamental differences:
- ARM1 was a load/store architecture / 6502 wasn't.
- 6502 had a few special purpose registers / ARM1 had loads of general purpose registers.
Plus there are lots of key innovations in ARM1 that weren't in either the 6502 or RISC I, such as conditional execution. Furber and Wilson were really quite innovative and didn't just borrow ideas from other ISAs.
Things like conditional execution always being available, and the barrel shifter always being available, feel a lot more like ideas from VLIW architectures. And when you come from 8-bit instructions and suddenly have 32 bits available, your instructions are a very long word.
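As a hedged illustration of conditional execution (actual compiler output will vary): a branchy-looking max in C can become straight-line predicated code on an ARM-style ISA, because the condition rides on ordinary instructions instead of a branch.

    #include <stdio.h>

    /* On an ARM-style ISA with conditional execution, a compiler can emit
       something like:
           CMP   r0, r1
           MOVGE r2, r0    ; take a if a >= b
           MOVLT r2, r1    ; take b otherwise
       i.e. no branch at all.  (Illustrative sequence only, not captured
       from a real compiler.) */
    int max_int(int a, int b)
    {
        return (a >= b) ? a : b;
    }

    int main(void)
    {
        printf("%d\n", max_int(3, 7));  /* prints 7 */
        return 0;
    }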
Modern CPUs are more difficult to program in assembly.
The simplicity of RISC-V is illusory. Because it lacks many features of normal ISAs, like ARM or Intel/AMD x86-64, writing programs that are both efficient and robust, i.e. which safely handle any errors, is quite difficult in assembly language.
For simple programming in assembly language it is hard to beat the DEC PDP-11 and the Motorola 68000 derivatives.
However, those are no longer directly useful in practice. For something useful, the best would be to learn assembly programming using a development board for some Cortex-M microcontroller, preferably with a less ancient core, e.g. Cortex-M23, Cortex-M33, Cortex-M55 or Cortex-M85, i.e. cores implementing the Armv8-M ISA (the latter two also implement the Helium vector extension).
Probably some development board with a microcontroller using Cortex-M33 would be the easiest to find and it should cost no more than $20 to $30. I would not recommend for learning today any of the development boards with obsolete cores, like Cortex-M0+, Cortex-M3, Cortex-M4 or Cortex-M7, even if those boards can be found e.g. at $10 or even less.
Such a development board can be connected to any PC with a USB cable. All the development tools are free (there are paid alternatives, but those are not better than the free GNU tools).
You can compile or assemble your program on the PC, then load and run it on the development board. You can have a serial console window connecting to your program, by using a serial port on the development board and a USB serial interface. All such development boards have LEDs and various connectors for peripherals, allowing you to see what your program does.
I think that learning to program such an Armv8-M microcontroller in assembly is more useful than learning something like the 6502. Armv8-M is less quirky than the 6502 or RISC-V, and it is something that you can use for implementing some useful home device, or even professionally.
Otherwise, the best is to learn the assembly language of your personal computer, e.g. x86-64 or AArch64, but that is much more difficult than starting with a microcontroller development board from ST (e.g. a cheap STM32 Nucleo variant), NXP, Infineon, Renesas, Microchip, etc.
The most important are the lack of integer overflow detection and the lack of indexed addressing. Integer overflow detection is required for any arithmetic operation unless it can be proven at compile time that overflow is impossible (which is the case mostly for operations on counters or indices whose values are confined to known ranges), while indexed addressing is needed in every loop that accesses arrays, i.e. extremely frequently from the point of view of the number of instructions actually executed.
There is absolutely no reason for omitting these essential features, because their hardware implementation is many times simpler and cheaper than either the software workarounds for their absence or the hardware workarounds that can be implemented in other parts of the CPU core, e.g. instruction fusion.
6502 is much more similar to a normal CPU than RISC-V, because it has both integer overflow detection and indexed addressing.
While I believe that other ISAs are better than 6502 for learning assembly language for the first time, I consider 6502 as a much better choice than RISC-V.
RISC-V could be used for teaching if it were done in the right way, i.e. by showing how to implement, with the existing RISC-V instructions, everything that is missing in RISC-V. In that case the students would still learn how to write real programs instead of toy didactic programs.
However, I have not seen any source that teaches RISC-V in this way, and it would be tedious for newbies, much as if they were first taught how to implement a floating-point library on a CPU without floating-point instructions, instead of being allowed to use the floating-point instructions that any CPU has today.
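To make that concrete, here is a hedged sketch (mine, not taken from any course) of the kind of exercise that approach implies: an overflow-checked signed addition built only from operations a flag-less base ISA gives you, i.e. without reading a V flag.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Signed 32-bit addition with an explicit overflow check using only
       plain ALU operations: overflow occurred iff both operands have the
       same sign and the result's sign differs from it.  On a flagged ISA
       this whole check is a single conditional branch on V. */
    static int32_t add32_checked(int32_t a, int32_t b)
    {
        int32_t sum = (int32_t)((uint32_t)a + (uint32_t)b);  /* wrapping add */
        if (((a ^ sum) & (b ^ sum)) < 0) {   /* result sign differs from both */
            fprintf(stderr, "signed overflow in add32_checked\n");
            exit(EXIT_FAILURE);
        }
        return sum;
    }

    int main(void)
    {
        printf("%d\n", add32_checked(1000000, 2000000));        /* fine */
        printf("%d\n", add32_checked(2000000000, 2000000000));  /* aborts */
        return 0;
    }

The unsigned case is cheaper: compute the wrapped sum and test sum < a to detect a carry out.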
So it's on par for unsigned, and takes two additional independent instructions for signed 64-bit and one for signed 32-bit.
For teaching, using unsigned XLEN-bit values by default is probably a good idea anyway.
> indexed addressing
I'm not sure how much this actually matters in practice.
It's nice when you access multiple arrays at the same index, such that you only need to increment one index instead of every pointer.
Such loops are often vectorized, though, and the indexed loads also become useless once you read more than one value per index, e.g. from an array of structs.
Edit: removed measurements, because I'm not sure they are correct, might add back later.
The cost of providing carry and overflow is absolutely negligible in any CPU and even more so in an OoO CPU, which is many times more complex.
If you mean that when the flags are stored not in a general-purpose register (a possible choice, though it requires an extra register-file write port) but in a dedicated flags register, like in x86 or ARM, then the flags register must also be renamed to allow concurrent operations, like any other register: that is a minor complication on top of the register renaming already done for all the other registers.
What is extremely expensive is not having overflow and carry in hardware and having to implement software workarounds that require several times more instructions.
When loops are vectorized, or you have an array of structures, this does not change anything; you still use the same indexed addressing (or auto-incremented addressing in ISAs that have it). Perhaps you are thinking of scaled indexed addressing, which may not always work for an array of structures, but in such cases you just use simple indexed addressing, with a scale factor of 1.
Without either indexed addressing or auto-incremented addressing you need an extra addition instruction for each memory access, which increases the code size and limits the execution speed by occupying an extra execution port.
Because of this, the highest-performing RISC-V implementations have added non-standard ISA extensions for indexed addressing, but such extensions are still rather awkward because the base instruction encoding was not designed with indexed addressing in mind, so the extensions must use a quite different encoding that has to be squeezed into a limited encoding space.
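A hedged illustration of that extra address computation, with the likely instruction sequences only sketched in the comments (real compiler output will differ, and compilers often strength-reduce the index into a running pointer instead):

    #include <stdint.h>
    #include <stddef.h>

    /* Summing an int32_t array.  With indexed addressing (ARM, x86) the
       load can compute base + i*4 by itself, e.g. something like
           ldr r3, [r0, r2, lsl #2]
       With only base+immediate addressing (RISC-V base ISA) each access
       needs an explicit address computation first, roughly
           slli t0, a2, 2
           add  t0, a0, t0
           lw   t1, 0(t0)
       (Sketches only, not captured from a real assembler.) */
    int64_t sum_array(const int32_t *a, size_t n)
    {
        int64_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }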
- learn assembly as subroutines inside your "daily driver" language on your regular PC, which is going to be x64
- a microcontroller with an in-circuit debugger that you can connect to flashing lights (obvious candidates are an Arduino, which is AVR, or an STM32, which is ARM)
Back in the day, the first of these was easier in environments like the BBC BASIC inline assembler or MS-DOS COM files (which are headerless executables that can be extremely small). You could also learn either of those in emulators.
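The first option is still very approachable today. A minimal hedged sketch using GCC/Clang extended inline assembly on x86-64 (any Linux or macOS box, compiled the usual way):

    #include <stdio.h>

    /* Add two integers in x86-64 assembly embedded in a normal C program.
       "=r"/"r" let the compiler pick registers; the "0" constraint ties
       the output to the first input so a single ADD is enough. */
    static long add_asm(long a, long b)
    {
        long result;
        __asm__("addq %2, %0"
                : "=r"(result)
                : "0"(a), "r"(b)
                : "cc");
        return result;
    }

    int main(void)
    {
        printf("%ld\n", add_asm(40, 2));  /* prints 42 */
        return 0;
    }

Once single instructions stop being scary, you can move the routine into a separate .s file and link it, which is closer in spirit to the old COM-file workflow.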
One thing to consider is that there's a whole world of 8, 16, and 32-bit microcontrollers out there being used in new products every day. They are all readily programmable in assembly language, and if you have the basics down then the learning curve isn't terribly steep.
> Can anyone speak on how it is to move from an older assembly, to a modern CPU?
A modern CPU with an orthogonal instruction set is just as easy (possibly easier, because you have more and larger registers, making it easier to get ‘real’ results) until you start to look at performance.
Then, you hit the problem of having to understand how your target CPU does out-of-order execution, how large its caches are, how they are organized, etc.
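A hedged example of the kind of effect meant here: the two loops below do the same arithmetic, but the row-major one walks memory sequentially while the column-major one strides a whole row per access; on a sufficiently large matrix the second is typically several times slower, and the exact factor depends entirely on the cache sizes and organization mentioned above.

    #include <stdio.h>
    #include <stdlib.h>

    #define N 4096

    /* Same work, different memory access order. */
    static long sum_row_major(const int *m)    /* sequential, cache friendly */
    {
        long s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += m[i * N + j];
        return s;
    }

    static long sum_col_major(const int *m)    /* strides N*sizeof(int) bytes */
    {
        long s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += m[i * N + j];
        return s;
    }

    int main(void)
    {
        int *m = calloc((size_t)N * N, sizeof *m);
        if (!m)
            return 1;
        printf("%ld %ld\n", sum_row_major(m), sum_col_major(m));
        free(m);
        return 0;
    }

Timing the two functions (e.g. with clock_gettime) is a nice first exercise in seeing the memory hierarchy from assembly-adjacent code.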
There are modern CPUs out there that have very little of that, though.