The IDE is indeed written entirely in assembly language, as is everything from the webserver up (JohnFound, author of FreshLib/Fresh IDE, also wrote a FastCGI layer to interconnect with rwasa, one of my own projects). Everything there is assembler.
By the time I lost my interest in assembly, it had pretty high-level constructs like looping, function frames, structs, etc. via macros. fasm's macros are, IIRC, recursive, so their power goes far beyond a usual assembler's. I would put it at 85% on a [regular asm .. non-optimized C] scale. By comparison, you can think of tasm/masm as Lisp with cpp in place of its macros.
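(A minimal, hypothetical sketch of what those multi-line macros look like: save/restore are made-up names, but the [arg] group repetition and the reverse directive are real fasm features.)

    ; a macro with a group argument repeats its body once per argument
    macro save [reg]
    {
        push reg
    }
    ; 'reverse' replays the body in reverse argument order
    macro restore [reg]
    {
        reverse pop reg
    }
    ; usage:
    ;   save eax ebx ecx      ; push eax / push ebx / push ecx
    ;   restore eax ebx ecx   ; pop ecx / pop ebx / pop eax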
The use case beyond educational purposes is still unclear to me, though. Especially with macros.
Actually, the only assembler whose bare macro support disappointed me is gas.
I never used FASM, being an old MS-DOS greybeard, but tasm/masm macros were quite powerful, especially after MASM 6.0.
So I never got the idea they were like cpp macros.
Regarding the educational purpose with macros, are you aware that TI has some CPUs with an Assembler that looks like C--, or that AS/400 Assembly supports objects?
Yeah, looking back I would say Turbo Pascal, Turbo C, Turbo Basic and AMOS on Amiga were the Unity of the early 90's game dev on home computers.
Anything that actually required extracting performance out of the system was straight Assembly, which is why it is ironic that new generations think that C compilers were generating fast code since day one.
I also knew a few people that did it like that, to save money on an Assembler.
I think MASM 6.0 came around the time Watcom got popular [or just known to me], and I was very disappointed in my asm skills, which lost by ~2x in both time and size on Bresenham's. I couldn't even understand what exactly Watcom did for so wow, much performance, and I finally gave up :(
So take my words with grain of salt.
>So I never got the idea they were like cpp macros.
>TI ...
My experience lies entirely in the Intel 80* range. I was born in the provincial USSR, so even a regular PC compatible was almost unobtainable until circa '95.
I can tell you that in my provincial Swedish household, a goddamn PC compatible was near unobtainable until '95. Not that I wanted a PC; I remember I dreamt that I had an Amiga, but when I woke up I still had a Z80 machine. :-)
Thanks. I found the interface kind of hard to navigate on my 24" screen too. Mainly, the location of the "Menu" is the hardest thing to figure out. The link to the source tree is there.
Oh boy, the memories from all the "IDEs" there were for NASM. In the end, it was just easier to use your favorite editor, because most of them just had a pretty color scheme for assembly. This looks pretty sweet, though.
Haha, I managed to crack a couple programs by following the +ORC cracking tutorials and a (cracked, of course) copy of SoftICE. In fact, that's how I got interested in assembly. Then I learned C and started using Linux as my operating system, so I didn't have a need for either thing anymore :/
I don't know what the use cases for that are. Nowadays, if you are coding in assembly, you are probably doing kernel things, embedding asm in C/C++ (for SIMD or something like that, not because compilers generate bad code), or writing embedded code for microcontrollers or retro computers (e.g. the ZX Spectrum).
But Visual Studio-like IDE for making x86 application software, with GUI editor seems weird.
I am using assembly language for all my programming tasks.
And most of them are application programming (Fresh IDE itself and many closed-source projects in my work) or even web programming (https://board.asm32.info).
That is why I needed a powerful IDE, suitable for rapid programming of relatively big projects (500Kloc or higher).
About twice as slow as in an HLL, with code reuse of course.
But the code is more reliable and the debugging process is much easier. After a short debugging stage, most of the projects run for years without a single bug report or other support issues.
Not to mention the significantly higher speed, lower memory footprint and better UX (especially the response time of the UI, which is really much faster).
As a whole, the advantages outweigh the disadvantages, IMHO.
Interesting. Are you sure the increase in code reliability comes down to the language and not to your skills? It seems quite contrary to my experience that a lower-level language would be more reliable.
Well, I am a pretty average programmer. Not the worst, not the best.
The code reliability of assembly programs is better because, when programming algorithms at a low level, the programmer controls every aspect of the execution. Notice that excessive use of code-generation macros cancels this advantage.
Another advantage is that bugs in assembly programs usually cause an immediate crash, and this makes fixing them easy.
Deferred crashes and strange/random/undefined behavior from bugs are rare in assembly programs. IMO, this is because of the reduced number of abstraction layers.
What about code maintainability and readability - I'm guessing that must be worse when compared to a HLL? Also, what made you get into writing complex programs in assembly - was it just the extra control? I've used assembly when I needed to optimise my C code, but it was a slow and difficult process! I would not really choose it for complex stuff, but I'm really interested to hear your point of view.
Thanks for sharing! This is really interesting, especially the part about the reduced count of abstraction layers. Do you think the abstraction layers are the problem, or the fact that the overwhelming majority of "abstractions" that materialize in modern high-level software are leaky?
Why does adding abstraction layers make programming easier?
Because they allow the programmer to not think (or even know) about some things, leaving them to the layers/libraries.
But every layer also adds a level of obscurity. The interaction between multiple layers is even more undefined and random.
It is OK while everything goes as expected. But when there are problems, the obscurity can make debugging hell.
In addition, the behavior of bugs hidden deep in the layers (or in the way the layers interact with each other and with the application) can be really weird.
That is why, IMHO, the programmer should keep the abstraction layers to the minimal count that allows solving the programming task with minimal effort, counting not only the coding time, but the debugging and support time as well.
In my practice, I decided that using FASM with Fresh IDE and a set of assembly libraries gives me the needed code quality.
This is a sad statement. Proficiency in a given language/environment dictates how long and painful a solution will be. The use cases for this are no different from any other computing-related task, though I concede it isn't for everyone :-) Neither is Python, Perl, Java, JS, or Node, and hey, let's throw in COBOL, because you know, there was never a use case for that either. /sarcasm
The UI is simply what you get with the classic win32 API. It is instantly comprehensible and workmanlike, if not contemporary chic. It is also crazy efficient; that API and the widgets and controls it provides were developed when machines had 4MB-8MB of RAM. The whole IDE probably launches in a few milliseconds and fits in L2.
I know that comment seems superficial, but when I saw the Windows 98 style GUI, I actually thought that perhaps this was an abandoned project someone was bringing up for nostalgic purposes.
The screenshots are taken in Linux: XFCE+Wine.
Fresh IDE is actually a kind of hybrid application. It works better in Linux, and with more features, than in Windows. :)
Still not GTK, though. It is too heavy for assembly language programming and would not allow portability to, for example, MenuetOS or KolibriOS, OSes written in assembly.
Nice improvement! The screenshots have much nicer font rendering (and a better font, for that matter). The fact that it uses more than the 16 colours that were available in Windows 3.1 helps a lot too.
Sure, that's likely the case and it's an impressive piece of software. But... that doesn't stop it from looking dated. Then again, maybe people used to developing in assembly (i.e. the target users) don't mind or are used to worse-looking tools (e.g. when I was doing microcontroller development, most tools were pretty bad and dated-looking; some newer ones used Eclipse/NetBeans, for better or worse).
I didn't mean it as a criticism of you or your work, and Fresh certainly looks decent; it's more that I found the name ironic given the "classic" look of the GUI. That was my first comment, at least; the second one was remarking on how many low-level tools tend to look dated, and maybe that's the norm. Again, I didn't really mean it as criticism, even though I suppose it sounds like it. :/
Could you instead call into an existing portable UI library? Perhaps FLTK, or something more lightweight than the popular ones. It just seems to me that creating a new GUI toolkit from scratch is a massive undertaking, and I'm unsure what value you will get over spending the time improving Fresh itself. I guess there's some appeal to having a self-contained assembly system through and through.
First, FLTK does not look much better than the old Windows widgets.
In addition, I strongly want Fresh IDE to be portable to MenuetOS, KolibriOS and the other OSes written in assembly. As a rule, they are all written with FASM, and a good IDE that can be ported in days (not years) can be a great tool for the OS developers.
That is why I started the development of a special GUI toolkit.
RE: Portability -
Not sure how far you'll be able to get by gcc -S'ing something like nuklear[1] (cross-platform ANSI C89) but it might save you some time.
I don't have much HLL asm/demoscene experience personally so I'm not sure what's "impressive" as engineering feats these days but this looks cool. As someone who aspires to see a viable Smalltalk-like runtime self-modifiable introspective debugger at the OS level with a decent layer of POSIX compatibility and the ability to run AVX512 instructions, I like the idea that tools like this are out there. Cheers, mate
> RE: Portability - Not sure how far you'll be able to get by gcc -S'ing something like nuklear (cross-platform ANSI C89) but it might save you some time.
The big problem with using "gcc -S" is that the result is still an HLL program, simply written out as an assembly language listing.
Humans write assembly code very differently from HLL code. Even translated to asm notation, this difference persists. An asm programmer will choose different algorithms, different data structures, a different architecture for the program.
Actually, this is why, on real-world tasks, regardless of the great compiler quality, the assembly programmer will always write a faster program than the HLL programmer.
Another effect is that, in most cases, a deeply optimized asm program is still more readable and maintainable than a deeply optimized HLL program.
In this regard, some early optimization in assembly programming is acceptable and even good for the code quality.
> As someone who aspires to see a viable Smalltalk-like runtime self-modifiable introspective debugger at the OS level
That's an interesting pile of keywords you've got there.
I don't know about Smalltalk (I find Squeak, Pharo, etc. utterly incomprehensible - I have no idea what to do with them), but for some time I've been fascinated with the idea of a fundamentally mutable and even self-modifying environment. My favorite optimization would be that, in the case of tight loops with tons of if()s and other conditional logic, the language could JIT-_rearrange_ the code to nop out the if()s and other logic just before the tight loop is entered - or, even better, gather up the parts of the code that will actually be executed and dump all of it somewhere contiguous.
C compilers could probably be made to do this too, but that would break things like W^X and also squarely violate lots of expectations as well.
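(A rough, hypothetical 32-bit sketch of the nop-patching variant, in fasm syntax; it assumes the code page has already been made writable, e.g. via mprotect, which is exactly the W^X tension above:)

    check:
            cmp     dword [flag], 0
    branch:
            jz      .skip           ; 2-byte short jump we may want to disable
            ; ... rarely needed work ...
    .skip:
            ; ... tight loop body ...

    disable_branch:
            ; overwrite the 2-byte 'jz short' with two 1-byte NOPs (0x90),
            ; so the hot path falls straight through from now on
            mov     word [branch], 0x9090
            ret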
For a VM, RE: code rearrangement, you're effectively describing dynamic DCE if I understand you correctly, CLR does this (and lots more)[2].
At the low-level programmer level, there's nothing stopping a (weakly) static language like C from adopting that behavior[3] at runtime [i.e. with a completely bit-for-bit identical, statically linked executable].
At the compiler level, you've got Ken Thompson's seminal Turing Award lecture, which does exactly that[4].
At the processor level, you heuristically have branch prediction as a critical part of any pipeline. (I think modern Intel processors as of the Haswell era assign each control flow point a total of 4 bits which just LSL/LSR to count the branch taken/not taken. (Don't quote me on that)).
RE: Smalltalk - for me, the power of the platform's mutability was revealed when I started using Cincom. When I was using GNU implementations ~10 years ago, they felt like toys at the time (though I hear things have largely improved). If you've ever used Ruby, a simple analogy would be the whole "you can (ab)use the hell out of things like #Method_Missing to create your own DSLs". This lends a lot of flexibility to the language (at the expense of performance and typing guarantees). In a Smalltalk environment, you get that sort of extensibility + static typing guarantees + the dynamic ability to recover from faults in a fashion you want.
Imagine an environment[5] that has that structure intrinsically + the performance of being able to use all them fancy XMM/YMM registers for numerical analysis + a ring0 SoftICE-type debugger. Turtles all the way down, baby.
=====
[1] See ISL-TAGE from CBP3 and other, more modern reports from "Championship Branch Prediction" (if it's still being run).
[2] https://stackoverflow.com/a/8874314 Here's how it's done with the CLR. The JVM is crazy good so I'd imagine the analogue exists there as well.
[5] Use some micro-kernel OS architecture so process $foo can't alter $critical-driver-talking-to-SATA-devices or modify malloc. I'd probably co-opt QNX's Neutrino design, since it's tried and true. Plus, that sort of architecture has the design benefit of intrinsically safe high availability integrated into the network stack.
> For a VM, RE: code rearrangement, you're effectively describing dynamic DCE if I understand you correctly, CLR does this (and lots more)[2].
You mean Dynamic Code Evolution?
Regarding [2], branch prediction hinting being unnecessary (as well as statically storing n.length in `for (...; n.length; ...)`) is very neat. I like that. :D
> At the low-level programmer level, there's nothing stopping a (weakly) static language like C from adopting that behavior[3] at runtime [i.e. with a completely bit-for-bit identical, statically linked executable].
Right. The only problem is people's expectation for C to remain static. Early implementations of such a system may cause glitches due to these expectations being shattered, and result in people a) thinking it won't work or b) thinking the implementation is incompetent. I strongly suspect that the collective masses would probably refuse to use it citing "it's not Rust, it's not safe." Hmph.
> At the compiler level, you've got Ken Thompson's seminal Turing Award lecture, which does exactly that[4].
> And it is "almost" impossible to detect because TheKenThompsonHack easily propagates into the binaries of all the inspectors, debuggers, disassemblers, and dumpers a programmer would use to try to detect it. And defeats them. Unless you're coding in binary, or you're using tools compiled before the KTH was installed, you simply have no access to an uncompromised tool.
...Nn..n-no, I don't quite think it can actually work in practice like that. What Coding Machines made me realize was that for such an attack to be possible, the hack would need to have local intelligence.
> There are no C compilers out there that don't use yacc and lex. But again, the really frightening thing is via linkers and below this hack can propagate transparently across languages and language generations. In the case of cross compilers it can leap across whole architectures. It may be that the paranoiac rapacity of the hack is the reason KT didn't put any finer point on such implications in his speech ...
Again, with the intelligence thing. The amount of logic needed to be able to dance around like that would be REALLY, REALLY HARD to hide.
Reflections on Trusting Trust didn't provide concrete code to alter /usr/bin/cc or /bin/login, only abstract theory, discussion and philosophy. It would have been interesting to be able to observe how the code was written.
I don't think it's truly possible to make a program that can propagate to the extent that it traverses hardware and even (in the case of Coding Machines) affects routers, etc.
> At the processor level, you heuristically have branch prediction as a critical part of any pipeline. (I think modern Intel processors as of the Haswell era assign each control flow point a total of 4 bits which just LSL/LSR to count the branch taken/not taken. (Don't quote me on that)).
Oh ok.
> RE: Smalltalk - for me, the power of the platform's mutability was revealed when I started using Cincom.
Okay, I just clicked my way through to get the ISO and MSI (must say the way the site offers the downloads is very nice). Haven't tested whether Wine likes them yet, hopefully it does.
> When I was using GNU implementations ~10 years ago, they felt like toys at the time (though I hear things have largely improved).
Right.
> If you've ever used Ruby, a simple analogy would be the whole "you can (ab)use the hell out of things like #Method_Missing to create your own DSLs".
Ruby is (heh) also on my todo list, but I did recently play with the new JavaScript Proxy object, which basically makes it easy to do things like that.
> This lends a lot of flexibility to the language (at the expense of performance and typing guarantees).
Mmm. More work for JITs...
> In a Smalltalk environment, you get that sort of extensibility + static typing guarantees + the dynamic ability to recover from faults in a fashion you want.
Very interesting, particularly fault recovery.
> Imagine an environment[5] that has that structure intrinsically + the performance of being able to use all them fancy XMM/YMM registers for numerical analysis + a ring0 SoftICE-type debugger. Turtles all the way down, baby.
oooo :)
Okay, okay, I'll be looking at Cincom ST pretty soon, heh.
FWIW, while Smalltalk is a bit over my head (it's mostly the semantic-browser UI, which is specifically what completely throws me), I strongly resonate with a lot of the ideas in it, particularly message passing, about which I have some Big Ideas™ I hope to play with at some point. I keep QNX 4.5 and 6.5.0 (the ones with Photon!) running in QEMU and VNC into them when I'm bored.
Oh, also - searching for DCE found me Dynamic Code Evolution, a fork of the HotSpot VM that allows for runtime code re-evaluation - ie, live reload, without JVM restart. If only that were mainstream and open source. It's awesome.
I'd say it is actually a stylized version of the latest UI conventions: the icon row could be an interpretation of the MS Office 'ribbon'. I can't tell from the screen shots whether or not it has the old Borland C++ IDE green check mark and red X. That would be dated, no question. The Windows 3.1 / Presentation Manager look would be dated.
There's actually a CSS property just for this called `shape-outside` [0].
It lets you define a shape for an image (or other element) so that, when the element is floated, other content wraps up against that shape correctly.
[1] is an example I just quickly made to show how the linked page could have been done in straight CSS. It works a bit more nicely, too, as the text wraps smoothly instead of stepping like in the linked article (although there is no reason the two methods can't be combined: smooth wrapping where supported, falling back to their stepped approximation where it's not).
Its browser support is pretty awful right now (only Chrome, Safari with the `-webkit-` prefix, and basic support in Firefox behind a flag), but if it makes it to standardization, it's a pretty neat tool to be able to reach for in these cases.
If there were an easy way to tell whether the website itself is open source, I'd try to contribute this as a quick patch, but I can't find the site's source anywhere.
Hahah, proper Aussie mate! (there's a compile-time flag to make them all boring instead of our homage to Aussie slang haha, cheers and glad you like it)
Hi! I thought the response string was rwasa's fault :)
Question. I've just started to become interested in learning/messing around with assembly language under Linux, and fasm seems like a really attractive option - as and nasm are both tied to gcc (nasm indirectly), and fasm skips all that (and produces slightly smaller binaries too!). fasm also compiles itself in less than a second on my really old machine, and fast iteration time is one of my favorite features.
Linux-specific assembly-language documentation is kind of rare on the ground, though; for fasm in particular, it's literally hens' teeth and dust bunnies. It's very possible to piece things together, but if you have absolutely no idea what you're doing, it's a bit intimidating.
HeavyThing is practically a tutorial in and of itself, but I must admit my hesitancy to lean too heavily on it due to its use of GPLv3. I certainly respect the use of that license (and understand the many reasons it might be used for such a unique project), but I'm only tinkering around at this point, so I'd likely want to use MIT, CC0 or similar, and I'd feel a bit conflicted if the predominant thing I learned from were GPLv3.
HT is on the list for sure, but I wanted to combine it with other sources of info. You seem to be one of the few people out there actively using fasm for Linux development, so I figured it couldn't hurt to ask if you had any other high-level suggestions.
Start simple and hook fasm in with a "normal" gcc/g++ project... I wrote a page[0] ages ago on integrating C/C++ with the HeavyThing library, and a good portion of it has nothing to do with my library specifically; it's a great starting point for messing around with assembler on a Linux platform. The only other pointer is the "call PLT" format for calling externally linked functions from inside your fasm objects, but that's the only tricky bit IMO.
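(A minimal, untested sketch of what that looks like - the file names are made up, and it assumes fasm's plt operator for ELF64 object output; since the example uses absolute addresses, link with -no-pie:)

    ; hello.asm - a fasm ELF64 object that calls libc through the PLT
    format ELF64
    section '.text' executable
    public main
    extrn puts
    main:
            mov     rdi, msg        ; first argument goes in rdi (System V ABI)
            call    plt puts        ; the "call PLT" form for external functions
            xor     eax, eax        ; return 0
            ret
    section '.data' writeable
    msg db 'hello from fasm', 0

Assemble and link with something like: fasm hello.asm hello.o && gcc -no-pie hello.o -o hello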
The official fasm tarball comes with a few hello-world examples. One shows you how to say hello with libc, the other with the kernel (and there are x86 and amd64 versions of both).
I've had a look at the examples that come with fasm, which are invaluable.
But I completely forgot to point out (I knew I was forgetting something!) in my previous comment that I'm actually looking for info on 32-bit assembly programming. My motivation comes from the fact that a) a lot of my systems are 32-bit (such as the ThinkPad T43 I'm using to type this), due to circumstances I cannot change; and b) because (as I discovered to my delight) a program written for i386 and statically linked (eg, by fasm's ELF writer) will run on x86_64 without 32-bit glibc/userspace/anything! This makes perfect sense, but is an absolute winner for me for the kinds of things I'm going to want to make.
So x86_64 is in the "it would be monumentally stupid not to learn it" category, and I'm looking forward to doing so, but I'd have to do some seriously inelegant wrangling (something like qemu-x86_64 + 64-bit userspace - on a 32-bit machine, lolol) to actually work with it at this point.
amd64 is just an extension of x86. I took the (64-bit) syscall numbers from a kernel header that I can't find now, so you can take them from that header you found.
The C calling convention for x86 is to push everything on the stack (and use call, which is short for "push instruction pointer and jmp", ret being the reverse), while the Linux kernel uses a variant of fastcall (i.e. put stuff in registers, then use int 0x80).
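(To make that concrete, a minimal, untested sketch of the kernel convention in 32-bit fasm - a standalone static ELF, so, per the earlier comment, it also runs unchanged under an x86_64 kernel:)

    ; hello32.asm - static 32-bit Linux executable, fasm syntax
    format ELF executable 3
    entry start
    segment readable executable
    start:
            mov     eax, 4          ; sys_write
            mov     ebx, 1          ; fd 1 = stdout
            mov     ecx, msg        ; buffer
            mov     edx, msg_len    ; length
            int     0x80            ; arguments in registers, not on the stack
            mov     eax, 1          ; sys_exit
            xor     ebx, ebx        ; exit status 0
            int     0x80
    segment readable
    msg db 'hello', 10
    msg_len = $ - msg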
When I was learning, I found a lot of x86 examples and tutorials (and a book, can't remember the name (it's free)), but not much on amd64.
Just play with it; it'll get easy once you get over the wall.
With "normal" C calling convention you have to care about the stack pointer (esp) (i think it's the callee's responsibility (of the called function)), maybe even the bottom pointer (ebp) (i remember the wikipedia page on calling conventions explains it). The other difference between x86 and amd64 is floating point math, where sse is the default on amd64 and x87 on x86 (x87 works on stacks of numbers, the reverse-polish using a stack way IIRC).
Useful tools: objdump -d ("-M intel" for Intel notation), strace to trace system calls, and fdbg, since GDB can't make sense of a valid ELF file. You can also join the flat assembler and/or nasm forums. I like fasm better than nasm for no strong reason, but nasm is a bit easier.
That was my experience too. I hear "she'll be right" fairly often, but I hadn't heard "she'll be apples" before. I couldn't imagine any country but Australia coming up with a phrase like that though.