Fresh IDE (flatassembler.net)
282 points by mutin-sa on Sept 19, 2017 | hide | past | favorite | 81 comments


Just curious if they wrote the IDE in assembly (my instincts say not, but they do make an assembler). The login is giving a 404: https://fresh.flatassembler.net/fossil/repo/fresh/fossil/rep...


The IDE is indeed written entirely in assembly language, as is everything from the webserver up (JohnFound, the author of FreshLib/Fresh IDE, also wrote a FastCGI layer to interconnect with rwasa, my own creation). Everything there is assembler.


This is not the assembly we remember from the TASM/MASM days, though. It seems to include quite a few high-level constructs.


fasm was in fact modelled after early TASM, and much of the "high level constructs" are just macros... or did you mean something more specific?


By the time I lost interest in assembly, it had pretty high-level constructs like looping, function frames, structs, etc. via macros. fasm's macros are, IIRC, recursive, so it is far more powerful than the usual assembler. I would put it at 85% on a [regular asm .. non-optimized C] scale. You can think of tasm/masm as Lisp with cpp in place of its macros.

The use case beyond educational purposes is still unclear to me, though. Especially with macros.


Actually, the only assembler whose bare macro support disappointed me is gas.

I never used FASM, being an old MS-DOS greybeard, but tasm/masm macros were quite powerful, especially after MASM 6.0.

So I never got the idea they were like cpp macros.

Regarding the educational purpose with macros, are you aware that TI has some CPUs with an assembler that looks like C--, or that AS/400 assembly supports objects?


Back in the day, there was a bit of a hierarchy amongst home computer users, with Amiga assembly programmers deriding x86 syntax.

I also knew a few people who nominally programmed in Turbo Pascal, but whose code was 70% inline assembly...

And weirdly enough, a few marooned Acorn Archimedes/RiscPC programmers waxing poetic about their ARMs.

(And if there ever was some niche of a "MenuetOS"-like OS, it would probably be for the Raspberry Pi)


Yeah, looking back I would say Turbo Pascal, Turbo C, Turbo Basic and AMOS on Amiga were the Unity of the early 90's game dev on home computers.

Anything that actually required extracting performance out of the system was straight assembly, which is why it is ironic that new generations think that C compilers were generating fast code since day one.

I also knew a few people that did it like that, to save money on an Assembler.


I think MASM 6.0 was around the time Watcom became popular [or just known to me], and I was very disappointed that my asm skills lost ~2x in both time and size for Bresenham's. I couldn't even understand what exactly Watcom did for such wow, much performance, and I finally gave up :(

So take my words with a grain of salt.

> So I never got the idea they were like cpp macros.

> TI ...

My experience lies completely in the Intel 80* range. I was born in the provincial USSR, so even a regular PC compatible was almost unobtainable until circa '95.


I can tell you that in my provincial Swedish household, a goddamn PC compatible was near unobtainable until '95. Not that I wanted a PC; I remember dreaming that I had an Amiga, but when I woke up I still had a Z80 machine. :-)


I liked the MASM 6 macros for some reason. It was blasphemy to others at the time though.


The other MASM macrohead chiming in


You can always reduce the use of macros or make them more assembly-centric. Check the Fresh IDE sources for an example of moderate macro use.

On the other hand, the definition of complex data structures is much easier with a powerful macro engine.


Sorry, bad links on the front page. Fixed now.

Also, there is a popup menu at the left with the navigation links. The repository interface is not very mobile-friendly, though.

Thanks for the report!


Thanks. I found the interface kind of hard to navigate on my 24" screen too. Mainly, the location of the "Menu" is the hardest thing to figure out. The link to the source tree is there.

If anyone else is wondering, here's the direct link to repo browser: https://fresh.flatassembler.net/fossil/repo/fresh/dir?ci=tip...


Oh boy, the memories from all the "IDEs" there were for NASM. In the end, it was just easier to use your favorite editor, because most of them just had a pretty color scheme for assembly. This looks pretty sweet, though.


As a kid I remember those days. I spent like $100 because the IDE was going to help me crack all the gamez. Welp, that was a sorry waste of money.


Haha, I managed to crack a couple programs by following the +ORC cracking tutorials and a (cracked, of course) copy of SoftICE. In fact, that's how I got interested in assembly. Then I learned C and started using Linux as my operating system, so I didn't have a need for either thing anymore :/


I like the example project that compiles a source file into a Mandelbrot image.


I don't know what the use cases for that are. Nowadays, if you are coding in assembly, you are probably doing kernel work, embedding asm in C/C++ (for SIMD or something like that, not because compilers generate bad code), or writing embedded code for microcontrollers or retro computers (e.g. the ZX Spectrum).

But Visual Studio-like IDE for making x86 application software, with GUI editor seems weird.


I am using assembly language for all my programming tasks.

And most of them are application programming (Fresh IDE itself and many closed-source projects at my work) or even web programming (https://board.asm32.info).

That is why I needed a powerful IDE, suitable for rapid programming of relatively big projects (500Kloc or higher).


This is interesting. Do you not find the dev process significantly slower than using a higher level language?


About twice as slow as in an HLL, with code reuse of course.

But the code is more reliable and the debugging process is much easier. After a short debugging stage, most of the projects run for years without a single bug report or other support issue.

And that is not to mention the significantly higher speed, lower memory footprint and better UX (especially the response time of the UI, which is really much faster).

As a whole, the advantages outweigh the disadvantages IMHO.


Interesting. Are you sure the increase in code reliability comes down to the language and not your skills? It runs quite contrary to my experience that a lower-level language would be more reliable.


Well, I am a pretty average programmer. Not the worst, not the best.

The code reliability of assembly programs is better because, programming algorithms at a low level, the programmer controls every aspect of the execution. Notice that excessive use of code-generation macros will cancel this advantage.

Another advantage is that bugs in assembly programs usually cause an immediate crash of the program, and this makes fixing them easy.

Deferred crashes and strange/random/undefined bug behavior are rare in assembly programs. IMO, this is because of the reduced number of abstraction layers.


What about code maintainability and readability - I'm guessing that must be worse when compared to a HLL? Also, what made you get into writing complex programs in assembly - was it just the extra control? I've used assembly when I needed to optimise my C code, but it was a slow and difficult process! I would not really choose it for complex stuff, but I'm really interested to hear your point of view.


Code maintainability and readability depend only on the programmer's knowledge of the language/framework/libraries used.

For example, I don't know Lisp, so for me it is much harder to read/maintain Lisp project than assembly language project.


Thanks for sharing! This is really interesting, especially the part about the reduced count of abstraction layers. Do you think the abstraction layers are the problem, or the fact that the overwhelming majority of "abstractions" that materialize in modern high-level software are leaky?


Why does adding abstraction layers make programming easier?

Because they allow the programmer not to think (or even know) about some things, leaving them to the layer/libraries.

But every layer also adds a level of obscurity. The interaction between multiple layers is even more undefined and random.

It is OK while everything goes as expected. But when there are problems, the obscurity can make debugging a hell.

In addition, the behavior of bugs hidden deep in the layers (or in the way the layers interact with each other and with the application) can be really weird.

That is why, IMHO, the programmer should keep the abstraction layers to the minimal count that allows solving the programming task with minimal effort, counting not only the coding time, but the debugging and support time as well.

In my practice, I decided that using FASM with Fresh IDE and set of assembly libraries gives me the needed quality of the code.


This is a sad statement. Proficiency in a given language/environment dictates how long and painful a solution will be. Use cases for this are no different to any computing-related task, though I concede it isn't for everyone :-) Neither is Python, Perl, Java, JS, Node, and hey, lets throw in COBOL because you know, there was never a use case for that either. /sarcasm


Assembly IDEs are still quite common in the embedded space, and TASM back in the day had a Turbo Vision based IDE on MS-DOS.

And on the Amiga we had DevPac.


With so many high-level constructs, I also feel it looks quite different from the TASM days.


It would make a better first impression if your hero screenshot didn't have font rendering from circa 1995.


Yes, it does not look like the website of yet another Node.js framework or adtech startup. A different software culture has a different website aesthetic.

And yet it looks better than average "modern" website. Fonts are of adequate size, contrast is high, no "carousels", no videos.


The UI is simply what you get with the classic win32 API. It is instantly comprehensible and workmanlike, if not contemporary chic. It is also crazy efficient; that API and the widgets and controls it provides were developed when machines had 4MB-8MB of RAM. The whole IDE probably launches in a few milliseconds and fits in L2.

Cool stuff.


I know that comment seems superficial, but when I saw the Windows 98 style GUI, I actually thought that perhaps this was an abandoned project someone was bringing up for nostalgic purposes.


The screenshots are taken in Linux: XFCE+Wine. Fresh IDE is actually some kind of hybrid application. It works in Linux better and with more features than in Windows. :)


I know Windows is a lost cause, but Unix apps don't have to be ugly like that ;-)

And even Windows can use GTK.


I am working on v3.0 that will use its own portable GUI toolkit (in assembly language) with much prettier UI. On this page you can see some preliminary experimental screenshots: https://fresh.flatassembler.net/index.cgi?page=content/artic...

Still not GTK though. It is too heavy for assembly language programming and would not allow portability to, for example, MenuetOS or KolibriOS, OSes written in assembly.


Nice improvement! The screenshots have much nicer font rendering (and a better font, for that matter). The fact that it uses more than the 16 colours that were available in Windows 3.1 helps a lot too.


> in assembly language

That's pretty impressive.


That looks cool as hell!


We have been conditioned to prefer style over substance.


Yeah, I find it funny that a project called Fresh has such a dated looking GUI style :D

It does look like an interesting project though, if you can ignore the fugly widget styles.


The entire thing is written in FASM assembler. So maybe they thought a normal GUI toolkit is too heavyweight for asm.


Sure, that's likely the case, and it's an impressive piece of software. But that doesn't stop it from looking dated. Then again, maybe people used to developing in assembly (i.e. the target users) don't mind or are used to worse-looking tools (e.g. when I was doing microcontroller development, most tools were pretty bad and dated-looking; some newer ones used Eclipse/NetBeans, for better or worse).


Well, as I said several posts above, I am working on a new, assembly language, portable GUI toolkit that to be used for v3.x series of Fresh IDE.

Unfortunately it will add another 50..100kB to the code, but portability has a price. :(


I didn't mean it as criticism of you or your work, and Fresh certainly looks decent; more that I found the name ironic given the "classic" look of the GUI. My first comment at least; the second one was commenting on how many low-level tools tend to look dated and maybe it's the norm. Again, I didn't really mean it as criticism even though I suppose it sounds like it. :/

Could you instead call into an existing portable UI library? Perhaps FLTK or something more lightweight than the popular ones. It just seems to me that creating a new GUI toolkit from scratch is a massive undertaking and I'm unsure what value you will get over spending the time improving fresh itself? I guess there's some appeal to having a self contained assembly system through and through.


No offense taken. :)

First, FLTK does not look much better than the old Windows widgets.

In addition, I strongly want Fresh IDE to be portable to MenuetOS, KolibriOS and other assembly-written OSes. As a rule, they are all written with FASM, and a good IDE that can be ported in days (not years) can be a great tool for the OS developers.

That is why I started the development of special GUI toolkit.


RE: Portability - Not sure how far you'll be able to get by gcc -S'ing something like nuklear[1] (cross-platform ANSI C89) but it might save you some time.

I don't have much HLL asm/demoscene experience personally so I'm not sure what's "impressive" as engineering feats these days but this looks cool. As someone who aspires to see a viable Smalltalk-like runtime self-modifiable introspective debugger at the OS level with a decent layer of POSIX compatibility and the ability to run AVX512 instructions, I like the idea that tools like this are out there. Cheers, mate

[1] https://github.com/vurtun/nuklear


> RE: Portability - Not sure how far you'll be able to get by gcc -S'ing something like nuklear (cross-platform ANSI C89) but it might save you some time.

The big problem with using "gcc -S" is that as a result you have an HLL program, simply written out as an assembly language listing.

Humans write assembly code very differently than HLL code. Even translated to asm notation, this difference persists. An asm programmer will choose different algorithms, different data structures, a different architecture for the program.

Actually, this is why on real-world tasks, regardless of the great quality of compilers, the assembly programmer will always write a faster program than the HLL programmer.

Another effect is that in most cases a deeply optimized asm program is still more readable and maintainable than a deeply optimized HLL program.

In this regard, some early optimizations in assembly programming are acceptable and even good for the code quality.


> As someone who aspires to see a viable Smalltalk-like runtime self-modifiable introspective debugger at the OS level

That's an interesting pile of keywords you've got there.

I don't know about Smalltalk (I find Squeak, Pharo, etc utterly incomprehensible - I have no idea what to do with them), but for some time I've been fascinated with the idea of a fundamentally mutable and even self-modifying environment. My favorite optimization would be that, in the case of tight loops with tons of if()s and other types of conditional logic, the language could JIT-_rearrange_ the code to nop the if()s and other logic just before the tight loop was entered - or even better, gather up the parts of code that will be executed and dump all of it somewhere contiguous.

C compilers could probably be made to do this too, but that would break things like W^X and also squarely violate lots of expectations as well.


This is sort of implemented in various forms.

For a VM, RE: code rearrangement, you're effectively describing dynamic DCE if I understand you correctly, CLR does this (and lots more)[2].

At the low-level programmer level, there's nothing stopping a (weakly) static language like C from adopting that behavior[3] at runtime [i.e. with a completely bit-for-bit identical, statically linked executable which].

At the compiler level, you've got the seminal Turing Award by Ken Thompson that does it at compiler level[4].

At the processor level, you heuristically have branch prediction as a critical part of any pipeline. (I think modern Intel processors as of the Haswell era assign each control flow point a total of 4 bits which just LSL/LSR to count the branch taken/not taken. (Don't quote me on that)).

RE: Smalltalk - for me, the power of the platform's mutability was revealed when I started using Cincom. When I was using GNU implementations ~10 years ago, they felt like toys at the time (though I hear things have largely improved). If you've ever used Ruby, a simple analogy would be the whole "you can (ab)use the hell out of things like #method_missing to create your own DSLs". This lends a lot of flexibility to the language (at the expense of performance and typing guarantees). In a Smalltalk environment, you get that sort of extensibility + static typing guarantees + the dynamic ability to recover from faults in the fashion you want.

Imagine an environment[5] that has that structured intrinsically + the performance of being able to use all them fancy XMM/YMM registers for numerical analysis + a ring0 SoftICE-type debugger. Turtles all the way down, baby.

=====

[1] See ISL-TAGE of CBP3 and other, more modern reports from "Championship Branch Prediction" (if it's still being run).

[2] https://stackoverflow.com/a/8874314 Here's how it's done with the CLR. The JVM is crazy good so I'd imagine the analogue exists there as well.

[3] https://en.wikipedia.org/wiki/Polymorphic_code

[4] http://wiki.c2.com/?TheKenThompsonHack

[5] Use some micro-kernel OS architecture so process $foo won't alter $critical-driver-talking-to-SATA-devices or modifying malloc. I'd probably co-opt QNXs Neutrino designs since it's tried and true. Plus that sort of architecture has the design benefit of intrinsically safe high-availability integrated into the network stack.


> This is sort of implemented in various forms.

> For a VM, RE: code rearrangement, you're effectively describing dynamic DCE if I understand you correctly, CLR does this (and lots more)[2].

You mean Dynamic Code Evolution?

Regarding [2], branch prediction hinting being unnecessary (as well as statically storing n.length in `for (...; n.length; ...)`) is very neat. I like that. :D

> At the low-level programmer level, there's nothing stopping a (weakly) static language like C from adopting that behavior[3] at runtime [i.e. with a completely bit-for-bit identical, statically linked executable which].

Right. The only problem is people's expectation for C to remain static. Early implementations of such a system may cause glitches due to these expectations being shattered, and result in people a) thinking it won't work or b) thinking the implementation is incompetent. I strongly suspect that the collective masses would probably refuse to use it citing "it's not Rust, it's not safe." Hmph.

> At the compiler level, you've got the seminal Turing Award by Ken Thompson that does it at compiler level[4].

That c2 article very strongly reminded me of https://www.teamten.com/lawrence/writings/coding-machines/ - particularly the theoretical nature of the idea.

For example,

> And it is "almost" impossible to detect because TheKenThompsonHack easily propagates into the binaries of all the inspectors, debuggers, disassemblers, and dumpers a programmer would use to try to detect it. And defeats them. Unless you're coding in binary, or you're using tools compiled before the KTH was installed, you simply have no access to an uncompromised tool.

...Nn..n-no, I don't quite think it can actually work in practice like that. What Coding Machines made me realize was that for such an attack to be possible, the hack would need to have local intelligence.

> There are no C compilers out there that don't use yacc and lex. But again, the really frightening thing is via linkers and below this hack can propagate transparently across languages and language generations. In the case of cross compilers it can leap across whole architectures. It may be that the paranoiac rapacity of the hack is the reason KT didn't put any finer point on such implications in his speech ...

Again, with the intelligence thing. The amount of logic needed to be able to dance around like that would be REALLY, REALLY HARD to hide.

Reflections on Trusting Trust didn't provide concrete code to alter /usr/bin/cc or /bin/login, only abstract theory, discussion and philosophy. It would have been interesting to be able to observe how the code was written.

I don't truly think that it's possible to make a program that can truly propagate to an extent that it can traverse hardware and even (in the case of Coding Machines) affect routers, etc.

> At the processor level, you heuristically have branch prediction as a critical part of any pipeline. (I think modern Intel processors as of the Haswell era assign each control flow point a total of 4 bits which just LSL/LSR to count the branch taken/not taken. (Don't quote me on that)).

Oh ok.

> RE: Smalltalk - for me, the power of the platform's mutability was revealed when I started using Cincom.

Okay, I just clicked my way through to get the ISO and MSI (must say the way the site offers the downloads is very nice). Haven't tested whether Wine likes them yet, hopefully it does.

> When I was using GNU implementations ~10 years ago, they felt like toys at the time (though I hear things have largely improved).

Right.

> If you've ever used Ruby, a simple analogy would be the whole "you can (ab)use the hell out of things like #Method_Missing to create your own DSLs".

Ruby is (heh) also on my todo list, but I did recently play with the new JavaScript Proxy object, which basically makes it easy to do things like

  var curcfg = {}, defcfg = { ... };
  var cfg = new Proxy({}, {
    get: (_, key) => {
      return (key in curcfg) ? curcfg[key] : defcfg[key];
    },
    set: (_, key, val) => {
      curcfg[key] = val;
      return true;
    }
  });
implementing default parameters, overlays, etc.

> This lends of a lot of flexibility to the language (at the expense of performance, typing guarantees).

Mmm. More work for JITs...

> In a Smalltalk environment, you get that sort of extensibility + static typing guarantees + the dynamic ability to recover from faults in a fashion you want.

Very interesting, particularly fault recovery.

> Imagine an environment[5] that has that structured instrinsically + the performance of being able to use all them fancy XMM/YMM registers for numerical analysis + a ring0 SoftICE type debugger. Turtles all the way down, baby.

oooo :)

Okay, okay, I'll be looking at Cincom ST pretty soon, heh.

FWIW, while Smalltalk is a bit over my head (it's mostly the semantic-browser UI, which is specifically what completely throws me), I strongly resonate with a lot of the ideas in it, particularly message passing, about which I have some Big Ideas™ I hope to play with at some point. I keep QNX 4.5 and 6.5.0 (the ones with Photon!) running in QEMU and VNC to them when I'm bored.

Oh, also - searching for DCE found me Dynamic Code Evolution, a fork of the HotSpot VM that allows for runtime code re-evaluation - ie, live reload, without JVM restart. If only that were mainstream and open source. It's awesome.


I'd say it is actually a stylized version of the latest UI conventions: the icon row could be an interpretation of the MS Office 'ribbon'. I can't tell from the screen shots whether or not it has the old Borland C++ IDE green check mark and red X. That would be dated, no question. The Windows 3.1 / Presentation Manager look would be dated.


Or if the site was usable on mobile.


Rather, tell that to the vendor of your preferred mobile browser.


On a side note, check out how they have laid out the curved screenshot.

It is apparently a stack of images. That helps set the text to follow the contour of the curve.


There's actually a CSS property just for this called `shape-outside` [0].

It lets you define a shape for an image (or other element) so that when it is floated, other elements can wrap up against it correctly.

[1] is an example I just quickly made to show how the linked page could have been done in straight CSS. It works a bit more nicely too, as the text wraps smoothly instead of stepping the way the linked article does (although there is no reason why both methods can't be combined: smooth wrapping where possible, with a fallback to the approximation they used where it's not supported).

Its browser support is pretty awful right now (only Chrome, Safari with the `-webkit` prefix, and basic support in Firefox behind a flag), but if it makes it to standardization, it's a pretty neat tool to be able to reach for in these cases.

If there were an easier way to see if the website itself was open source, I'd try and give it as a quick patch, but it doesn't look like the website itself is open source anywhere that I can find.

[0] https://developer.mozilla.org/en-US/docs/Web/CSS/shape-outsi...

[1] https://jsfiddle.net/c1ffdpgq/2/


Your example doesn't work in Firefox (text doesn't wrap properly), but it does in Chrome


On desktop firefox you need to enable a flag in the browser for it to start working (at least that's what MDN says, I didn't try it)


It does work on latest Firefox for Android. Text curves well.


That's a clever solution! There's an experimental way to do this in CSS - but it's not well supported yet:

https://developer.mozilla.org/en-US/docs/Web/CSS/shape-outsi...

And likely this can be done with SVG as well.


That brings me back. I remember Dreamweaver or Frontpage used to do this automatically in the late 90's, I was blown away and used it everywhere.

That, and image maps.


The HTTP response code for fresh.flatassembler.net assets is "200 She'll be apples", I didn't know 200s could be customised like that :)


The status message is just informational and can be anything. It's actually removed from HTTP/2.

I've seen some crappy clients expect exact strings though.


I know the fasm forum is powered by rwasa, perhaps fresh. is too.

https://2ton.com.au/rwasa/ is based on https://2ton.com.au/HeavyThing/, a library written for fasm that handles a bunch of cool things including enough crypto to do TLS and enough network code to make a webserver.


Hahah, proper Aussie mate! (there's a compile-time flag to make them all boring instead of our homage to Aussie slang haha, cheers and glad you like it)


Hi! I thought the response string was rwasa's fault :)

Question. I've just started to become interested in learning/messing around with assembly language under Linux, and fasm seems like a really attractive option - as and nasm are both tied to gcc (nasm indirectly), and fasm skips all that (and produces slightly smaller binaries too!). fasm also compiles itself in less than a second on my really old machine, and fast iteration time is one of my favorite features.

Linux-specific assembly-language documentation is kind of rare on the ground though; for fasm in particular, there's literally hens' teeth and dust bunnies. It's very possible to piece things together, but if you have absolutely no idea what you're doing it's a bit intimidating.

HeavyThing is practically a tutorial in and of itself, but I must admit my hesitancy to lean too heavily on it due to its use of GPLv3. I certainly respect the use of that license (and understand the many reasons it might be used for such a unique project), but I'm only tinkering around myself at this point so would likely want to use MIT, CC0 or similar, and would feel a bit conflicted about the predominant thing I learned from being from GPLv3.

HT is on the list for sure, but I wanted to combine it with other sources of info. You seem to be one of the few people out there actively using fasm for Linux development, so I figured it couldn't hurt to ask if you had any other high-level suggestions.


Start simple and hook fasm in with a "normal" gcc/g++ project... I wrote a page[0] ages ago on integrating C/C++ with the HeavyThing library and a good portion of that has nothing to do with my library specifically and is a great starting point to mess around with assembler on a Linux platform. The only other pointer is the "call PLT" format for calling externally linked functions from inside your fasm objects but that is the only tricky bit IMO.

https://2ton.com.au/rants_and_musings/gcc_integration.html


That makes a lot of sense.

I could combine this with viewing gcc's assembly output, as well!

Thanks :D

Edit: This page is ridiculously comprehensive. Wow.


The official fasm tar comes with a few hello-world examples. One shows you how to say hello with libc, another with the kernel (and there are x86 and amd64 versions of both).

http://chamilo2.grenet.fr/inp/courses/ENSIMAG3MM1LDB/documen... is the official spec for the amd64 calling convention (aka ABI) on unices. http://www.logix.cz/michal/devel/amd64-regs/ is a nice table showing what goes in what register (still amd64) when calling a C library or the kernel. The x86 library calling convention is just putting everything on the stack, while for the kernel convention... I don't remember (int 80, and the syscall number in eax, but the arguments...). There's a syscall table https://filippo.io/linux-syscall-table/ and I made a fasm include https://pastebin.com/nnrMVF8u (amd64; I made it from the kernel source headers, which I can't find now).

That's about all there is that's Linux-specific.


Hmm, interesting.

I've had a look at the examples that come with fasm, which are invaluable.

But I completely forgot to point out (I knew I was forgetting something!) in my previous comment that I'm actually looking for info on 32-bit assembly programming. My motivation comes from the fact that a) a lot of my systems are 32-bit (such as the ThinkPad T43 I'm using to type this), due to circumstances I cannot change; and b) because (as I discovered to my delight) a program written for i386 and statically linked (eg, by fasm's ELF writer) will run on x86_64 without 32-bit glibc/userspace/anything! This makes perfect sense, but is an absolute winner for me for the kinds of things I'm going to want to make.

So x86_64 is in the "it would be monumentally stupid not to learn it" category, and I'm looking forward to doing so, but I'd have to do some seriously inelegant wrangling (something like qemu-x86_64 + 64-bit userspace - on a 32-bit machine, lolol) to actually work with it at this point.

The syscall table you made is very similar to HeavyThing's, heh. I've actually been researching precisely that of late; you most likely generated your copy from https://github.com/torvalds/linux/blob/master/arch/x86/entry.... I of course want https://github.com/torvalds/linux/blob/master/arch/x86/entry....


amd64 is just an extension of x86. I took the (64-bit) syscalls from a kernel header that I can't find now, so you can take them from that header you found.

The C calling convention for x86 is to push everything on the stack (and use call, which is short for "push instruction pointer and jmp", ret being the reverse), while the Linux kernel uses a variant of fastcall (i.e. put stuff in registers, then use int 80).

When I was learning I found a lot of x86 examples and tutorials (and a book, can't remember the name (it's free)), and not much on amd64.

Just play with it; it'll get easier once you get over the wall.

With the "normal" C calling convention you have to care about the stack pointer (esp) (in cdecl it's the caller's responsibility to clean up the arguments), and maybe the base pointer (ebp) (I remember the Wikipedia page on calling conventions explains it). The other difference between x86 and amd64 is floating-point math, where SSE is the default on amd64 and x87 on x86 (x87 works on a stack of numbers, the reverse-Polish way, IIRC).

Useful tools are: objdump -d ("-M intel" for Intel notation), strace to trace system calls, and fdbg, since GDB can't make sense of fasm's otherwise valid ELF output. You can also join the flat assembler and/or nasm forums. I like fasm better than nasm for no strong reason, but nasm is a bit easier.

glhf


I'm an Aussie too :) I didn't expect to see slang popping up in Chrome DevTools though haha.


I'm also an Aussie, born in '91, and I have never heard that expression before. The "she'll be" made me wonder though!


That was my experience too. I hear "she'll be right" fairly often, but I hadn't heard "she'll be apples" before. I couldn't imagine any country but Australia coming up with a phrase like that though.


According to the wiktionary[1] it is "rhyming slang" from:

    apples and spice = nice
[1] https://en.wiktionary.org/wiki/she%27ll_be_apples


For the complete list, look for occurrences of "HTTP/1.1" in the source:

https://2ton.com.au/library_as_html/webserver.inc.html


Is there a book that teaches how to use assembly with Fresh IDE?


Well, no, there is no "book" in the real meaning of this word.

But some documentation is available in the "Documentation" section of the web site and inside the "Help|Help file" (Ctrl+F1) menu in the IDE itself.

There are small example and template projects as well.

Also the FASM forum is a good place to ask: https://board.flatassembler.net



