Since we are joking: given that the MOV instruction exists on many CPUs, could the input to this compiler be considered the much-needed "portable assembly language"?
The following can be adapted to provide other information: whatever procfs provides. It is a rough equivalent of "pgrep -fl .|less". Work in progress. I don't know whether Linux grep has an "-a" option.
#! /bin/sh
# Almquist clone, not Bash
# page through every process's /proc/PID/cmdline, roughly "pgrep -fl .|less";
# an optional pattern argument narrows the list
case $# in
0)
exec grep -a . /proc/[0-9]*/cmdline \
|exec tr '\000' '\040' \
|exec sed '
/grep -a .* \/proc/d;
#parent: '"$$"';
s/\/proc\// /;
s/\/cmdline:/ /;
' \
|exec less
;;
*)
exec grep -a . /proc/[0-9]*/cmdline \
|exec tr '\000' '\040' \
|exec grep "$@" \
|exec sed '
/grep '"$@"'/d;
s/\/proc\// /;
s/\/cmdline:/ /;
' \
|exec less
esac
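For example, saved as "psg" (the name is arbitrary):

psg        # no argument: page through every process's cmdline
psg sshd   # one argument: show only the cmdlines matching sshd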
You don't need any of those `exec`s. Yes, GNU grep has `-a`.
If you don't like the backslash-newline-pipe sequence: put the pipe at the end of the previous line and you don't need the backslash, though it's then less obvious that the next line is operating on the output of the previous one.
Multi-line arguments to sed can be a pain (good luck getting your editor to auto-indent them). Instead, you can use -e to specify multiple sed commands.
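For instance, the no-argument branch with trailing pipes and one -e per sed command (same commands as above, minus the execs):

grep -a . /proc/[0-9]*/cmdline |
tr '\000' '\040' |
sed -e '/grep -a .* \/proc/d' -e 's/\/proc\// /' -e 's/\/cmdline:/ /' |
less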
There's nothing in there that would make it not work in bash. That should work in any Bourne-family shell.
It only handles 0 or 1 arguments correctly, not > 1.
Multiple arguments could be added if you want that. I personally do not need it, as I search for the cmdline patterns I need without using spaces; I use dots instead. Quick and dirty.
I write hundreds of these small scripts for my own use only, so I have my own style, peculiar as it may be. I rarely need indentation because I always keep scripts short; I use it only occasionally, and not systematically.
I do not use -e with sed, unless I'm using branches or loops.
The execs seem superfluous but actually make a difference, at least on the UNIX I use. Try it with and without and see if you notice.
All my scripts are portable to Bash, but they're also portable to the most basic of Bourne-compatible shells too. I do not use Bash.
What shell are you using? I could see a naive shell forking twice without exec, but I don't know of a shell that does that. Exec just says "don't fork(2) before calling exec(3)", but inside a pipeline like that, it shouldn't fork again anyway. I have tried it with and without.
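To make the semantics concrete (a minimal sketch; the command and file are invented):

#! /bin/sh
# without exec, the shell forks a child, the child execs grep,
# and the parent shell waits for it: two processes.
# with exec, the shell process itself is replaced by grep: one process.
exec grep pattern /tmp/somefile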
Worse is that Google tries to stop scraping. It's like they don't want anyone to see past the first page of results.
They could scrape your website, and then they prevent you from scraping your own data back.
The whole process is silly; it reflects the duct tape and chicken wire nature of the www.
No one should have to "scrape" or "crawl".
Data should be put into an open universal format (no tags) and submitted when necessary (rsynced) to a public-access archive, mirrored around the world.
This would bridge the gap until we reach a more content-addressable system (cf. location-based).
Clients (text readers, media players, whatever) can download and transform the universally formatted data into markup, binary, etc. -- whatever they wish. All the design creativity and complexity of "web pages" or "web apps" can be handled at the network edge, client-side.
"Crawling" should not be necessary.
No one should have to store HTML tags and other window dressing for data.
To give an example, there is a lot of free open source software mirrored all over the internet, mostly on ftp servers, but also on http, rsync, etc.
If you use Linux or BSD you probably are using some of this software. If you use the www, then you are probably accessing computers that use this software. If you drive a new Mercedes you are probably using some of this software. There are a lot of copies of this code in a lot of places.
Is that centralized? Does anyone hosting a mirror ("repository") "own" the software? Is it the same person or entity hosting every mirror?
Compare Google's copies of everyone else's data, also replicated in a lot of places around the world. Who "owns" this data?
From the Rob Pike quote at the end, after the Ryan Dahl quote:
"... the UNIX/POSIX/Linux systems of today are messier, clumsier and more complex than the systems the original UNIX was designed to replace."
"Messier, clumsier and more complex" are adjectives that could describe almost all of today's software vis a vis software from the 1970's. This is not a criticism of today's software it is just the evolution (or devolution) as it happened, an objective observation.
By and large, programmers do not attempt to make software cleaner, more efficient or less complex. Most do not spend their time cleaning up messy code, fixing bugs, or sacrificing usability or an abundance of "features" for the sake of efficiency. And almost none spend time removing code and reducing complexity.
They do the opposite: add features, pursue "ease of use" at the expense of common sense, and incessantly generate and commit code, believing that any decline in commits or "software updates" signals a project that is "dead", not "modern" and probably in need of a "replacement". Again, I'm not critiquing this; I am just stating the facts. This is what they do.
Not sure about Pike, but the reason I think some older software is higher quality than most newer software is not because it was or is high quality in an objective sense. It's because today's software is so low in quality and in too many cases worse than yesterday's. Indeed, it's "messier, clumsier and more complex."
In an objective sense, 1970's UNIX is nothing to celebrate. But compared to the then-alternatives, what came afterwards, and what we use today, it can be held in high regard. It's only good in a relative sense. Everything else was and still is so bad. (Why is anyone's guess.)
Avoiding the hassles Dahl alludes to[1] brings me a certain feeling of satisfaction. My language of choice is Bourne shell. And if I am just working with text, such as reading and writing, I do not use a graphics layer - no X11.
The question is: Does Pike's comment apply to Plan 9?
1. Not to mention avoiding the needless complexity and obscurity of Microsoft Windows.
There's nothing minimalistic about Qemu. It's an enormous, resource-intensive program to build. And it's not even easy to run in text mode (no X11). A "Qemu-lite" is sorely needed.
You mean can I run it without having X11 running? Yes, of course, with -nographic. How do you think most major VPS providers run qemu? They certainly don't run their hosts with X11 going.
And yes, it runs on not-Linux, but obviously the main selling point of qemu is the access it provides to KVM, which is an interface provided by the Linux kernel. You can do software emulation of a number of architectures on a number of operating systems with qemu.
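Concretely, something along these lines works (the image name is invented; -nographic multiplexes the guest's serial port onto stdio, so the guest must be set up for a serial console, e.g. console=ttyS0 on Linux):

qemu-system-x86_64 -enable-kvm -m 1024 \
    -nographic \
    -drive file=guest.img,format=qcow2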
Are you saying you yourself do this? You're using BSD, running in VGA textmode, from which you run qemu with -nographic to load an image of another OS? You yourself are doing this?
Most of "major VPS providers" I know use Linux, not BSD. (Yes, there are some exceptions but the reliance on Linux is almost universal.)
I would love it if you were right and I am wrong, but I seriously doubt you have tested this recently. I will happily give Qemu another go if you have tested this recently and it worked.
So you are admitting you have not tested this and you are just going by what you read.
When was the last time you used any "smaller Unices"?
Anyway, even if the problems with -nographic on x86 have been fixed on the other, non-FreeBSD BSDs (and I have my doubts), my comment that compiling qemu has become a substantial undertaking still stands. I suspect you've probably never even tried compiling it yourself, but that will not stop you from commenting.
I compile Bochs statically on low-powered computers with no problems. I cannot say the same for Qemu. Even if you're not using X11, they expect you to have the X11 libraries at compile time. As with most programs that grow so large, there are more than a few dependencies one could do without.
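For what it's worth, qemu's configure script does let you switch the graphical front ends off; a sketch (flag names from the versions I have seen; check ./configure --help on yours):

./configure --target-list=x86_64-softmmu \
    --disable-gtk --disable-sdl --disable-opengl
make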
It's not minimalistic in its capabilities or implementation, but it is minimalistic in its interface. It's a tool that emulates various hardware and provides a CLI that's no more complex than necessary to operate its conceptually narrow functionality.
I was referring to how long it takes to build from source and how much RAM and CPU it uses while building. The interface is fine. I love Qemu. Bellard writes great software. My early experiences with it were very good. But it has grown to become a whale of a program. I had to switch to Bochs.
Your comment focuses on how fast the compiler generates lines of assembly, after the programmer has constructed a solution.
But unless I have misunderstood, the author was focusing on how fast the programmer can construct a solution.
The final step of depressing keys to input the solution into the computer can be sped up by using libraries and perhaps macros.
It's also possible to write programs ("code generators") that generate lines of assembly based on a "template". And that "template" need not be written in C or any other popular computer language.
It could be some "DSL", a shorthand for assembly, that the programmer creates for himself.
djb's qhasm might be considered an example.
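As a toy sketch of the idea (the shorthand is invented and looks nothing like qhasm's actual syntax), a few lines of awk can expand "dst = src1 + src2" lines into AT&T-style x86-64:

#! /bin/sh
# expand lines like "rax = rbx + rcx" into a mov/add pair;
# anything that doesn't match the shorthand passes through untouched
awk '$2 == "=" && $4 == "+" {
    printf "mov %%%s, %%%s\n", $3, $1
    printf "add %%%s, %%%s\n", $5, $1
    next
}
{ print }'

Fed "rax = rbx + rcx", it prints "mov %rbx, %rax" then "add %rcx, %rax".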
While it may be generally true that C requires less key punching than assembly (certainly it requires fewer "lines"), this does not mean that constructing a solution in C necessarily comes faster than constructing one in assembly, at least not if the programmer is familiar with and efficient at working in assembly.