One particularly mnemonic collection of switches is 'plane':
perl -plane 'my $script'
which iterates over all files given on the command-line (or stdin) and
+ (p)rints every processed line back out
+ deals with (l)ine endings, in and out
+ (a)utosplits every line into @F
I am aware that -n and -p are mutually exclusive, but as -p overrides -n, it seems simpler to just keep 'plane' in mind and drop the 'p' if necessary.
The Texinfo version happened a couple of decades ago, in the early days of the Web, and let people on modest computers who couldn't run a Web browser work through SICP on their screens (no need for the expense of printing to paper) while they also ran a Scheme interpreter on the same modest computer. The work was done by Lytha Ayth from the original freely-available HTML version of the book.
Later on, now that everyone has more powerful computers, I've heard someone took the Texinfo source, replaced the ASCII-art illustrations with real ones, and ran it through TeX to produce printable, "camera-ready" PDF output that looked similar to the original print book from MIT Press.
I wasn't involved in that much more recent TeX work, and though it was kind of them to preserve the version number with my name in it, I'll ask them to please remove it. (The name was part of some kind of distributed version-tracking scheme that Lytha Ayth proposed, when this seemed to be in the spirit of the original HTML release of the book. I tried to follow versioning instructions when I made changes to the Texinfo source, not knowing my name would show up 20 years later in a very different thing. :)
Thank you for the pointer and, more so, your contributions.
That screenshot of SICP in Emacs -- running side-by-side with the built-in Guile interpreter -- induces peculiar sensations. An echo of how things could've been and possibly still are in some obscure(d) corners of the Net. An interactive learning environment that at least points in the right direction. It certainly looks elegant and somewhat inspirational to me (though my inner Alan Kay is voicing some profound objections ;).
In any case: you carried that torch for a while, so don't hesitate to accept apparently undue credit -- there's too little of it, in any case, to warrant worry. ;)
Regarding the Emacs screenshot, here's another, from an early attempt to make Emacs more off-the-shelf usable for Scheme programming: https://www.neilvandyke.org/quack/
An actually better experience in Emacs (and part of what got me psyched to learn Lisps) is for Emacs Lisp programming: with a properly configured/installed Emacs, you can be browsing the documentation with rapid navigation, bringing up hypertext docstrings from your code, with links to the source code (perhaps the source code of Emacs itself), evaluating code that affects your running environment from both REPL and editor, etc. It's different from the Smalltalk-80 environment (which I also used, and wrote a little Smalltalk browser for), but there is some overlap. Modern IDEs let you do some of that, and some other things, but sometimes not as well, and Emacs people have had this for a few decades.
':back' is mapped to 'H' -- in correspondence to the usual Vi/Vim paradigm to move the cursor with the home-row keys, hjkl. ':open' is naturally mapped to 'o' and drops you into a tab-completable shell. ':tabopen' -> 't' and so on.
The key part, as in Vim, is /not/ the mnemonic, highly effective shortcuts.
Rather, it's the modal workflow that Vim and its spiritual descendants bring to the table.
I'll leave it at that (sounding like a damn preacher already).
[0] Now that I've looked at it, it becomes clear that they're selling rather directly to Vim-acolytes. Pity, perhaps.
I wrote a program I fancifully called 'Human Unit Tests' to aid me in my studies (learning a diverse set of constants for biophysics). I can very much attest to the effectiveness of spaced repetition.
But, /boy/, do you need to stay on the ball. You can't really afford a cavalier, let's-see attitude with this (given any non-trivial amount of items-to-be-memorized).
The review process needs to be as much part of a daily routine as workouts ... Yeah.
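The ever-increasing intervals such a routine revolves around are simple to sketch. Here's a toy scheduler in the spirit of the SM-2 family (the constants -- 2.5 starting ease, the +/- adjustments, the 1.3 floor -- are invented for illustration; real systems like Anki tune them per item):

```python
# Simplified spaced-repetition scheduler: each successful review stretches
# the interval by the current "ease" factor; a lapse resets it.
def next_interval(days: float, ease: float, remembered: bool) -> tuple[float, float]:
    """Return (new_interval_in_days, new_ease) after one review."""
    if remembered:
        return days * ease, min(ease + 0.1, 3.0)
    return 1.0, max(ease - 0.2, 1.3)  # lapse: reset interval, lower the ease

days, ease = 1.0, 2.5
for review in range(1, 6):
    days, ease = next_interval(days, ease, remembered=True)
    print(f"review {review}: next in {days:.1f} days")
```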
On the other hand, there's one reward that doesn't usually get mentioned (as in the fine article re-submitted here[0]): the strengthening of corollary knowledge (or coordinate terms, for the linguistically inclined).
Suppose you're reading a biography of Huygens. You may find yourself
inspired to memorize a few of the basic facts therein. Dutifully, you
feed his life's dates, his major acquaintances and maybe a few places
of importance into the SR system of your choice. You are committed
and keep repeating those facts in ever-increasing intervals.
After a few years a random conversation touches upon the very subject.
To your delight you discover that you are able to hold forth on
Huygens, the man and his time.
To your surprise (and this is my contention [and experience]), you
also find yourself able to speak with some level of accuracy about
tangential matter -- eg. the theories he worked on -- without ever
having either added related facts to the database or dealt with the
subject matter in the intervening years.
In other words: recall of a whole web of interconnected pieces of
knowledge may be strengthened considerably by spaced repetition of
just a few of the central facts.
In my experience there's no specific 'encoding' procedure necessary.
I never put any thought into carefully selecting facts for the spaced
repetition treatment, yet the effect usually manifested itself. So,
yes, I would say it's a 'recall' phenomenon inasmuch as the brain
does all the heavy lifting.
Fascinating, thanks; this is a new term for me, but it strikes a chord.
It also fits nicely with the limited understanding we have of the recall of information in our brains - it all comes down to context and activating the right network (or paths) which can only be reached by activating related/overlapping networks. So once you activate memory on a specific issue you can more easily activate related information (or even do so without intention as you describe). Having more easily reachable 'access points' (strongly encoded and thus well connected information) makes it then easier to access related information.
A corollary is that in order to remember information it's important to connect it to previous well established memories (eg "how does this new concept fit my own experience").
I simply cannot wrap my head around the direction of the Unicode discourse.
We're discussing the appropriate code-point for different smiley faces,
obscure electrical symbols[0] or, in the present case, half stars to express
film or book ratings, yet we have no complete set of sub- and superscripts!
Am I mistaken in thinking it odd, that there's a complete Klingon alphabet but no
representation whatsoever for most Greek or Latin subscripts? Or what if, heaven forbid,
I'd want to use a 'b' index/subscript? Tough! Not even the "Phonetic Extensions" block,
where subscript-i comes from, provides it.
Surely there are one or two actual scientists on the Unicode consortium?
Or even the one odd soul still sporting a notion of consistency who finds it
only logical to provide a "subscript b" if there's a "subscript a"?
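The gap is easy to verify programmatically -- for instance, by querying Python's unicodedata by character name (names per the Unicode standard):

```python
import unicodedata

# Unicode defines LATIN SUBSCRIPT SMALL LETTER A/E/O/X (U+2090..U+2093),
# subscript i (U+1D62) and a handful more -- but no subscript 'b' at all.
for ch in "abeiox":
    name = f"LATIN SUBSCRIPT SMALL LETTER {ch.upper()}"
    try:
        print(ch, "->", repr(unicodedata.lookup(name)))
    except KeyError:
        print(ch, "-> no such code point")
```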
Unicode is not known for its consistency in dealing with these issues. The original idea behind Unicode was to be able to represent every then-extant character set with perfect fidelity (i.e., go from X to Unicode and back, and you should get the same data). Why are there letters like U+212B Angstrom sign (not to be confused with U+00C5 Latin capital A with ring above) or things like half-width and full-width characters? Because they were present in Shift-JIS, not because of any coherent notion of what constitutes a glyph. Han unification was driven more by the need to keep from blowing a space budget than by actual rationalization of whether or not the scripts deserved separate spaces.
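The round-trip duplicates are at least canonically equivalent, which normalization exposes; a quick check in Python:

```python
import unicodedata

angstrom_sign = "\u212B"  # ANGSTROM SIGN, kept only for legacy round-trip fidelity
a_with_ring   = "\u00C5"  # LATIN CAPITAL LETTER A WITH RING ABOVE

print(angstrom_sign == a_with_ring)        # False: distinct code points
# NFC normalization folds the legacy duplicate into the canonical letter.
print(unicodedata.normalize("NFC", angstrom_sign) == a_with_ring)  # True
```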
Note that Klingon isn't in Unicode (it was explicitly rejected by the UTC, with a vote of 9 in favor of the rejection proposal, 0 against it, and 1 abstaining). Tengwar and Cirth, though, are actually considered serious proposals for Unicode, just really, really low priority compared to, say, Mayan script (for which the first proposal should be going live in 2017). Mayan script is interesting in its own right because it's the script (well, of the ones I'm aware of) that most challenges normal conventions on what constitutes letters and glyphs.
ISTM a great deal of trouble and complication could have been prevented by three special types of NBSP that meant "sub", "super", and "back to normal". It's true that some glyphs will be special-cased by some fonts, but in general the glyph is just shrunk and translated when sub- or super-scripted.
I disagree. In math there can be super-super-superscripts, as with tetration representations
https://en.wikipedia.org/wiki/Tetration . Does each get its own character, and when does it end?
In science, consider an isotope like

    180m
        Ta
     73

This cannot be represented as a sequence of symbols because that would give:

    180m              180m
        Ta    -or-        Ta
          73            73
Markup is how Wikipedia represents it correctly.
In addition, pretty much anything can go in superscripts, including 2^א and integral equations. The most general solution is to have a "start superscript" and "end superscript" marker, with the ability to embed superscripts, but that still doesn't solve the isotope representation problem.
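As an aside, markup systems do handle the isotope case; in LaTeX, for instance (a sketch -- note that the naive form left-aligns both pre-scripts, which is exactly why chemistry packages exist):

```latex
% Naive stacked pre-scripts: both scripts attach to an empty group, so
% they come out left-aligned with each other rather than right-aligned
% against the Ta as chemistry convention wants.
${}^{180m}_{73}\mathrm{Ta}$

% The mhchem package gets the alignment right:
% \usepackage[version=4]{mhchem} ... \ce{^{180m}_{73}Ta}
```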
> The most general solution is to have a "start superscript" and "end superscript" marker, with the ability to embed superscripts, but that still doesn't solve the isotope representation problem.
Couldn't one have something like a "start zero-width superscript" marker, so that the following subscript would not be offset?
> Couldn't one have something like a "start zero-width superscript" marker, so that the following subscript would not be offset?
Well, the problem is that the subscript and superscript are both aligned with the following regular text, so you really need (for the isotope representation) a "start right-aligned zero-width superscript" marker and a "start right-aligned zero-width subscript" marker (though zero-width isn't exactly right, since they should have width; it's just that only the wider of the super- and sub-script in a pair should be used in spacing the text) -- there might be other notation that also needs left-aligned versions -- plus generic start/end superscript markers that have normal width flow, plus appropriate end markers.
It's not surprising that an offhand suggestion doesn't magically solve all problems, but I appreciate your taking the time carefully to explain what's missing. Thanks!
The author states, as regards the interpretation of the
Dunning-Kruger diagrams, that
[i]n two of the four cases, there’s an obvious positive correlation between
perceived skill and actual skill, which is the opposite of the pop-sci
conception of Dunning-Kruger.
In my corner of the universe, you don't get to cherry-pick which pieces
of data (i.e., "what instances of two sets of random variables") you bestow
the golden twig of correlation upon. If I'm not entirely mistaken,
correlation is very much a global feature, not a measure of proximity of
two points on a chart.
So, yes, Dunning-Kruger (as evinced from the diagrams sported here) indeed seems to make a weaker claim: that there's no
correlation between “perceived ability” and “actual ability”. As such,
this claim is as far from the "pop-sci conception" of Dunning-Kruger as
it is from the author's.
The referenced graphs measure performance and perceived ability on 4 different tasks. You're right that ideally you'd pool this data to get to an overall correlation, but for the point the author makes, eyeballing it and taking a mental average does the trick, no?
Also, what corner of the universe are you from? Loess regression, hierarchical modeling, conditional analyses... methods for finding "non-global" correlations aplenty.
Err, no. There is clearly a pattern to the data, and drawing the conclusion that self-assessment and actual skill are uncorrelated is not what you should do (the simplified pattern being that unskilled individuals overestimate their skill, while skilled individuals underestimate theirs -- a regression to the mean, if you will).
At any rate, even if we don't take that into account, self-assessment is worth plenty, as demonstrated by many studies which manage to get coherent data from self-assessments. Sure, you should take them with a grain of salt, and you can expect biases, but there's no need to throw them out.
As a European, I have not been able to convince family members or friends
that aren't intimately acquainted with the US-American situation that there
is no universal, legislative framework for paid or unpaid maternity leave.
They usually respond with a variation of
"this can't be right; you must be misinformed;
it would be horrible if that were the case."
And you (as a populace) aren't even fighting for it! Not visibly, at least. So, honestly, you
seem to be the one hurting all sorts of causes with this misdirected
attitude of apologism.
I see [0] that it mandates 12 weeks of unpaid leave total -- for pregnancy, childbirth, and the first few weeks of caring for the baby -- which is not hugely generous; but it is something.
Note the requirements, which exclude a lot of people including employees of small businesses: "In order to be eligible for FMLA leave, an employee must have been at the business at least 12 months, and worked at least 1,250 hours over the past 12 months, and work at a location where the company employs 50 or more employees within 75 miles."
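The quoted eligibility thresholds are mechanical enough to express directly; a hedged sketch (function name and parameters invented, and the real statute has further nuances this ignores):

```python
# Hypothetical encoding of the FMLA eligibility rules quoted above;
# illustration only, not legal advice.
def fmla_eligible(months_with_employer: int,
                  hours_last_12_months: int,
                  employees_within_75_miles: int) -> bool:
    return (months_with_employer >= 12
            and hours_last_12_months >= 1250
            and employees_within_75_miles >= 50)

print(fmla_eligible(18, 1400, 120))  # True
print(fmla_eligible(18, 1400, 30))   # False: worksite too small
```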
When my sister developed health complications caused by her pregnancy, she had to quit her job. She had no other choice. But even if she hadn't quit, she would not have been eligible for FMLA. Now she is going through a divorce, and since she has no income, I am helping to pay her living expenses. My family is definitely feeling the ripple effects of our inadequate maternity leave laws in the U.S. right now.
It may be best to explain to your relatives that the federal US government was mostly intended to provide for a common defense and a few other common things, and the states were supposed to be the real movers and shakers. It appears that things may be similar in the EU, where the individual decisions are left up to the member nations:
...In recent times, the states have seemingly ceded a lot of their power to the federal government. Maybe someday there will be an Article V convention and some of that will be reversed.
At the simplest level, because we the people design laws that allow the company to exist. Without our consent, these invented human things called companies would have no legal standing. So, because we say it should be so.
We support a common police force because it makes sense that government has a monopoly on force. If private security forces did the job of police, that person or persons employing the private force would be a society unto themselves, not a part of the same society as the rest of us, and when conflict developed between their police and the common police (or the police of a 3rd party -- another private security force), then we would be at war. In other words, you don't understand why everyone in society supports the police.
I largely agree now -- in fact I think that women and men (since it's the 21st century and all types of people go on maternity leave) should be able to go on paid maternity leave. The leave will be funded by all corporations for social good -- no point in limiting it to the one the person worked for, since a person on maternity leave contributes equally to all of them. Paid maternity leave is a human right, like clean drinking water and fast internet, and I demand corporations provide us with it!
Most developed countries (the USA being the most notable exception) have some form of paid paternity leave as well, ranging from just a few days, to the same level as maternity leave: https://en.wikipedia.org/wiki/Parental_leave
Where the state provides some or all of the financial assistance for parental leave, one could argue that it is already funded by all corporations, as it would be derived from tax income.
Why should the company provide maternity benefits and not the state? I don't think it's fair for some three person startup to go bankrupt because employee #1 got pregnant and continues to draw down salary while staying home.
If the person is adding value while not on maternity (or paternity) leave then it would be worthwhile to pay them while they are gone so they come back and continue working for you. If you don't do it, some other employer probably will, and they may choose to work for the employer with better benefits instead.
It is astonishing to me how widespread fishwife's-tale-level
conceptions about fundamental aspects of our existence are.
I devoutly hope that you do not, upon contemplation, equate well-versedness in general knowledge with mindless memorization.
Is emergent behaviour of neural networks really that alien a concept?
Is it possible to believe, in all earnestness, that factoids such as these remain
isolated and inactive in your memory until recalled?
These questions aren't there to test your ability to learn atomic
facts without rhyme or reason. These questions, pitiful as they
may seem, try to probe the breadth of your mental landscape.
You might be right about testing the breadth of one's knowledge, but, for me, this test reminds me of the Mensa tests in Reader's Digest and other magazines from years past. The Mensa tests seemed like they were designed to be just easy enough to get several right answers, thereby piquing the interest of the test taker. Maybe you are smart enough to be in Mensa! Maybe you are smart enough to work at Edison! This test strikes me as 1920s era gamification.
Well, actually, according to Edison's defense of his test as linked to in the OP, he did try to gauge one's ability for rote memorization, and he didn't care about whether people knew about things beyond their immediate job. His 'theory' (highly flawed, I think) is that one needs excellent memory to be able to make decisions now, without needing to take the time to research them.
>His 'theory' (highly flawed, I think) is that one needs excellent memory to be able to make decisions now, without needing to take the time to research them.
I don't see any flaw in the theory. When you code, for example, if you don't know in advance about several idioms, data structures, algorithms, etc., that's (most of the time) not something you will make up for later by researching and changing your program. It's simply something that will take you down a narrower path and constrain your programming.
I'm not talking about knowing the details of algorithm X, or how to implement it from memory. But if you don't even know of its existence, it won't be an algorithm you'll consider when you write your program.
Same thing applies to programming interviews I guess. If you know the minutiae off by heart, then you will be able to make decisions and proceed with your coding immediately rather than take a diversion to research the details. Obviously being able to quickly research is also a very useful skill, but I think it's reasonable to expect a certain level of 'memorized knowledge' from a professional programmer.
Knowledge of the factoids Edison regarded as important is not the same as breadth of knowledge. There is far too much to know to capture it in a short test like this.