Hacker News | damck's comments

Which West? The one that has legalized corruption under the name of lobbying? Or the West that has internally elected its chair of governance, without the pesky public's say?


For me it was mostly video games. I have vivid memories of teachers being astounded by how much I already knew as early as elementary school. Though it was funny that I knew what "sword" meant while other, more common words eluded me.


Vim should be learned by blood, sweat, and tears. Or you could just curse the random box you ssh into for not having nano.


> Vim should be learned by blood, sweat, and tears

Or learned via NetHack.

To be honest, I know vi and emacs, and curse fresh Linux installs for having some weird editor with completely foreign control keys. :) Dunno if it's nano or something else, but it's mildly annoying to me. I tend to just uninstall it rather than muck around with the "alternatives" system.


Try OpenBSD then.

The default editor is mg, which is basically a MicroEMACS reincarnation.


The default editor is vi.


Ed is the standard text editor.


True that. But these games sometimes help you with discoverability, in the sense that you might not otherwise know there is a way to do X at all.


Writing modern safe C++ isn't really the hassle everyone makes it out to be. Besides smart pointers, Clang's sanitizers go a long way. I did try to pitch Rust at my corp, but the aforementioned safety checks are considered enough against the overhead of learning a new language, and I agree. Personally, I don't like the Rc and Box syntax that's required to get even the simplest homebrew linked list going; C++'s metaprogramming hacks rival it.

I wish the stigma against "unsafe" C++ were a bit more rational. The people who use it aren't fresh out of bootcamps, and most of them understand the gains and risks. But maybe I'm skewed by my job, which uses C++ and takes such risks seriously.
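For anyone curious what the ceremony looks like, here is a minimal sketch of a singly linked list in safe Rust, using Box for ownership (names are illustrative; a doubly linked list is where the real Rc/RefCell pain starts):

```rust
// A minimal singly linked list in safe Rust: each node owns the next via Box.
struct Node {
    value: i32,
    next: Option<Box<Node>>,
}

struct List {
    head: Option<Box<Node>>,
}

impl List {
    fn new() -> Self {
        List { head: None }
    }

    // Push onto the front, taking ownership of the old head.
    fn push(&mut self, value: i32) {
        let node = Box::new(Node { value, next: self.head.take() });
        self.head = Some(node);
    }

    // Pop from the front, moving the next node (if any) into head.
    fn pop(&mut self) -> Option<i32> {
        self.head.take().map(|node| {
            self.head = node.next;
            node.value
        })
    }
}
```

The `Option<Box<Node>>` plus `take()` dance is exactly the syntax overhead being complained about; in C++ the equivalent is a plain owning pointer assignment.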


Seriously, what is this fascination with linked lists?

In comparison to array-based lists they're:

- less memory-efficient,
- unable to offer random access,
- worse for cache locality (so they can be up to orders of magnitude slower), and
- more complex.

They are nice for learning some principles in the context of an Intro to FP course, but apart from that, meh.


The linked list is just a lowest-common-denominator example. I think he and others (and I, when I bring it up) mean mutually linked data structures.

Almost any kind of data structure in Rust is extremely painful to implement efficiently. You either go the unsafe route or you drown in a sea of boxes and cells.

On Reddit recently somebody made the ludicrous claim that you shouldn't have to write your own data structures in Rust: the standard library should have everything you need.


On the real, large projects I have worked on for years (Firefox, rr, Pernosco) in C++ and Rust I have spent negligible time writing container data structures. Of course I create data structures, but almost always by combining hashtables, arrays and smart pointers and occasionally something more exotic from a library.

It's unfortunate that a lot of programming education has people implement data structures from scratch. It gives the false impression that that's what programming is largely about.


Maybe the project should have implemented more from scratch instead of cobbling together some Frankenstein data structure (and Firefox wouldn't be such a massive memory hog with poor performance)?

I guess it really depends on your job, skill level, and mentality. While I do use a lot of off-the-shelf pieces, their relationships don't always fit neatly, and shoehorning them in can cause performance issues. (I'm not going to pay for a double indirection when I can avoid it entirely.)

But then again, I think this cookie-cutter approach to software is poor craftsmanship and often results in bloated, slow code that is way larger than it needs to be. I want to write something better than everybody else, not just make the same paint-by-numbers piece everybody else does.


I have a PhD in computer science from CMU, I have published many academic papers, and I was a distinguished engineer at Mozilla. The issue isn't skill level.

Randomly lashing out at Firefox is silly, especially at this time when it's getting so much praise for performance compared to Chrome. Firefox does indeed contain some complex, micro-optimized data structures for its core data (e.g. the CSS fragment tree and the DOM). It's just that it also contains a lot more code besides.

You wouldn't use an off-the-shelf hashtable to implement the mapping from a DOM node to its attributes. You should use an off-the-shelf hashtable to track, say, the set of images a document is currently loading. Like any kind of optimization, you optimize your data structures where it matters and you write simple, maintainable code everywhere else.


Slow down there, turbo. Nobody said anything about your skill (although a PhD doesn't necessarily mean a talented developer; some of the worst code I've seen has come from CS PhDs, some of whom only understand the highest polynomial in big-O but forget the other factors). And nobody cares a cent about you getting whatever award from Mozilla.

Nobody said anything about optimizing in inappropriate areas (honestly, where did you get that from?). This entire thread started because somebody didn't understand why people often use linked lists as an example of something difficult in Rust.

> Of course I create data structures, but almost always by combining hashtables, arrays and smart pointers and occasionally something more exotic from a library.

But that does scream "I don't really do a lot of performance-oriented work". That you can somehow cobble together an apple out of a banana and a cat, probably using a metric ton of boxes and refcounts (that exist just to get around the borrow checker), doesn't surprise me if you're willing to make the readability and performance sacrifices.


> Maybe the project should have implemented more from scratch instead of cobbling together some Frankenstein data structure (and Firefox wouldn't be such a massive memory hog with poor performance)?

Sorry, but what is that supposed to mean? Have you looked at Chromium's (or any other modern browser's) memory usage? Firefox is modest compared to it, and always has been. Maybe it's not due to the browser engineers' low skill level, but due to the enormous complexity of the modern web? It's a separate operating system on top of your operating system.


Sadly, it is not a stigma:

https://www.jetbrains.com/lp/devecosystem-2019/cpp/

34% don't use any kind of unit testing.

35% don't use any kind of static analysis tooling.

36% don't use any kind of guidelines.


I wouldn’t rely on the JetBrains survey to tell the whole story for C++. TONS of places use Visual Studio for C++ and would never even know about a JetBrains survey.


Naturally it isn't representative of the whole industry, but it does show a trend.

I can also post an ISO C++ one with similar results.

Or the video from Herb Sutter's talk at CppCon, where only 1% of the audience confirmed using any form of static analysers.

As an anecdote, many enterprise shops that use VC++ are still on versions like 2008 or 2010, writing code as if MFC/ATL had just been released.

The same kind of shops that are running Red Hat Enterprise Linux 5, some pre-8 Java version, and such.


Those might be even worse.

I think my experience correlates with the study. Most lower-level code I have seen used neither unit tests nor any good structuring. At least in close-to-hardware projects that seems to be the rule rather than the exception. I think this is because many contributors there don't have a pure software-engineering background; they often haven't worked in higher-level stacks and therefore aren't familiar with those practices.


In my opinion, the advantages of Rust over C++ are not so much the borrow checker but all the other features, in particular the error handling. I understand the reasoning for implementing exceptions in C++, but I really don't like their implicit nature. Algebraic data types are really easy to use, and with the '?' operator, using them is very clean.
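As a small illustration of that explicitness (the function name is made up for the example): errors appear in the signature, and `?` propagates them without any try/catch machinery.

```rust
use std::num::ParseIntError;

// The error type is part of the signature; callers can't ignore it.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.trim().parse()?; // `?` returns early with Err on failure
    Ok(n * 2)
}
```

Compare this with a C++ function that might throw: nothing in its declaration tells you which failures can escape, or that any can at all.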

Having proper metaprogramming is also really great. Sure, you can definitely go overboard, but a few things are only possible with proper metaprogramming, like quickly printing the value of a struct or enum for debugging, and easy serialization/deserialization (like serde does). It's just a huge boon for introspection.

But it's not just the particular features that are important, it's the fact that best practices are integrated into the language. There are standard solutions for most things: error handling, unit tests, build system, package management, formatting style, etc. Sure, if you have a long-running C++ project, you're gonna have answers for all that, but the consistency matters when you want to integrate libraries.

I think if you're going to use Rust, you should play to its strengths rather than retrofitting existing C++ idioms onto it. There are both real advantages and very real costs to doing this, and you certainly shouldn't just switch an existing C++ codebase to Rust.


That's a really sad way of looking at things


Why is that?


It's not a "way of looking at things", it's a statement. A statement cannot be sad, and even if a statement makes someone sad, that doesn't make it false.

edit: yeah, figures. When I called out ElsaGate, I got replies like "your argument is literally 'think of the children'". But in this thread it's all so heartwarming; any statement that isn't dripping with soppy generalities gets turned into an "attitude" or "way of looking at things" (which remains BS, and reaching 500 karma and finding the downvote button doesn't make it false, either).


I won my first (and only) programming contest in Logo. Fun to see the idea alive.


Is there a command popup with tab search, and mouse gestures built in yet? These, as well as multi-tab selection, kinda won me over to Vivaldi a year and a bit back.


Libraries aren't supposed to be "fun".


I feel like this is a very negative way to view the situation. Libraries should totally be "fun". It should be "fun" to go and find a book that you enjoy or just spend time reading in an area dedicated to the love of books. It shouldn't be a place to just drop the kids off and leave, but it should be a place kids want to go. Reading can be fun and I almost feel like it is a "trendy thing" of my generation (Millennial) to hate it.


You can blame how reading is taught in schools.

There is nothing that will suck the enjoyment out of a story faster than a high-school curriculum.


Where have you seen the reading hate trend?


I should have made it clearer that this was my own personal experience. Many of the kids from my high school, and even the people I'm still in contact with from college, are astonished that I own a bookshelf full of books and actually read from it. I'm pretty sure there's no real relation here, but I'd compare it to how the same set of people feel about math: they assume it's boring and not fun, so they actively avoid it.


God that's awful. Where are you from?


Go around and ask people (the younger, the better) what the last three books they've read are.

Watch them struggle.


Not reading is one thing, hating books to be trendy is another.


The most popular selection on FB in the West for favorite books literally seems to be "I don't read lol".


I don't know if it's necessarily "hate" per se, but go read one or more forums on something like Reddit.

Depending on the forum, and the size of the comments, you'll sometimes see something like:

TL;DR

"too long; didn't read"

In some cases, you'll see a post where there will be a "TL;DR" section summarizing the "long form" version just below it.

The funny thing is, the "long form" might only be a paragraph or two, but apparently for a certain segment of the population who read forums, even that much information is "TL;DR" worthy.

These people don't want to read anything that won't fit inside a tweet. I'd dare say that for some, even 280 characters is just too much text to digest.

It leaves no room for thoughtful discourse. It leaves no room for intelligent debate and conversation.

I see such brush-offs of conversation online, and couple it with what I've heard of people who eschew being alone, who need noise (particularly people around them talking with each other) so that they don't have to listen to their own thoughts, and it makes me shudder to think where our society will end up.

In a way, we are already witnessing its decline.


And that is why libraries are closing.


Libraries are supposed to be whatever the fuck we want them to be.


> Libraries are supposed to be whatever the fuck we want them to be.

No, libraries serve a specific purpose. You can want them to be banks, or restaurants, or laundromats or whatever, but that doesn't make them anything but libraries. And besides, everyone else wants libraries to be libraries, so you've been outvoted in that regard.


I don’t know about everyone. The Carnegie Library here in DC is now an Apple Store. Our local library is turning into more of a community center, while many others are a place for the homeless to hang out.


This train of thought only made me ask myself whether the piece itself is real at all. The wonders of the internet.


Dunno if it's just a mobile problem, but most of the code has ^M instead of line breaks; hard to browse the online source.


It's because the code was written on classic Mac OS, which used CR newlines rather than the LF or CRLF styles used today.
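If you want to read such a file locally, converting the classic Mac CR endings to LF is a one-liner. A sketch (note this naive swap would also turn any CRLF pairs into double newlines, so it's only safe on pure-CR files):

```rust
// Convert classic Mac OS (CR-only) line endings to Unix (LF).
fn cr_to_lf(text: &str) -> String {
    text.replace('\r', "\n")
}
```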


lol, I knew LF and CRLF were a thing, but only CR? Interesting :D


Taken literally just CR seems very odd.

You probably know this, but LF is "line feed", which advances the paper by one line.

CR is "carriage return", which puts the carriage at the start of the line.

The combination makes sense because it does both, it returns the carriage and advances the line.

Just LF kind of makes sense because when you advance the line you can think of the line being empty.

But conceptually just CR suggests returning the carriage but that doesn't imply the newline.

Of course the terminology is originally from type-writers so it doesn't have to make sense, but it does seem odd that some systems chose just CR.


On a typewriter from around the time of the first Macs, the return key normally moves back to the beginning and moves down a line, so that's probably why they picked it. They also called the key "return" rather than "enter."


On a classic non-electric typewriter the carriage was returned by pushing a lever on the right side of the carriage. The lever could be pressed to the left relative to the carriage which would move the paper up. At the end of travel relative to the carriage if you kept pushing it would move the carriage back to the left. Hence CR and LF could be accomplished in one motion. If you wanted to advance the paper many lines, you'd push the lever multiple times - probably with the carriage all the way to the left the whole time.

On another note, I find it incredibly weird describing an "everyday" item like a typewriter because many people on HN may have never used one!


Strange that 'return' takes you to a new line, rather than returning you to the previous line. English!


It's a simple mechanical artefact. Carriage return moves the paper carriage all the way to the right (returning it to its initial position), while line feed will advance the paper vertically (feeding a line through the carriage). With that in mind, line feed and carriage return on their own make about as little sense.

On later typewriters both the line feed and carriage return were integrated into a single key or lever, which was just called carriage return, or return. From this perspective, an encoding using a lone CR for newlines might make more sense than one using a lone LF. But neither really makes intuitive sense in buffered, electronic systems. It's just how it is.


(Nevermind)


Brings back memories of staring at a keyboard unsuccessfully looking for a "return" key to satisfy "Press return to continue..."


Somehow the friends who told me how to use a PC always used "Return" instead of "Enter", but this was in Germany where we didn't have any text on this key, just a big arrow pointing down and left.


CR is a common line ending for RS-232 devices. I've got 3 or 4 in a cabinet for my current job which are "line-based" and which use CR as the line-ending. To issue a command to one of these devices, you terminate the command with CR and then the device processes it. These same devices will also send response data with CR line endings.

This is partly why stty and termios have options for CRLF translation on terminals. I'm sure there's some historical reason for that.


I thought it was to do with the fact that you might want to use them separately sometimes? For instance you might want to repeat a line without a newline for a "bold" effect, or with underscores for underlining.


It's no more odd than just 'LF'. At the end of the day, something has to represent 'end of line'. The outlier here is really MS DOS, wasting a perfectly good byte on pretend-lineprinter codes.


The internet also uses CRLF for newlines since about 1973: https://tools.ietf.org/rfc/rfc542.txt

So historically speaking, MS DOS is doing the right thing.


1971, really - https://tools.ietf.org/html/rfc158

In 1973, the 'internet' was yay big:

https://twitter.com/workergnome/status/807704855276122114/ph...

Apple sold more Apple I's than that in its first few months of existence.


Ahh, you're correct. I wasn't sure if CRLF was actually codified in rfc158, I just did a search for "CRLF" on it, I didn't realise they were using the hex values of CR and LF (X'0D' X'0A').

The internet is now 8 or 9 billion devices big, and CRLF is still the standard newline for internet protocols.


MS-DOS isn’t alone in that. https://en.wikipedia.org/wiki/Newline#Representations_in_dif... claims ”Atari TOS, Microsoft Windows, DOS (MS-DOS, PC DOS, etc.), DEC TOPS-10, RT-11, CP/M, MP/M, OS/2, Symbian OS, Palm OS, Amstrad CPC, and most other early non-Unix and non-IBM operating systems” used it.

The real surprise is the BBC Micro, which used LF+CR.


DOS did it because CP/M did it.

If I had to guess I'd say CP/M was developed for extra dumb teletypes that needed both to properly handle a newline. So many quirks in terminals date back to the days when everybody was just making it up as they went along. Legacy support is the root of most braindamage.


Teletypes needed it for timing. https://en.wikipedia.org/wiki/Newline#History:

”The separation of newline into two functions concealed the fact that the print head could not return from the far right to the beginning of the next line in one-character time. That is why the sequence was always sent with the CR first. A character printed after a CR would often print as a smudge, on-the-fly in the middle of the page, while it was still moving the carriage back to the first position.”


There's an entry point in the BBC Micro's OS for a routine that prints a character, or LF+CR if the character is CR (13). This routine prints the CR second, so that when it was originally called with 13 it can fall through into the main, non-translating character print routine with 13 in the accumulator, and then return to the caller that way too. (Both routines promise to preserve the accumulator.) This saves a couple of bytes in the ROM.

(There's also an entry point partway through the wrapper that just prints a newline. DRY and all that.)

The code is not exactly this, but differs from it in no relevant way:

    .osasci \ print char, translating CR to newline
        cmp #13:bne oswrch
    .osnewl \ print newline
        lda #10:jsr oswrch
        lda #13:\fall through
    .oswrch \ print char without translation
        pha
        ...
        pla
        rts
The Atom's ROM does the same thing, so they presumably just copied this for the BBC. After all, it doesn't really matter which order you print them.


A lone CR is certainly more odd than an LF. LF is implicitly a new line; CR by definition is a return to the beginning of the line, literally NOT the end of the line!

Who would choose 'beginning of line' to mean 'end of line'?? Oh, Apple :)


There's nothing more 'implicitly new line' about either, in a text file. But if you're particularly set on hardware analogies - there's a 'return' key on keyboards and no 'line feed' one.


Umm, there absolutely is, unless you want to start using the backspace character instead of line feed, or any other arbitrary swap ('null', maybe, or 'delete'? Sure, 'null' should mean new line...). 'Line feed', by its words, use, and history, means new line: it feeds lines. I don't understand how you could make such an argument.

The fact that some keyboards say 'return' (but usually have a down-and-to-the-left graphic) is a separate issue, now that we're talking actual hardware; the lack of need for dedicated CR and LF keys is obvious. Once again, today it's Apple making the odd choice of 'return', despite the rest of the industry. Many (most?) keyboards from the '70s onward had an 'enter' key, no?


Except that LF was also designated as NL before Apple came onto the scene. They were deliberately incompatible.


You're pulling things out of your ass. I'm interested in comments like yours. At some point between reading the previous comment and responding to it, you must have had some thought that goes, roughly, "Hey, I'm just going to make some shit up now and post it." Right? How else does it happen?

Apple's is not the only ecosystem that settled on carriage return. Who do you think was around for them to deliberately break compatibility with in 1977? Kildall?


You think when Woz was designing the Apple II he was going "oh better make this incompatible with UNIX"?

Just like some people will defend Apple to the death, others will desperately assign malice to literally anything they do. Funny company.


It's the same for pretty much every company that has ever existed, try not to take it personally.


It was? By whom? What were they 'deliberately incompatible' with?


UNIX, for one.


At the time Apple chose CR, UNIX was just one of a multitude of minicomputer operating systems. Compatibility with it would only have seemed important in hindsight.


I wonder if this would have been a big concern for home computer vendors at the time. How many of them would have used Unix? Even if they had, would Unix even have been popular enough to really matter?


No.


Well, they could just apply a minimal cleanup of the line endings before posting.


Then the people who want to compile it rather than complain about it would have to jump through an extra hoop to fix it again. They published it the right way.


Plus, if they replaced the ^Ms, what would we have to complain about? This truly is the best way. People can complain, and people can compile.


That is an excellent point although I suspect your question is going to be answered in spades when people notice who the developer posting the code is.


I don't know anything about this developer?


Agreed, at least make sure it will still compile with line endings changed before submitting pull requests.

This builds with CodeWarrior 10 (from 1996), which is probably OK with non-Mac line endings, but there are other old Mac codebases on GitHub using older toolchains that require CR (e.g. Pararena 2: http://bslabs.net/2016/11/13/building-pararena/)

A PPC Mac running OS X 10.4 is basically the only way to work with both git and classic Mac dev environments; I'll be trying this out myself.


I still have my Power MachTen CD. I used to love wasting time porting GNU/Linux source to that odd ball environment.


A year or two ago I built dropbear on Power MachTen, worked great and just needed a couple fixes to bring it back into compatibility with GCC 2.x.

Playing around with Professional MachTen (for 68k) is a real trip though. 4.3BSD, an even older GCC, and it implements virtual memory and protection by taking over the system memory manager. (You actually have to restart when quitting it)

It would be so cool to have something like MachTen on iOS, some people have tried but the restrictions on executable pages really restricts things.


Eh, I prefer when they don't clean up anything. It feels more like real archival work.


If you click on the "raw" view it will render properly in the browser.

You might also need to force your browser to display with "Western" text encoding (that's what it's called in Firefox, not sure about other browsers).


There's a CR->LF conversion at:

https://github.com/chungy/shockmac/


Viewing the raw file, at least in Chrome on OSX shows line breaks.


Same thing on desktop.


I put in a PR to fix that: https://github.com/NightDiveStudios/shockmac/pull/2

Best thing about GPL code is that we can fix it!


Changing line endings isn't a "fix"... if you do that, it's unusable on the intended system upon which you would work on this code - System 7 or Mac OS 8.


I'm pretty sure CodeWarrior was able to deal with any kind of line endings: CR, LF, CR+LF.


This might be a dumb question (I spend most of my days in Ruby): why can't this compile with gcc and make? It's all just C (and a few C++ files), right?


The fix that needs to be made here is in Git, which fails to process line endings in a sane way (and includes a bunch of insane, poorly documented config options that approach the problem completely sideways).

