jgg's comments | Hacker News

Spot on.

I also feel like even if you were being charged the tuition of 30+ years ago, you wouldn't be getting nearly as much for your money. I looked through the introductory Russian textbook for my state university and was kind of blown away by how verbose and obnoxious it was. This same school cut operating systems and anything related to low-level programming (people complain that they just want to learn Java or .NET so they can get a job, so I guess they got their way), and rearranged basic English to make it easier to pass. 100 years ago, learning Greek and/or Latin was standard - it seems like there's a noticeable trend towards "dumbing down", or maybe I'm viewing a time period I didn't live in with rose-colored glasses...


100 years ago, learning Greek and/or Latin was a large part of all there was to study. Now we have multitudes of fields and subfields regularly generating actual demonstrable progress in the capabilities of humanity, few of which require knowing anything about Themistocles.

Knowledge has gotten so complex and siloed that one of the most useful things for the inter-disciplinary academic project I observed turned out to be a facilitator with no subject-area expertise in any of the fields, who undertook the task of building glossaries to translate jargon and storyboard concepts people in different departments were trying to explain past each other.

The chain of conceptual courses necessary to reach the level where one can understand recently published papers advancing a particular subfield has grown longer and longer in STEM. At some point, your program decided to compromise on the things that weren't necessary to reach the advanced levels, so that they had room to get people there at all during their undergrad. There are fields and universities where this isn't the case, where an undergrad degree only qualifies you to start a master's. Some revision of the standard unit of 'the four-year degree' is probably a reasonable thing to examine at this point.


> 100 years ago, learning Greek and/or Latin was a large part of all there was to study. Now we have multitudes of fields and subfields regularly generating actual demonstrable progress in the capabilities of humanity, few of which require knowing anything about Themistocles.

Learning Greek and/or Latin was done as a) a mental exercise and b) a way to access an ancient body of knowledge that was basically considered something any educated person should know. Your statement that there was nothing else to learn is highly ignorant.

> At some point, your program decided to compromise on the things that weren't necessary to reach the advanced levels, so that they had room to get people there at all during their undergrad.

Yeah, to reach advanced levels where the bulk of their graduates don't have to understand the English language well, and the CS grads don't know what a pointer is or how memory works.

I'm going to assume for my own mental health that you're a troll.


Well, it's possible that you're both right, in different situations:

* Let's just get all these School of Business clowns in and out, OK?

* Now, science geeks, we're gonna see some serious shit! (at least if you go to the right school)

Anyway, that's a possible interpretation :-)


Programming and graphic design perhaps (personally, I've yet to meet someone who programmed as a child who was better than a good programmer who learned later), but I really doubt there is much of an advantage to teaching your child Depth-First Search.

I don't really think CS is like learning a foreign language at all. The advantage for languages supposedly comes from the fact that we are hard-wired to acquire language in a specific age range (the "critical period hypothesis"). The advantage is biggest for infants and steadily tapers off until puberty. I doubt we're hard-wired to acquire CS the same way - programming or theory.


> but I really doubt there is much of an advantage to teaching your child Depth-First Search

There is a huge advantage if you can teach it to him in a way that he actually understands it, not just memorizes the algorithm - the unfortunate way kids learn to divide and multiply without understanding wtf they are actually doing - which is a big IF. It's a basic reasoning skill, even more basic than "basic logic" or arithmetic, and if you can wire it into a very young mind you've changed that mind forever. He will understand anything from philosophy to the theory of evolution much better, because you will have primed his mind for algorithms and logic. It doesn't matter if he ends up an investment bank manager or a politician, he will have learned how to think algorithmically.

90% of the population are, imho, what I call "mud-minds": they are incapable of a deep understanding of logic and algorithmic knowledge. They can just learn instructions and apply them, sometimes to great effect, but they don't "grok" them. And when it comes to truly abstract concepts that have no direct equivalent in the real world, they can only recite definitions from books; they have no "intuitive vision" of abstractions. I think they are like this because their parents and teachers fed them a strict diet of 100% practical knowledge and scientific facts and never exposed them to abstract algorithms, patterns and processes. (As a different line of thinking, I suspect "separating the mind from reality" also helps young minds get a better grip on abstraction - "running" an algorithm or visualizing a pattern as pieces of a fantasy world or the advice of an imaginary companion - though child psychologists would obviously not be happy with such ideas about education...)

By teaching your kid something like depth-first search (A* search on a game maze map would be even more awesome, if you can grab a kid's attention with it...), you give him a chance of not becoming such a "mud mind".

True, if a kid is technically oriented, you'll probably have more luck teaching him/her general programming before a bunch of algorithms. But the algorithms are the stuff that really "sharpens the mind", and you don't even need programming to get them - they can be explained in mathematical or graphical terms. You don't even need a computer to teach a kid depth-first search, and you don't even have to tell him what it's called; just try to imprint a deep, intuitive understanding of the process and the algorithm on his mind.
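For anyone who wants to see what the algorithm itself boils down to, here is a minimal sketch of depth-first search (written in Haskell purely for illustration; the `dfs` and `maze` names, the graph representation and the room numbers are all made up for this example, not anything from the thread):

    import qualified Data.Set as Set

    -- A graph represented as a function from a node to its neighbours.
    type Graph a = a -> [a]

    -- Depth-first search: visit a node, then explore each unvisited
    -- neighbour fully before moving on ("go deep before going wide").
    dfs :: Ord a => Graph a -> a -> [a]
    dfs neighbours start = go Set.empty [start]
      where
        go _       []       = []
        go visited (n:rest)
          | n `Set.member` visited = go visited rest
          | otherwise              = n : go (Set.insert n visited)
                                            (neighbours n ++ rest)

    -- A tiny made-up maze: rooms 1..6 with one-way doorways between them.
    maze :: Graph Int
    maze 1 = [2, 3]
    maze 2 = [4]
    maze 3 = [5, 6]
    maze _ = []

    main :: IO ()
    main = print (dfs maze 1)   -- [1,2,4,3,5,6]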


> personally, I've yet to meet someone who programmed as a child who was better than a good programmer who learned later

True, but then you're already filtering out all the ones that are worse because you preselect for 'good programmers who learned later'. You should compare those who programmed as a child as one group versus those that learned how to program later.


The 'critical period' hypothesis is not the only theory, and I don't believe it's a useful one.

I think you're misinformed if you think brain development can be said to taper off in any way over 18 years. There is pretty clear evidence that it happens in distinct stages.

I think the real mechanism is that learning symbolic logic early builds a kind of confidence that adults who didn't learn it find hard to acquire. Of course, my opinion doesn't really count here since my karma is negative one now.


> I think you're misinformed if you think brain development can be said to taper off in any way over 18 years.

That's not what I said, and the advantage you talked about supposedly does taper off until the child is around 12.

(edited for clarity)


The same encrypted communications that the NSA tries to subvert or sabotage with bribery and legal extortion?


It's really fast, has minimal boilerplate and supports functional programming without any of the "orthodoxy" of Haskell. That is, you can write a recursive function, but you can also write a for loop.

Imagine something that can compete with C and C++, but doesn't require all of the low-level reinvention and memory management. It's like a high-level language for smart people that doesn't feel entirely impractical. It has some things that people bitch about (like having to use +. to add two floats and + to add two ints), which don't really bother me that much.

If it were more popular and had better libraries/platform support (unless that's changed drastically in the past year or so), it would be a serious contender for general development. Being completely honest, I think Jane Street is probably the biggest organization pushing OCaml forward, and from a navel-gazing standpoint you can view that as either good or bad.

Use it because you learned it and thought it didn't suck, I guess?

EDIT: changed some phrasing


> "... unless that's changed drastically in the past year or so ..."

Actually a lot has happened in the last year or so. If you're not aware of it, you could start by looking over the end-of-year post from OCaml Labs [1]. I'm aware that it's difficult to keep a central overview of all the progress but there is a lot of work being done, in many ways, to improve the ecosystem.

[1] http://www.cl.cam.ac.uk/projects/ocamllabs/news/#Dec%202013


Whoa, that's cool. I had no idea people were still working on it at that level. The last thing I saw was a version of OPAM that didn't seem to work that well...I'll have to go look around.


I think we're pretty clearly the most intense user (most code, most developers using it), but not technically the biggest company using it. Bloomberg and Facebook are examples of bigger companies using it in serious ways.


Yep, that's what I was trying to convey.

Thanks for your work.


The pretentious view that Haskell's core is a mathematical structure familiar to mathematicians is wrong.

Haskell borrows concepts from category theory (a "field" so far abstracted from most math that most mathematicians need no more than a handful of its concepts) to label its typeclasses, and those typeclasses don't always faithfully follow their namesakes.

Further, Haskell could exist and keep its property of referential transparency without the 'mathematical' structures or their names. Here's a version of Haskell without monads: http://donsbot.wordpress.com/2009/01/31/reviving-the-gofer-s...

Monads, functors, arrows, etc. are more aesthetic than fundamental to the core nature of Haskell. They're just a bookish (and sometimes bureaucratic, IMO) design choice. What I mean by that is that the language designers took a concept (referential transparency) and built an understandable structure around it, but aside from this, made a bunch of nested, derived typeclasses of dubious value or purpose. Sometimes I look at a Haskell library and have a more academic version of the "Why the fuck is this a class?" moment.
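To make the "namesake" point concrete, here is a small illustrative sketch (mine, not the commenter's; the `Counted` type is invented for the example): GHC only checks the types of a Functor instance, not the functor laws, so a "Functor" that isn't one in the category-theory sense compiles without complaint.

    -- A type-correct but law-breaking Functor instance.
    newtype Counted a = Counted (Int, a) deriving Show

    instance Functor Counted where
      -- Violates the identity law:
      -- fmap id (Counted (0, "hello")) gives Counted (1, "hello"),
      -- not Counted (0, "hello"), yet this compiles fine.
      fmap f (Counted (n, x)) = Counted (n + 1, f x)

    main :: IO ()
    main = print (fmap id (Counted (0, "hello")))   -- Counted (1,"hello")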

I'd like to comment tangentially that Haskell is almost the OOP/Java of the 2010's - the programming community claims that it makes your code "safer", and in some sense it very much does, but its features are being perverted and overhyped while its caveats are being forgotten.


Corecursion is useful for taking a recursive algorithm and transforming it to a stream-like output pattern.

> But this is ridiculously inefficient.

Well...you might be surprised in the general case. Because of lazy evaluation, Haskell won't necessarily implode on non-TCO recursive functions (example: http://stackoverflow.com/questions/13042353/does-haskell-hav...), and will actually sometimes cause a stack overflow on "optimized" functions.

> Until now, the only "principled" way I knew of to transform this into something sensible was through memoization, but that's still very wasteful.

I think "real" Haskell code usually favors operations of lists over recursive functions. That said, the standard way to transform your recursive structure into something "sensible" is to use a tail-recursive function. In basically any other functional language, you'd go with that approach.

To get the same "benefit" in Haskell, you'd have to force strict evaluation inside of a tail-recursive function. This prevents a long chain of thunks from building up and causing problems. That said, Haskell doesn't always build up a normal stack on a regular recursive call.
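As a rough sketch of that point (the function names are illustrative, not from the thread): with a lazy accumulator the tail-recursive loop just piles up a thunk, while a bang pattern forces the accumulator at every step and keeps it in constant space. (Whether the lazy version actually blows the stack also depends on whether GHC's strictness analysis kicks in.)

    {-# LANGUAGE BangPatterns #-}

    -- Lazy accumulator: tail-recursive, but each step builds another
    -- thunk (0 + 1 + 2 + ...), which may overflow the stack when forced.
    sumToLazy :: Int -> Int
    sumToLazy n = go 0 1
      where
        go acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)

    -- Strict accumulator: the bang forces acc at every step, so the
    -- loop runs in constant space like an ordinary iterative loop.
    sumToStrict :: Int -> Int
    sumToStrict n = go 0 1
      where
        go !acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)

    main :: IO ()
    main = do
      print (sumToLazy 1000)        -- 500500 (fine at small sizes)
      print (sumToStrict 10000000)  -- 50000005000000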

Otherwise, you'd just use a list structure.

(Someone correct me if I've said something stupid.)

ref:

http://www.haskell.org/haskellwiki/Tail_recursion


My remarks were addressed to people who don't need tail recursion or the benefits & drawbacks of lazy evaluation explained to them, so I think they went a little over your head.

I presented a recursive definition, a stream based definition, and a tail-call definition of the fibonacci function. In that toy example, it's easy to get between the three different forms, but in many cases the connection is far less obvious. We need principles that unite the different forms, and allow us to move between them. Co-recursion is one of those principles.
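For concreteness, the recursive and the stream-based (corecursive) fibonacci definitions being referred to presumably look something like this sketch (function names are illustrative, not the poster's):

    -- Naive recursive definition: a direct transcription of the maths,
    -- but exponential in time because it recomputes subproblems.
    fibRec :: Int -> Integer
    fibRec 0 = 0
    fibRec 1 = 1
    fibRec n = fibRec (n - 1) + fibRec (n - 2)

    -- Corecursive / stream definition: instead of consuming an argument,
    -- it produces the whole lazy, infinite sequence, each element built
    -- from elements already produced. Linear number of additions.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    main :: IO ()
    main = do
      print (fibRec 10)    -- 55
      print (fibs !! 10)   -- 55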


> so I think they went a little over your head.

I understand lazy evaluation and tail recursion fine. I interpreted your comment as presenting corecursion as the only logical alternative to naive recursive algorithms with or without memoization.

You've tacked on the part where you say the latter algorithm is equivalent (due to Haskell's evaluation) - I get that. I'm still not understanding what you mean by only knowing inefficient, naive recursion in contrast to corecursion. In practice, I have rarely seen corecursion or naive recursion used, but maybe we read different code.

> In that toy example, it's easy to get between the three different forms, but in many cases the connection is far less obvious. We need principles that unite the different forms, and allow us to move between them. Co-recursion is one of those principles.

Uh, okay.


I once read a textbook introducing the theory of algorithms that started with the Fibonacci series. The author gave a formal definition, and translated this directly into a naive, recursive algorithm. Then he showed how to improve the algorithm with memoization. This was a real revelation to me: using this one simple, generally applicable transformation, all kinds of algorithms can be improved.

Then the author introduced the iterative Fibonacci algorithm, and formally proved that the iterative and recursive algorithms were equivalent. But the author failed to explain where that iterative algorithm came from: there was no general purpose tool like memoization that could be used to derive the iterative form from the recursive form. I still remember the intellectual disappointment.

Now, with co-recursion and a couple of other techniques, that gap is finally bridged. Using co-recursion, I can derive the stream algorithm directly from the recursive definition. And then I can apply deforestation techniques and tail-call optimization to derive the iterative algorithm from the stream algorithm. That's a pretty powerful intellectual toolset, don't you think?
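The iterative form at the end of that derivation would look roughly like this (again a sketch under my own naming, not the poster's code): the lazy list from the stream version has been deforested away, leaving only the two values it was threading along.

    {-# LANGUAGE BangPatterns #-}

    -- Iterative / tail-recursive form: no intermediate list remains,
    -- just the two accumulators; the bangs keep them evaluated so the
    -- loop runs in constant space.
    fibIter :: Int -> Integer
    fibIter n = go n 0 1
      where
        go 0 a  _  = a
        go k !a !b = go (k - 1) b (a + b)

    main :: IO ()
    main = print (fibIter 10)   -- 55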


> I once read a textbook

I think I've read that book too.

For the benefit of anyone following along at home: Turing machines (i.e., any Turing-complete language), general recursive functions, and the lambda calculus are provably equivalent in power - any algorithm expressible in one can be expressed in the others - and the unproven-but-accepted Church-Turing thesis says that these models capture everything we'd intuitively call an algorithm.

> And then I can apply deforestation techniques and tail-call optimization to derive the iterative algorithm from the stream algorithm.

How do you use deforestation and tail-call optimization to derive an iterative function from a stream in the general case?

You're also pulling a false comparison: you've jumped from a 'general' algorithm to "what a corecursive function is evaluated as under Haskell's semantics".

Corecursion builds a data structure. A TCO function, for example, won't produce that kind of output. The corecursive function could only be directly equivalent to the linear time, constant memory TCO function in a lazily-evaluated runtime (if that's true - tail recursive functions in Haskell can actually blow up the stack due to lazy evaluation).


> How do you apply deforestation ... in the general case?

How do you cut an iron bar in half with a pair of scissors?

Deforestation is a tool; not a universal truth.

> You're also pulling a false comparison ... The corecursive function could only be directly equivalent to the linear time, constant memory TCO function in a lazily-evaluated runtime

But I'm using a lazily evaluated runtime. My code is Haskell. And in any case, I used deforestation in the last step to remove the lazily evaluated data structure.

If I understand you correctly, you're telling me that high-level mathematical formalisms work better in Haskell. Yes, they do.


Because you're funding a project that may or may not work out. The whole point of crowdfunding is supposed to be, "You give us money to develop an idea, and you might get a reward in return", not, "I'm paying you $35 for an order that includes a bumper sticker and a t-shirt with your logo on it." It's less like your carpenter example, and more like a VC firm dumping money into a startup.

It's supposed to be speculative investment. Whether or not Kickstarter and others have decided to backpedal and pretend they have some kind of legal precedent for holding project owners to their word, in order to make their own business seem more legitimate than it is, is another story.


There's no requirement that the Kickstarter complete the core project - just that they provide any promised reward.

Many projects where the core idea is highly speculative provide explicit rewards, and then a conditional offer to get the benefits of the main project if it works out.

Also, in what way is it not like an experimental house design using, eg, 3d printed concrete (in some new manner)?

There's a well established body of contract law about how to handle that kind of contract to build a house with a new technology, and the liabilities involved.


> There's no requirement that the Kickstarter complete the core project - just that they provide any promised reward.

That makes no sense. Much of the time, the reward is directly related to the core project.

> Also, in what way is it not like an experimental house design using, eg, 3d printed concrete (in some new manner)?

I'll repeat myself - I have no idea what model Kickstarter is now framing themselves under or how it applies to the legal system, but in the original model for crowdfunding in general, you were giving a voluntary donation to an idea with the explicit knowledge that you might never receive the product or any associated rewards if the project failed. You weren't paying for a t-shirt - you were giving money to someone's business/creative idea and receiving a "free" t-shirt in return.

What Kickstarter's Terms of Use, then and now, actually imply and signify in a legal sense, and whether or not the defendant can easily make the claim that risk was fundamental to the nature of Kickstarter project backing, and thus that the backers voluntarily chose to engage in a speculative transaction which carried no legal obligation to be fulfilled, are questions for an attorney to answer.

Do I think the guy cut and ran? Yeah. Do I think that's wrong? Yeah. But I also think if it was made completely clear to each and every person donating that what they were doing was contributing to a project, not making a purchase (Kickstarter even states themselves that they are not a store here: https://www.kickstarter.com/blog/kickstarter-is-not-a-store), then enacting Consumer Protection borders on protecting people from their own stupidity.


> That makes no sense. Much of the time, the reward is directly related to the core project.

That is the decision of the project creators. They don't have to offer the product as a reward. They could offer stickers/tshirts/whatever.

Kickstarter is not a store. It's the creators who are choosing to treat it like one, and they should be wholly responsible for their decision to do so.


...this is meant to be funny, right?



Yes, but unfortunately, a lot of the "serious" studies reported in newspapers every day seem to follow the same formula: find an accidental correlation between two causally unrelated variables and say you've found an amazing new result.



The problem is a cultural one, not a technical one.

What makes you think this community of people who work for the very companies that are being gagged, backdoored, surveilled and bribed are the ones who are going to fix it with a magic voting program? This community can't even come to a consensus to admit that Dropbox quite obviously has the hands of the Powers That Be rammed firmly into its asshole now (I apologize - maybe Condoleezza Rice had a revelation after advocating for the invasion of Iraq on false premises, and now deeply cares about the security of the world's porn backups).

Besides, I'd argue the system we have now is better, because it's hard to forge votes when you have hundreds of people across many municipalities counting votes and thinking for themselves. If you implement a national voting system in software, it would be much easier to corrupt by virtue of being centralized.


Major in something unrelated to CS that will challenge you.

