nhamann's comments | Hacker News

I'm hesitant to bring this up since you seem much more informed about this topic, but your statement "Thuen admitted copying at least some components from 'Sophia'" seems to contradict Thuen's statement from the TechCrunch article:

"Visdom is not a translation of Sophia from C to the languages in which Visdom is written. We did not have the Sophia code when we created Visdom."

So I'm interested in what you mean by "copy".


My information comes from the court order, which states directly that Thuen admitted to some kind of copying (what kind, we do not know).


Uh, no - it states that Battelle claims this, not that the court has found this to be a fact. Kind of a huge difference.


Sorry if this is obvious, but: read books. A good place to start is here: http://cstheory.stackexchange.com/questions/3253/what-books-...

There was another reading list that I remember seeing, I think it was from Stanford's TCS website, but I no longer can find it.

Self-teaching is fun because you get to choose your own curriculum, but it's often frustrating too because if you get stuck, there is no professor or TA to unstick you. This issue can be somewhat mitigated via the internet.


Not mentioned in the article is that the Stacks Project is on GitHub: https://github.com/stacks

I've always thought that math books should be in digraph rather than linear form. What would be interesting is to combine this with a wiki. You could have alternate proofs of the same lemma, or even entirely different presentations (starting from different axioms, for instance).


This article has way too many words and not enough math. There is, in fact, nothing scary about big O notation once you dissect it, and it's a shame that so many people seem to think otherwise.

Here's the definition: if f and g are functions (let's say real-valued functions defined on the positive reals), then we say that f is big O of g, written f = O(g), if there exists a real number y and a real number K, K > 0, such that

  f(x) <= K * g(x)
for every x > y.
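
For example, f(x) = 3x + 10 is O(g) for g(x) = x: take K = 4 and y = 10, since 3x + 10 <= 4x for every x > 10.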

If that makes sense, then done.

Else:

The first thing to do when meeting any mathematical definition that you don't understand is to throw away parts of the definition until you do understand it, then add them back in one by one. In this case, let's forget about the constant.

New definition: For functions f, g, f is blorb of g, denoted f = blorb(g), if there is a y such that

  f(x) <= g(x)
for every x > y.
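
For example, f(x) = x + 100 is blorb of g(x) = x^2: take y = 11, since x + 100 <= x^2 for every x > 11.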

"f is blorb of g" actually just means that there comes a point after which g is never smaller than f. This gives us the first ingredient of big O: we are concerned only with asymptotic behavior. f could take on immense values for small x and still be O(g) as long as f eventually becomes always smaller than g.

The reason for caring about asymptotic behavior is that we often don't care about the time complexity of an algorithm for very small problem sizes. Even the traveling salesman problem is solvable on a Raspberry Pi when the instances are tiny.

Okay, I hope we understand the above definition. Now we add the constant back into the fold and see if we can make sense of it. From what I can see, the constant is there for computer scientists who want to paint with broader strokes. There can be a huge practical difference between f1(n) = 2n and f2(n) = 2000n (the difference between a computation taking a day and taking 3 years), but they're both O(n) because complexity theorists are more concerned with O(n^2) versus O(2^n) than they are with O(n) versus O(5n). (Also could be because in practice algorithms with wildly varying constant factors out in front are rarely seen?)
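
To put a number on that: here's a quick Python sketch (the function names are just for illustration) that finds the point past which n^2 exceeds 2000n; beyond it, the quadratic cost dominates despite the linear function's huge constant.

  def crossover(big_constant, worse_class, limit=10**7):
      # Return the smallest n at which the asymptotically worse function
      # (n^2 here) exceeds the one with the big leading constant (2000n).
      for n in range(1, limit):
          if worse_class(n) > big_constant(n):
              return n
      return None

  print(crossover(lambda n: 2000 * n, lambda n: n * n))  # prints 2001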

For an alternative to big O notation, you should check out Sedgewick and Wayne's Algorithms, 4th ed. They use something they call "tilde notation" which preserves the leading constant factor. (See: http://introcs.cs.princeton.edu/java/41analysis/)


> Also could be because in practice algorithms with wildly varying constant factors out in front are rarely seen?

The main reason is that you want a result that does not depend on small implementation details, i.e. is consistent across programming languages and CPU architectures.

Things as simple as larger cache size or a slightly better hashing function in a dict can increase the running speed of a program by a constant factor.


It first clicked for me when reading Barr and Wells' Category Theory for Computing Science [0], but I don't know about your mathematical background. Category theory is algebra, so it's probably advisable to study basic group theory before tackling category theory. (I have a hard time seeing how functors would make sense before you've understood the general concept of a homomorphism, which is perhaps easiest to grasp in the context of groups.)
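
(For a concrete homomorphism: exp maps the reals under addition to the positive reals under multiplication, and exp(a + b) = exp(a) * exp(b), so it carries the structure of one group onto the other; a functor is the same structure-preserving idea, lifted from groups to categories.)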

[0]: http://www.case.edu/artsci/math/wells/pub/ctcs.html


I tried to switch to Colemak once, but it was too big of a change. I did, however, successfully switch to a variant of the Carpalx QWKRFY layout: http://mkweb.bcgsc.ca/carpalx/?partial_optimization

Instead of all 5 swaps, I only do the first three (K/E, J/O, F/T). According to their scoring function, that gets you most of the way there anyway (for whatever that's worth).

Their rankings of popular layouts might also be of interest: http://mkweb.bcgsc.ca/carpalx/?popular_alternatives


> Engelbart hated our present-day systems.

I'm not trying to say "[citation needed]" here, but I would be interested in seeing a source for this. Does anyone know of one?


The citation, in this case, is Doug Engelbart. Bret Victor is reporting what Doug Engelbart told him about present day systems.


[citation needed]



I take it you're being sarcastic, but the article doesn't contain any such information. Or are we missing something?


I was able to find some sources. The first is this article: http://www.infoworld.com/d/developer-world/high-performance-....

> Easy-to-use computer systems, as we conventionally understand them, are not what Engelbart had in mind. You might be surprised to learn that he regards today’s one-size-fits-all GUI as a tragic outcome. That paradigm, he said in a talk at Accelerating Change 2004, has crippled our effort to augment human capability. High-performance tasks require high-performance user interfaces specially designed for those tasks. Instead of making every task lie on the Procrustean bed of the standard GUI, we should be inventing new, task-appropriate interfaces. No, they won’t work for everyone. Yes, they’ll require effort to learn. But in every domain there are some experts who will invest that effort in order to achieve greater mastery. We need to do more to empower those people.

The above cites Engelbart's 2004 talk "Large-Scale Collective IQ", so that is probably a good place to look as well.

There's also this page, which presents some interesting related comments by Alan Kay: http://traction.tractionsoftware.com/traction/permalink/Blog...

> Alan Kay: ... If you have ever seen anybody use NLS [Engelbart's 1968 hypertext system for which he invented the mouse and chord key set] it is really marvelous cause you're kindof flying along through the stuff several commands a second and there's a complete different sense of what it means to interact than you have today. I characterize what we have today as a wonderful bike with training wheels on that nobody knows they are on so nobody is trying to take them off. I just feel like we're way way behind where we could have been if it weren't for the way commercialization turned out.


It depends on the type of article. I write a lot of blog posts about math. The most convenient way to do this is using Mathjax. If browsers had native support for mathematical notation then I would be inclined to agree with you, but this is not currently the case.


Can you get away with using unicode chars instead? ... Sounds like this could be a good open source project.


I don't know how extensive Unicode's math symbols are, but it's unclear to me how you would display things like matrices or align multiline equations.
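
For instance, here's the kind of thing that is a few lines of LaTeX under MathJax but painful to typeset with bare Unicode characters: a 2x2 matrix and an aligned multiline derivation.

  \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
  \qquad
  \begin{align}
    (x + 1)^2 &= x^2 + 2x + 1 \\
              &\ge 2x + 1
  \end{align}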


I'm not sure either. But I know that you can do crazy things with Unicode, and I'd bet that if you treated Unicode as a sort of low-level compile-to target, you could then design a high-level language in which to write math text. I'm not sure of the practical benefit, and maybe this would be even more inaccessible to people... but still: I think JS is great, but wouldn't pure HTML and encodings be even cooler?

Food for thought: http://shapecatcher.com/


And load fonts with JS?


Isn't that what MathML is meant to do? Browser support is incomplete, but it sounds like a perfect use case for a JavaScript polyfill, so you could make pages that work with either native browser support or JavaScript.


I had an insight a few months back when I was looking at djb's website (http://cr.yp.to/djb.html). I was spending far too much time playing with toys (static site generators) and not enough time actually producing interesting content. Nobody cares about your pretty blog theme; they care about your ideas.


I've been adopting an incremental approach. Enough style to get you going, and as you produce content, you refine the design. So by the time you announce it to your friends, the world, etc., you've got a bit of content and a good looking site.


The hardware content in this book is not detailed enough for computer engineering. It's really for CS students who want to understand roughly how computers are made. (You can see this just by counting pages: the first 5 chapters are the hardware chapters, and they span only 100 pages; the remaining 200 pages cover the software stack from assembly up.)

For example, one of the assignments is to design a 16-bit adder in their toy HDL, but they never cover carry lookahead adders. The only thing that matters is that your circuit passes the tests, so ripple carry is considered okay.
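
For the curious, the accepted ripple-carry design is easy to sketch outside the book's toy HDL. Here's a rough Python illustration (not from the book) of why its worst-case delay grows with the word width:

  def full_adder(a, b, cin):
      # One-bit full adder from XOR/AND/OR, the gates built earlier in the course.
      s = a ^ b ^ cin
      cout = (a & b) | (cin & (a ^ b))
      return s, cout

  def ripple_carry_add(xs, ys):
      # xs, ys are lists of bits, least significant bit first. Each stage
      # must wait for the previous stage's carry, so the critical path grows
      # linearly with the number of bits; avoiding that is exactly what
      # carry lookahead adders are for.
      carry, out = 0, []
      for a, b in zip(xs, ys):
          s, carry = full_adder(a, b, carry)
          out.append(s)
      return out, carry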

Similar efficiency/performance issues are glossed over throughout. Propagation delay is never covered, and the sequential circuits use idealized clocks (instant transitions between low and high). They also don't describe how to build flip-flops from latches: the D flip-flop is given as a primitive and you build the other elements from there.

K-maps (Karnaugh maps) are not covered either, and caches are ignored as well.

Still, the book is amazing for its intended purpose. If you don't already know this stuff, this is an easy way to get a somewhat detailed (though abstract) view of how computers work without getting mired in all the concerns that accompany the engineering of actual computers.


The things you list are among the fundamentals of circuit design. I can't imagine that this curriculum is of much use without them.

Also, I seriously doubt that it covers everything required to create, from scratch, the video subsystem needed for displaying graphics. Or anything like that.

