Hacker News | vph's comments

I don't believe it.


>One speculation is that Wright was able to ...

Apparently, he was able to prove it, but did not allow others to keep copies of the evidence. At best, you can suspect. But how can you conclude so strongly that this was a "scam"?


Because there is no plausible narrative that can account for the known facts and account for the private proof as anything more than smoke and mirrors.

In particular, the person best placed and best motivated to produce such a narrative -- Wright himself -- offers no explanation.


The argument is that there is no legitimate reason for Wright not to allow them to keep copies of the proof, aside from the possibility that the hardware/software he provided was tampered with and the proof would not actually work on any machine he didn't provide.


I can think of a reason. If Gavin, for example, had both a new text and a signature that was demonstrably from Satoshi, then he could publish this and screw Craig's big reveal. I mean, if anyone is a reasonably credible Satoshi, it's Gavin.

That, at least, would be a good reason to not give the signature to Gavin or Jon.

However, the moment Craig failed to sign an unambiguously new text on his blog with a known Satoshi key, everything that went before was suspect. Would it have proved he was Satoshi? No. Would almost everyone except a few tinfoils have given him the benefit of the doubt? Assuredly.
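For anyone unfamiliar with why that test is so decisive, here is a minimal sketch (mine, not from the thread) of what signing a fresh message looks like in Go. It uses the standard-library crypto/ecdsa with the P-256 curve and a freshly generated key as stand-ins; Bitcoin actually uses secp256k1, and the public key in question would be one already known to belong to Satoshi, not a new one.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // Stand-in key pair. In the real scenario the public key is already
        // known to belong to Satoshi (e.g. from an early block), so only the
        // matching private key can produce a valid signature.
        priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        // An unambiguously new message chosen by the verifiers.
        msg := []byte("new text chosen by the verifiers on the day")
        hash := sha256.Sum256(msg)

        // Producing the signature requires the private key...
        sig, err := ecdsa.SignASN1(rand.Reader, priv, hash[:])
        if err != nil {
            panic(err)
        }

        // ...but verifying it needs only the public key and can be done by
        // anyone, on any machine, with no hardware supplied by the signer.
        fmt.Println("valid:", ecdsa.VerifyASN1(&priv.PublicKey, hash[:], sig))
    }

The point is that verification is machine-independent, which is exactly what a demo confined to Wright's own laptop sidesteps.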

The fact that that didn't happen is very strong evidence that Craig does not have the keys from early blocks. Does that prove he is not Satoshi? No. But he's given about as much reason to believe he is Satoshi as I have. And I'm pretty sure it's not me.

Or is it???


> That, at least, would be a good reason to not give the signature to Gavin or Jon.

Yes, but didn't the message that was (supposedly) signed contain words to the effect of "Craig Wright is Satoshi", or else his initials? How would Gavin be able to use that message to show that Gavin was Satoshi?


That raises serious questions, but is not at all conclusive.


Sure, but then you consider the undeniably faked proof published on his blog, and now this inability to meet his recent promise to provide real proof today. For someone who actually had Satoshi's keys, all of this is probably more work than just publishing real proof in the first place.

If he were actually the creator of Bitcoin, this is the worst possible way he could convince people of it. That's where the extreme doubt comes from.


I think you greatly underestimate the real harm done to people who are victimized by online (and offline) mobs; people become objects of the sport of public vitriol. Just look at the remarks in this discussion; it's not a rational discussion, it's people acting out in anger - because it's become acceptable to hurt this person.

I can completely understand someone not wanting to deal with it any longer. Also, it doesn't matter what he does or says at this point; nobody will look at the evidence (even now, likely few in this discussion know more than what others in the mob have told them) and he will be lynched. Anything he does will only fuel the fire.


Wright has permanently damaged other people's reputations, and put substantial effort into impersonating another person. I don't know if "victim" is appropriate.

At any rate, this conversation is about whether or not he is Satoshi. This is cryptography; there is no need to convince people. There is either definitive, absolute evidence, or there is not. And as @lucozade put it, "he's given about as much reason to believe he is Satoshi as I have".


Says who? Who tried and convicted him? The accusations of an angry mob are not at all reliable. And guilty or not, who are we to hurt him? If what you say is true, then there is no reason for people to act this way; they could simply forget him and move on.

> This is cryptography; there is no need to convince people. There is either definitive, absolute evidence, or there is not

In theory, but unfortunately not always in the real world.


> Who tried and convicted him?

This is not a court of law. People have a right to (and do!) form their own opinions without going through the courts. Calling something a scam does not require a legal judgment, AFAIK... unless you want to argue that it would be slander/libelous to do so. However, this would probably be a tough sell since saying "X is a scam" in everyday life would probably be interpreted (for legal purposes) as saying "It is my opinion that X is a scam" and opinions cannot be slander/libelous AFAIK.


>Because the future is whatever you're building software in today.

This is a seriously unhealthy attitude for software engineering. This attitude will create legacy systems, and those legacy systems will live on forever.


What's wrong with "legacy" systems that just WORK and create VALUE for people?


There's not much wrong with such legacy systems. Unfortunately, they are the exception. Most likely, your legacy systems break and you have no idea how to fix them. Or you have to hire one guy for life because he is the only one who can maintain and fix your systems, and you pray that he doesn't get sick or get hit by a bus.


I believe that whoever is in charge of federal R&D funding will have to be extremely smart and balanced in their views, because R&D plays a great role in the economy and future of this country. The right approach is a balance between fundamental and applied research. And it's not just about the balance, but also about which areas to invest the money in.


I think the government would be better served by spending money on fundamental science research; that seems to me to be the great shortfall of industry.

Intel, Westinghouse, biotech in general, and Dow do a pretty good job on the applied piece IMHO.


>Intel, Westinghouse, biotech in general, and Dow do a pretty good job on the applied piece IMHO.

Private companies are in a great position to do applied research, but they are often slow or unwilling to distribute the research results for the benefit of everyone.


They actually do a good job of that as well, it's just that they do it in the form of patents which means that you have to pay to benefit from their work.


To be fair, scientists by and large understand the need for replication, and for evaluation in general. It's just that in certain fields (e.g. psychology), or in circumstances involving human subjects, it's very expensive or even infeasible to run well-controlled, repeated experiments.


If you don't have well-controlled, repeatable experiments (a.k.a. the scientific method), are you actually doing science? I think we should come up with another word for these types of one-off studies.


Many fields advance without solid, well-controlled, repeatable experiments, at least through various stages of development: cosmology, geology, most of medicine, philosophy, the science of mind, etc. You don't consider those all "science"?

There's more to science than Bacon.


Those fields have scientific and non-scientific components. The non-scientific parts are not science.


Your statement is tautological. You might want to read up on epistemology and the philosophy of science -- I would start with Karl Popper. As I said, Bacon is not the definition of science.

Bizarrely, my unremarkable comment was downvoted!


> Your statement is tautological

You want to label fields that have both scientific components and non-scientific components (in varying proportions) as "science." The point of my statement is that some of those fields have scientific aspects, and there may be sub-fields I'd categorize as "science," but I wouldn't categorize the field as a whole as "science."


I believe you were downvoted because you were factually inaccurate about the presence of scientific studies in medicine especially. Controlled, double-blind, repeatable (from a population standpoint) studies are something medicine is good at. Note that there are counter-examples of medical studies which were poorly conducted, but realize that they are the exception not the rule.


What does this do that nano doesn't?


The only things I saw, unless nano already does them:

* Execute commands (edit: to manage the editor settings).

* Support for shortcuts we're all very accustomed to in other software: Ctrl+S to save, Ctrl+C to copy, Ctrl+V to paste, etc.

* Mouse Support

Nano serves its purpose, but sometimes I need something that feels "modern", so I use TextAdept (it works in the terminal as well). I might give Micro a try though.


In addition to those features, micro also supports colors better. You can make colorschemes for it so that syntax highlighting is consistent across filetypes, and it supports 256-color and true color, while nano only supports 16 colors.

Micro can also easily interface with the system clipboard (Ctrl-c, Ctrl-x, Ctrl-v).


To be fair, the author should stick "Great" in front of each region.


A similar quote by Michael Jordan: “I've missed more than 9000 shots in my career. I've lost almost 300 games. 26 times, I've been trusted to take the game winning shot and missed. I've failed over and over and over again in my life. And that is why I succeed.”


> I've failed over and over and over again in my life. And that is why I succeed.

Which is also true for the worst players in NBA history, with the exception of the causality and the success.


If you quit you don't have a chance to get better.


I don't mean this as an insult, but this is the first tool (that I know of) intentionally designed to make people stupider.


>Personally, I think the Go community is a little unhealthily obsessed with this particular metric.

Let's not forget that Go was invented to solve Google's problems, one of which was that it took hours to compile their C/C++ code. Despite the improvements, Go 1.7's compile time is still about 2x that of Go 1.4.


C was already taking hours to compile in 2000, whereas languages with modules from the Algol family would compile in just a few minutes.

What I find positive about Go is that it makes younger generations who only know C and C++ rediscover the compile times we had with native compilation during the 90s.


Yes, in the mid-nineties one of Delphi's killer features was super fast compiles. That was partly possible due to a single-pass compiler, but it was also just a very fast language to compile.

Really, most compiled languages except C++ are fast to compile. In the other thread I pointed out that Java compiles very fast, but nobody in the Go community likes to talk about that, it seems. Actually, Java does have multi-tiered compilation: the frontend is very fast as it doesn't optimise, and then the JIT has a fast-but-low-quality compiler and a slower-but-higher-quality compiler, meaning you get the developer benefit of instant turnaround while the code still gets heavily optimised to GCC/LLVM quality in the hot spots.


I agree that the Java compiler itself runs fast, but many of the surrounding tools in the Java ecosystem are not fast. For example, Maven takes several minutes to rebuild Hadoop. Based on my experience, neither Gradle nor Ant is that much faster than Maven (although they are a bit faster).

I disagree that "most compiled languages... are fast to compile." Scala is relatively slow to compile (I've seen Spark take over an hour to build). OCaml and SML are none too speedy to compile. Rust's compiler is rather slow (they're going to focus on speeding it up, I have heard.) It seems like most compiled languages that people actually use are slow to compile.

Go is a big exception due to its conscious decision to focus on fast compile times. This choice wasn't free-- it involved carefully considering tradeoffs. For example, Go compiles source code files unit-by-unit, forgoing global optimizations in the name of time. The grammar of the language is simpler, in order to enable more efficient parsing. Features like operator overloading are not present. It is trivial to tell what is a function call and what is not-- there is no "syntactic sugar" like Ruby's function calls without parens.
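A tiny illustrative snippet (mine, not from the thread) of that last point: in Go a call always carries parentheses, so the parser can tell a call apart from a bare identifier without any extra context.

    package main

    import "fmt"

    func answer() int { return 42 }

    func main() {
        f := answer           // no parens: a function value, not a call
        fmt.Println(f())      // parens: unambiguously a call
        fmt.Println(answer()) // same for the named function
    }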


Hadoop is 2 million lines of code. Yes, that's going to take a few minutes to compile no matter what language it's written in (if it has a compile stage at all).

By "most compiled languages" I guess I was thinking of the most popular ones, not literally all compiled languages. Sloppy phrasing, sorry. The top compiled languages according to TIOBE are Java, C, C++, C# (actually these are the top languages in general). Then they're all scripting languages until position 11 which is Delphi. So those are the top 5 and except for C/C++ they compile fast.

OCaml, SML, Haskell, Rust etc are all rather rare languages.


I don't agree that 2 million lines of code is "going to take a few minutes to compile no matter what [compiled] language it's written in." Based on my experience with Go so far, I believe Go could do that significantly faster, probably in under a minute. Also, the "several minutes" number I quoted is for incremental compiles where only one very small thing has changed, not just for full compiles, which take even longer.

TIOBE is a poor measure of programming language popularity for a lot of reasons. But even if we accept that Java, C, C++, and C# are the top languages, I already commented that Java's build is not that fast in the real world (except for very small projects). I don't have any first-hand experience with C#, but I've never heard anyone claim fast compilation as an advantage of that language. C and C++ are slow builders, as we already know. So the top compiled languages are rather slow to build, which is one reason why scripting languages (I assume you mean non-ahead-of-time-compiled languages) became popular in the 2000s. This is one part of what Rob Pike was talking about when he said "poorly designed static type systems drive people to dynamic typing."


I would be skeptical of that, but I wasn't able to find LOC/sec stats for the Go compiler(s) anywhere. If you can point me to LOC/sec performance figures, that'd be interesting.

Gradle/Maven don't do incremental compilation at the level of the file, so yes, they're gonna redo the whole thing each time. Use an IDE for that. IntelliJ will happily do incremental compilation in Java projects at the level of the individual file (and there's no relinking overhead).

I understand that you're saying the top languages are slow builders, but it's just not the case. It sounds like you've used tools which simply aren't optimised for build times and are judging the languages based on that. But when working with Java I usually have a roundtrip time of about a second between editing my code and running the recompiled program.


I have used both Eclipse and Intellij for building Hadoop, and neither could get incremental compilation under 30 seconds. In fact, it was often much more than that. Eclipse was a huge pain to use with Hadoop because it was always getting something wrong-- many times when you updated the project dependencies, Eclipse would essentially break and need manual fixing. Even loading up the project in either IDE takes at least a solid minute, with an SSD, Intel i7, and 16 GB of RAM.

Prior to using Java, I used C++ for 10 years. Every project I worked on had compile time problems. It was an open secret that we spent most of our time waiting for compiles.

It's very common for people working on small projects to feel good about their compile times. But sometimes projects grow up, and they outgrow the language and the tools they were designed in.


> TIOBE is a poor measure of programming language popularity for a lot of reasons.

And yet, it is completely consistent with pretty much all other measurements of programming language popularity (github, job postings, StackOverflow answers, etc...).

So maybe it's not that poor a measure after all.


According to TIOBE, Groovy rose from 0.33% to 1.8% in the last two months, and from 0.11% to 1.8% in the last 12 months. Click on "Groovy" from the main page and you'll get the graph at http://www.tiobe.com/tiobe_index?page=Groovy . Now say "maybe TIOBE's not that poor a measure of programming language popularity" with a straight face. These stats are gamed by language backers to make their language look good when selling consulting services and tickets to conferences, convincing programmers to contribute free work to their product, etc.


TIL that there are still people who believe that parsing has any impact on compilation speed.


Do you know Walter Bright?

The guy that created the first C++ native compiler and one of the D authors?

http://www.drdobbs.com/cpp/c-compilation-speed/228701711


At its slowest point, this compiler took single-digit seconds for its biggest workloads. So: I don't see how Google's C++ compile times are relevant.


> Let's not forget that Go was invented to solve Google's problems

No, it was invented to solve the problem that a handful of Google engineers experienced while writing C++ at Google.

Most of Google's code base is written in Java and, as such, doesn't suffer from slow compilation.

