
The author said almost always; nothing is absolute in cases like this. I agree with the article. We enable swap on our production servers precisely because the engineers don't rectify their memory leaks or whatever problems are internal to their software. Some of those engineers are our own, and we can't convince them to fix their software. Other problems are in things we have no control over, such as Kafka. When Kafka starts going out of control and you've got no swap, the OOM killer will kill it and you'll end up with corrupted index files, which is a bigger problem than the performance hit you'll take if it has to swap for a while until it sorts itself out.
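
For the curious, a minimal sketch of how we set that up (the size and the swappiness value are illustrative; tune them for your workload):

    # create a modest swap file and bias the kernel against using it
    # until there is real memory pressure
    fallocate -l 4G /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    # low swappiness: swap as a safety net, not a working-set extension
    sysctl vm.swappiness=10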


Awesome, thanks! I was under the false impression that mxtoolbox.com was trustworthy.


In 10 years flash will be dead and we'll be using memristors.


I read that ten years ago, too. Not that you're wrong about this particular ten years, but damn, have I been patient.


Ain't that the truth.

6.6 years ago, here on HN at https://news.ycombinator.com/item?id=177865 , was a link to "Scientists Create First Memristor: Missing Fourth Electronic Circuit Element" at Wired. User rms said: "I don't think we'll have any keeping up with Moore's Law. In 5 years memristor storage will be everywhere. IBM will develop memristor processors for the Blue Brain project." User TrevorJ said: "I fear that the huge inertia that is the software and hardware industry ... will keep this out of mainstream for 5-8 years."

In 2010 Engadget (at http://www.engadget.com/2010/08/31/hp-labs-teams-up-with-hyn... ) described a collaboration between HP Labs and Hynix. "Williams hopes to see the [memristor] transistors in consumer products by this time 2013, for approximately the price of what flash memory will be selling for at the time but with "at least twice the bit capacity.""

If anything, the optimism has become more pessimistic, as the future horizon lengthened from 5 years to 10. :)


In case you hadn't noticed, the authenticity model of SSL was an afterthought and is completely broken. I recommend listening to Moxie's talk about this:

https://www.youtube.com/watch?v=pDmj_xe7EIQ

http://www.thoughtcrime.org/blog/ssl-and-the-future-of-authe...


Everyone has noticed. This cannot be solved overnight and rolled out to the entire planet. Stop beating this dead horse.


I don't think that's very relevant for this use case, though - JMAP is clearly intended for use by custom clients, not browsers, and those can use SSL with a completely different model from the CA scheme, including bundling certs for the most popular providers (similar to HSTS preload lists).
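
As a rough sketch of what the bundling side could look like, a client vendor might collect fingerprints to pin like this (the hostname is made up):

    # record a provider's certificate fingerprint for bundling into a client
    openssl s_client -connect mail.example.com:443 -servername mail.example.com \
        </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256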


You don't need to trust the keyservers since you verify the fingerprints of downloaded keys against the fingerprints given to you in person.

The keyserver protocol includes commands for submitting keys to the keyserver network. How you send keys depends on the UI of the tool you use; geeks will do this:

    gpg --keyserver <keyserver> --send-keys <fingerprint>
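
And the receiving side, as a sketch:

    # fetch the key, then compare the printed fingerprint against the
    # one you were given in person
    gpg --keyserver <keyserver> --recv-keys <fingerprint>
    gpg --fingerprint <fingerprint>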


Right. I'm a little rusty on the details. My point stands - key distribution is not a solved problem for the average user.


It is solved for some specific platforms, GNOME for example:

https://mail.gnome.org/archives/gnome-announce-list/2014-Nov...

The average user doesn't use GNOME or Linux though :(


Committing generated code is just a recipe for badness. I never commit the output of cpp/yacc/etc. to git; why would I do that for Go stuff?


You should definitely commit the output of yacc to git. That way, not only do you not require your users to have yacc, you also make sure they are using the correct yacc version; you know, the one you tested against. And when you change the yacc version, you'll know, because it's in your history; otherwise it's hard to remember at which point you decided to upgrade the yacc version used by the project.

Committing cpp output makes no sense because cpp output is specific to the machine which ran cpp.
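
As a sketch of the workflow I mean (file names are made up):

    # regenerate the parser and commit the output next to the grammar,
    # so the exact version you tested is the one that ships
    yacc -d parser.y        # writes y.tab.c and y.tab.h
    mv y.tab.c parser.c
    git add parser.y parser.c
    git commit -m "Regenerate parser with yacc X.Y"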


Ick...


There was a more productive thread on a related topic on debian-devel recently:

https://lists.debian.org/20141213135648.GA14679@fishbowl.rw....

Basically, it isn't possible to check at runtime whether the next boot will require an fsck, so the developer filed a bug asking for a mechanism to do that:

https://bugs.debian.org/773267
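
The closest approximation I know of is inspecting the ext* counters that feed into that decision, though as the bug explains this doesn't tell you the whole story (the device name is illustrative):

    # mount count, maximum mount count and check interval all influence
    # whether e2fsck will run at the next boot
    tune2fs -l /dev/sda1 | grep -iE 'mount count|check'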


What are you using for Perl and JavaScript?


I'm still working on it, but Perl::Critic is good (I've been using it in vim for a while, which checks every time you save). strict and warnings are good (they would be enabled in any new Perl program, but I'm working with a 15-year-old codebase, so enabling them automatically creates a bunch of new failing "tests" that need fixing). There are a few different ways to use Perl::Critic in tests; the most promising seems to be Test::Perl::Critic::Progressive, which allows gradual introduction of Critic-friendly code without turning your whole smoker page red. (Which, for me, is a problem: we'd have hundreds of failing tests, which would be overwhelming and make it hard to spot tests that will affect users.)

Devel::NYTProf is great for spotting performance regressions.
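
For reference, a rough sketch of driving both from the command line (paths are made up):

    # static policy checks at a chosen severity level
    perlcritic --severity 3 lib/
    # profile a run, then render the HTML report
    perl -d:NYTProf script.pl && nytprofhtml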

And, since we're pushing out HTML as the result of our application scripts running, I've been working on integrating HTML::Lint (I had to patch it to allow HTML::Pluggable::HTML5 to work, but I've sent a pull request to Andy, so the next version will likely support HTML5) into our test suite. This will tell us if any script breaks in such a way that it produces invalid HTML.

For JavaScript, jslint and jshint exist. I haven't started integrating or experimenting with either yet. Our JavaScript is pretty minimal at the moment (a few thousand lines vs. 500k+ lines of Perl), so it's not a priority compared to the Perl, and the Perl stuff is still in progress. JavaScript also has strict mode, which can catch some common problems, like "this" unexpectedly referring to the global object.


For spellcheck-like help in Perl, there are perl-critic and perl-tidy.

Beware of using these on old code, though.

http://search.cpan.org/~thaljef/Perl-Critic-1.123/lib/Perl/C...

http://search.cpan.org/~shancock/Perl-Tidy-20140711/lib/Perl...
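
As a hedged example of running both on one file (the path is made up; perltidy's -b flag keeps a .bak backup, which is prudent on old code):

    perltidy -b lib/Foo.pm          # reformat in place, keeping lib/Foo.pm.bak
    perlcritic --gentle lib/Foo.pm  # start with only the least severe policies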

I know nothing on javascript.


For javascript, eslint (http://eslint.org/) is now what I use. It's more configurable than what has come before it and has good, clear documentation.
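
As a quick sketch of what I mean by configurable (the file name is hypothetical):

    # skip config-file lookup and pass a rule directly on the command line
    eslint --no-eslintrc --env browser --rule '{"no-undef": 2}' app.js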


For JavaScript, I like jshint (http://jshint.com), as I heard jslint was too opinionated.


Some ideas for the future:

Add support for the Firehose XML output format:

https://github.com/fedora-static-analysis/firehose

Checks for features that are specific to one shell (bashisms etc.) when the script is declared with POSIX #!/bin/sh; a made-up example of what this could flag follows at the end of this comment.

Checks for features that aren't implemented in the desired shell (for running on busybox sh, for example).

Checks for the various issues listed here:

http://mywiki.wooledge.org/BashPitfalls
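
To illustrate the bashism checks, here's a made-up snippet that a POSIX-mode checker ought to flag:

    #!/bin/sh
    # both constructs below are bash-only despite the POSIX shebang
    if [[ -n "$1" ]]; then    # [[ ]] is not POSIX; use [ ] instead
        echo "${1^^}"         # ${var^^} uppercasing is bash 4+
    fi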


Thanks!

ShellCheck already checks for bashisms if the script is declared with #!/bin/sh, and finds most of the ones that checkbashisms does, along with a few others. It's geared towards POSIX compliance, though, and doesn't know which non-POSIX features are available in e.g. busybox sh.

There is also some limited support for warning about features not available across non-POSIX shells, such as trying to use zsh's =(...) temp-file substitution in bash, or bash's ;;& case continuations in zsh, but this isn't as much of a focus as the POSIX checks, since it's not a common source of issues.

The Wooledge pitfall list was the original implementation checklist. About 38 out of the 46 are already covered in one form or another.

I hadn't heard of Firehose before, but if you want to use it right now, you can run shellcheck with -f gcc to get gcc-style error messages that Firehose can already parse.
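
Concretely (the script name is made up):

    # gcc-style one-line diagnostics that other tools can parse easily
    shellcheck -f gcc myscript.sh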


Personally I would just use the Firefox web developer tools rather than mitmproxy, much easier.


To log app requests? Firefox doesn't have a proxy unless I'm mistaken.


There are at least three addons that allow viewing network activity with Firefox:

https://getfirebug.com/

http://chrispederick.com/work/web-developer/

https://addons.mozilla.org/en-US/firefox/addon/live-http-hea...

I haven't tried it but I expect the developer edition does too:

https://www.mozilla.org/firefox/developer/


But is there any way to use those to inspect traffic originating from outside of Firefox? With mitmproxy (or other tools, such as Charles/Fiddler/etc) you can run pretty much anything through them.
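
For example, a rough sketch with environment-proxy-aware tools (port and URL are illustrative):

    # start mitmproxy's non-interactive sibling, then route a client through it
    mitmdump -p 8080 &
    export http_proxy=http://127.0.0.1:8080
    export https_proxy=http://127.0.0.1:8080
    curl -k https://example.com/api   # -k until you trust mitmproxy's CA cert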


I would just use wireshark for that.


And so we're back to the original article. How do you deal with encrypted connections?

