Hacker News | dmbass's comments

The sentiment is that goods should be produced sustainably instead of unsustainably or not at all. Obviously consumers gonna consume whatever is available to them (that's why they are called consumers).


But consumers are not consuming whatever is available to them. You don't create a product without knowing that somebody would want to buy it. Consumers do make choices. Perhaps some kind of labeling for how environmentally friendly a product and its packaging are? Kind of like calorie content labeling. E.g., some estimate of X amount of carbon to produce, package, and transport the goods, based on country of origin, transport method, raw materials, and packaging.
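As a toy illustration of that labeling idea, it could look something like the sketch below. Every factor name and number here is a made-up placeholder, not real emissions data; an actual label would need audited figures per material, route, and process.

```javascript
// Hypothetical carbon-label estimator. All factor values are
// illustrative placeholders, NOT real emissions figures.
const kgCO2PerKg = { aluminum: 9.0, plastic: 3.5, cardboard: 0.8 }; // per kg of material
const kgCO2PerKgKm = { ship: 0.00002, truck: 0.0001, air: 0.0006 }; // per kg per km moved

function estimateCarbonLabel({ materialsKg, transportMode, distanceKm }) {
  let total = 0;
  // Production/packaging contribution, summed per material.
  for (const [material, kg] of Object.entries(materialsKg)) {
    total += (kgCO2PerKg[material] ?? 0) * kg;
  }
  // Transport contribution: total shipped mass times distance.
  const totalKg = Object.values(materialsKg).reduce((a, b) => a + b, 0);
  total += (kgCO2PerKgKm[transportMode] ?? 0) * totalKg * distanceKm;
  return total; // estimated kg CO2e for the product as shipped
}
```

The point isn't the numbers, which are invented; it's that the inputs the comment lists (materials, origin distance, transport mode, packaging) are enough to print a single comparable figure on a label.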

If you try to force producers to take all the burden and blame (like many people are) then you'll just get lied to. People are generally unwilling to accept responsibility for these kinds of things without there being overwhelming evidence.


puppeteer only supports Chrome, so you will need to use Slimer if you wish to use Firefox.


Correct for now. However...

Firefox did start investigating supporting the Chrome DevTools Protocol: https://groups.google.com/d/topic/mozilla.dev.platform/4-4A8...

We (Chrome DevTools/Puppeteer team) have had detailed discussions with their DevTools team on this, recommending what subset would enable Puppeteer. I don't know where they are with this, but in the long-term, this solution looks most promising.


Firefox/Mozilla has been doing a great job with cross-browser initiatives like the WebExtension APIs, and dipping their toes into CDP. I am STOKED to hear you guys have been having cross-team conversations on these details!

Shameless fanboy moment: have followed your stuff for years, and I absolutely love the products your team(s) put out (CDP/Puppeteer)!


Aww thanks amigo! Thanks v much. Plz holler my way if there's anything we can do better.

The devtools teams (Chrome, FF, Safari, and Edge) actually talk quite a bit. We collaborate on the console spec [0] and keep each other in check on feature parity. ;)

[0] https://github.com/whatwg/console


It would be immeasurably useful for Firefox to have support for Chrome DevTools Protocol (or have something with the same functionality).

Anyone from Mozilla know where to follow the work for this?



100% honest and non-hostile question - why would you want to use Firefox?


That question could be easily turned around: why would you want to use Chrome?

Personally, if I were to set up automated testing, I'd like to do it with both Chrome and FF, especially given that FF seems to be making a comeback lately. Another reason might be that I've never had anything but problems building Chromium from source, but have successfully built Firefox. So in situations where you want to know with certainty that your binary is derived from the published source code, Firefox wins IMO.


> why would you want to use Chrome?

Chrome as a platform has the majority of browser market share and continues to be the top contender; I don't think anyone's going to knock them off their high horse anytime soon. Targeting the most common platform for application deployment is common sense.

> especially given that FF seems to be making a comeback lately

FF is stagnant as far as market share goes, and has been trending down for at least 2-3 years. Currently it's around 5-6% and has been for a while. Saying FF is making a comeback is disingenuous to the actual stats.

> Another reason might be that I've never had anything but problems building Chromium from source

Totally understand this one - I've found my roommate's Gentoo laptop in the fridge doing a full Chromium build more than a few times.

All-in-all I totally get where you're coming from, Chrome and FF are my preferred automation/testing platforms for sure =)


>Targeting the most common platform for application deployment is common sense.

If I recall, people used to say something very similar about Internet Explorer. Of course Chrome isn't as awful as IE back then but still...

IMO it makes the most sense to support two independent implementations, especially for browsers.

>Currently it's around 5-6% and has been for a while. Saying FF is making a comeback is disingenuous to the actual stats.

For desktop, FF has about 10% market share. The 5-6% figure includes iOS and Android; the latter comes with a preinstalled Chrome and little incentive to switch...

With the upcoming ESR 60 release, I also expect (and have heard) that a lot of companies will switch to Firefox, at least in the German market (where market share increased from December to January). Some Linux distros might switch over from the previous ESR release, and a lot of users could potentially reevaluate.

>I've found my roommate's Gentoo laptop in the fridge doing a full Chromium build more than a few times.

Chromium is not Chrome; it's Chrome with all the Google binaries ripped out. Chrome itself is, to my knowledge, not open source; only Chromium is.


Linux has ~1% market share, though, and probably already has a pretty large proportion of Firefox users. I doubt Linux will affect anything much.


Linux has 2.3% market share (counting ChromeOS, and not accounting for users blocking scripts, which is also disproportionately common on Linux).


It may be disproportionately common, but it's still small enough not to have a useful effect on such statistics.


Do you know this wouldn't have an effect or do you guess it wouldn't?

In the end, supporting Chrome is simply supporting another Internet Explorer era. Maybe not as bad as IE, but it would be very similar.

If you or your company only develop for Chrome, you're no better than any of the websites that proudly state they will only work under IE6. End of story.


> In the end, supporting Chrome is simply supporting another Internet Explorer era. Maybe not as bad as IE, but it would be very similar.

IMO - it is foolish to put Chrome and IE in the same box.

> If you or your company only develop for Chrome, you're no better than any of the websites that proudly state they will only work under IE6. End of story.

I don't think anyone is going to make an argument to only support Chrome; there's clearly other browser tech out there that we as devs must support. I will, however, say that you should prioritize supporting the most popular browsing platforms (browser, screen, device, OS, etc.) when building applications and/or during testing.

Since you're touting Linux utilization - I will say that I better have a helluva testing budget (time and money) if I'm going to even touch on the long-tail platform combos (anything < 5%). The only reason my apps are heavily tested in Linux is because they're developed on a Debian Jessie box =D


It allows the stock owners to liquidate in the public market.


I don't see how you can be against subscription pricing for living software. You're paying the publisher/developers to continue developing and supporting the software and run associated services. It's the only sustainable model for long lived software and 1Password is good enough to be worth it.


In this case, the subscription service doesn't allow for local vaults. You need to be using the 1Password hosted cloud service to store your passwords. For most of us that is a nonstarter.


This isn't true. Open up Preferences, go to the Advanced tab, and there's a checkbox "Allow creation of vaults outside of 1Password accounts". Check this and you can create local vaults again.

I believe it's disabled by default for 2 reasons:

1. Simplicity. Most people who use the subscription service don't want local vaults, because they won't be synced or available on the web (or subject to administrative controls). By disabling local vaults, you can't accidentally create one.

2. Once you re-enable local vaults, you have a local Primary vault and unlocking that is how you unlock 1Password on this computer. This is a potential point of confusion, because you'll be using a different password now than your 1Password Account password, which comes back to the simplicity angle.


If you already paid a substantially higher price when they were non-subscription based, it can rankle. Especially if the amount paid would have covered well past this point if subscription pricing was available at the time.

I don't think people are complaining that it's a subscription so much as they are complaining that the switch seems to leave them with only semi-supported software relatively soon after a full-price purchase (even if that may or may not be an accurate assessment of reality).


They are complaining about both, which means they might have a good argument but it's infected with their bad argument.

Can't have your cake and eat it, too. But I suspect they also just don't want to pay a subscription. Well, who does? I also don't like having to pay for coffee.


There is nothing wrong with full-price software. It fully works; it just needs to be clearly outlined how long your support lifecycle is, which many companies don't think about. Pay full price, get X years of bugfixes (and access to new versions, depending) from date of purchase.

There are three types of model, really, and the problem is when companies have multiple models in place and don't treat them equivalently.

Pay in advance - This is like buying a car. You get a warranty for certain covered problems. Costs a lot.

Pay subscription for local software - This is like leasing a car. You pay a lesser amount each month, but have full usage of the vehicle. You lose access at the end of the lease.

Pay subscription for cloud based software - This is like using ride-sharing (or maybe a community shared vehicle system) for everything. You have access to the software on demand when you need it, and lose access when no longer a member / you stop using it.

All these models work. The problem with some full price software is they were selling it like it had a warranty and not defining the warranty. Expectations are all over the place, and when they don't match reality, people get upset.

Additionally, in this case people are upset about how there's two models in place and they don't appear to be handled the same, as they had some expectation of a type of warranty with full price software, and all of a sudden it feels like the company has changed how they are handling warranty requests for already sold full price software.

As a company, communicate clearly and follow through with what you've stated, and these problems go away.


I don't want any associated services - Dropbox will host and sync the password wallet files for free. Their other service is downloading icons for websites, but I don't want to send them a list of all my websites.

Yes, there is the occasional need to update Dropbox API code, or Chrome/Android API changes, but their asking price is very steep.

I already paid over $60 for the Android and OS X versions.


So what's wrong with charging for major upgrades like everyone else used to do? Rather than locking you into a greedy subscription forever.


For 1PW it might be ideal to push updates and not charge for them: for a security/tool provider, out-of-date software could be compromised or have issues, which might damage their public image.


I'd actually be fine paying a subscription for 1Password, and I do pay a subscription for plenty of other software I use.

I just want the ability to keep using vaults where I control the syncing, vs using the Agilebits-hosted vaults.


A negative review from him would probably be pretty damning and deter lots of techies who are still trying to buy the phone.


Taking it back for the people.


Didi already has a non-compete pact with Uber (they acquired Uber's Chinese arm from Uber).


No - this only applies to Uber not operating in China. Didi can still compete with Uber in any other country.


from the article:

> Our code is completely open, but piping to bash can be dangerous. For a safer install, review the code and then run the installer locally.


The compounds in this medicine are public knowledge, but taking them could be dangerous. For a safer experience, review all medical literature pertaining to these compounds before consuming.


Not really the same. One of the main issues with curl pipes is that the server (or MITM) can detect that the request goes into a pipe.

This allows an attacker to display one (safe) source when you view it in your browser on your workstation, or wget it, and serve a different (nefarious) source when you curl/pipe it.

So, a more complete analogy would be: a bottle that gives you a safe chemical compound when you extract it for analysis, but throws in some VX when you go to administer it.


How can you detect if the output is curl/piped?



https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...

Summary: Fill your script with an invisible payload that fills any buffers, and put something time consuming (say `sleep 5`) early in your script in order to detect that the script is being executed directly rather than just stored to disk. If the client halts before having read all data, it is likely a `curl | bash` scenario. If it just keeps reading, it's a regular browser just downloading.
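The payload side of that trick could be sketched roughly as below. The function name, the 64 KiB figure, and the padding layout are assumptions for illustration; real pipe-buffer sizes vary by OS, and the linked write-up covers the server-side timing half.

```javascript
// Sketch: build a script whose first executed command stalls the
// consumer, preceded by no-op shell comments sized to fill the
// client's pipe buffer. While the sleep runs, the server watches
// whether the client keeps reading (a download) or stalls/closes
// (likely `curl | bash`). All sizes here are illustrative.
function buildDetectionScript(realScript, bufferSize = 64 * 1024) {
  const sentinel = 'sleep 5\n'; // executed first when piped to a shell
  const line = '#' + 'x'.repeat(126) + '\n'; // 128-byte no-op comment line
  const padding = line.repeat(Math.ceil(bufferSize / line.length));
  return sentinel + padding + realScript;
}

const script = buildDetectionScript('echo hello\n');
console.log(script.startsWith('sleep 5')); // true
console.log(script.length > 64 * 1024);    // true
```

The real payload ends up past whatever the piping client has buffered, so the server can still swap it out after deciding the request looks like a pipe.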


I would hazard a guess that curl won't send the standard request headers that browsers would.


I actually do just that whenever I decide to self-medicate with a new drug. Were you being facetious?


Instead of writing that, they should first use cURL, and then sh, without any piping. See http://unix.stackexchange.com/a/339276

That way, it is the same as running cURL without piping the output to bash, so people can easily check the code without worrying whether the server is sending them different code when they pipe to bash.


I feel like they should state this first before giving the command. I had to scroll down the page to see this warning.

Anyway, if you decide to live on the edge.. don't copy-paste: http://thejh.net/misc/website-terminal-copy-paste


I think a lot of the "complexity" of web development comes from the fact that most of us are just YOLOing to get something that "mostly works" out the door. For the most part, lives don't depend on our systems and the standard for performance is much lower than something like a video game or operating system thanks to the legacy of dialup and ads.


To be frank, you have no idea what "most" people are doing; you only know your small sphere of experience. For example, I've worked on several numerical simulation systems where the results were used for safety ratings of different industrial processes. This was all on the Web, and it honestly did fit for the Web.

The market for all types of software is huge, and we only ever hear from a very small section of it. It's why I don't take the concept of programming-language-oriented "communities" too seriously. There is a huge body of dark-matter programmers out there--probably 90% in any particular language--who do not participate in the meetups and conferences and discussion forums. They just work and then go home at the end of the day to do something completely unrelated to programming.


Babel and webpack are megaoverkill for this and account for a good chunk of the 2k gzipped size. The actual library code is barely 140 SLOC. There's a lot of room for improvement if this is intended to be a real standalone library (vs. a webpack/babel test).


I've found rollup, https://rollupjs.org/, to be really useful for this.


This looks nice. Is it as battle-tested as browserify?


It's fairly new but The Guardian uses it for all their JavaScript (the creator works there).


tl;dr—don't use rollup with large untested dependencies

I've had projects that got really strange error messages when using rollup that completely went away when I switched back to Babel.

One "issue" with rollup is that it is not 100% semantically correct. Neither is Babel in all cases, but the creator argues that if you're not following exact semantics now, you're relying on your tests to ensure proper behavior, so use something that is at least more minimal and keeps your bundle size down.

So when you're writing code and bundling with rollup, you can pretty much ensure everything works fine, but as soon as you pull in an extensive third-party library you have no assurances that it has been tested with rollup and will work correctly in all cases. In the worst case, it will seem to work fine but in weird situations will actually error out. This was my experience with rollup.


I should clarify on my above comment, in rollup the semantics don't matter as much but if you're using Bublé instead of Babel, the semantics may very well come into play. In either case, I didn't have luck on a project until I moved fully to webpack+Babel.


Webpack's runtime overhead is tiny; Babel's varies based on what polyfills you need, but very little is included here.

Most of the code here is actually from lodash due to the use of `throttle`, not either of the projects you mentioned.


I believe Waypoints also does this same thing, it's a fairly mature lib. https://github.com/imakewebthings/waypoints

kudos for in-view's demo page though


Why is Babel overkill? It lets you write your code in ES6, which is a huge benefit for many reasons.


The fact that it's delivering 140 lines in 5.5kb. That's a lot of bloat! There are ways to write code in ES6 without that (see sibling responses).


Hey, author here. I'm very open to ideas to reduce bloat. It's 140 SLOC and ~1.1kb gzipped without lodash/throttle. I considered writing my own throttle, but wanted something more battle-tested.

Does Rollup produce a more efficient bundle in your experience?


Not sure why I was downvoted, I think it's a valid point. To prove it, here's the functionality of your library in ES5 with a naive throttle function: https://gist.github.com/nathancahill/f7ea239306737f2075a94de...

Minified (1.49kb) and gzipped (677b).

Whether lodash functions should be used instead of naive functions is up for debate. My opinion is if the lodash functions are 5x the size of the entire library, it's probably best to not include it, or to include it as a build option (if the user's project already includes lodash for example).
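For reference, the sort of naive leading-edge throttle being debated here fits in a few lines. This is a sketch, not lodash's implementation (which also supports trailing invocations, `cancel()`, and so on) and not the exact contents of the gist above:

```javascript
// Minimal leading-edge throttle: invoke fn at most once per `wait` ms,
// silently dropping calls that arrive inside the window.
function throttle(fn, wait) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn.apply(this, args);
    }
  };
}

// Rapid back-to-back calls collapse into a single invocation:
let count = 0;
const bump = throttle(() => { count += 1; }, 100);
bump(); bump(); bump();
console.log(count); // 1
```

Whether the missing trailing-edge call matters depends on the use case; for scroll handlers like in-view's, dropping it can mean missing the final resting position.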


No worries, lodash/debounce can be reduced further with babel+webpack plugins.

Assuming the current setup of babel+webpack, the difference from your naive version is just ~0.5 kB.


Which still doubles the size of the library: ~0.6kb to ~1.1kb. So while it's just half a kilobyte, do that over and over, with nested dependencies and it really adds up.



Can you point me to an explanation of the differences between the 'lodash.throttle' module and importing 'lodash/throttle'?


lodash.throttle is a standalone zero-dependency package of just the throttle module. The `lodash` package is a collection of modules, one of which is `lodash/throttle`.

You can generally get smaller bundles using `lodash/xyz` modules over the `lodash.xyz` packages because of plugins like:

https://github.com/lodash/babel-plugin-lodash

https://github.com/lodash/lodash-webpack-plugin

Though in the future lodash-webpack-plugin may support `lodash.xyz` packages too.


You should get about the same results using a bundler. Perhaps the lodash.throttle package is for reducing `npm install` time or reducing bloat if you check in your node modules?


OP is using that already.


rollup can load only the lodash functions you actually use into your bundle.


So can lodash. That's what the author is doing here by using

  import throttle from 'lodash/throttle';
rather than

  import { throttle } from 'lodash';


The only sibling response is rollupjs, which replaces webpack with ES6 import syntax. You still need Babel for the rest of the ES6 syntax.


> You still need Babel for the rest of the ES6 syntax.

Or Bublé[1] ;) which I've found to be much faster than Babel, even for relatively small codebases (~a couple thousand LoC).

[1]: https://gitlab.com/Rich-Harris/buble


But if your entire project uses Babel, those 140 lines will already be in your code and thus won't be added.

