However, the project tagline on GitHub (but not in the docs), which is but one click away, states "A rugged, minimal framework for composing JavaScript behavior in your markup."
Modules that are part of your build, tests, and npm scripts should be installed locally, so that any contributor can run npm install (or yarn) and develop your project without any further setup — and they'll also get the correct version of each module.
Global installs are for modules that live outside your project's dev cycle. With that in mind, a module such as nodemon can be installed both globally and locally; it depends on the use case.
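For example, a tool like nodemon that your npm scripts rely on would go in devDependencies, so npm install pins it per project (the version below is illustrative):

```json
{
  "devDependencies": {
    "nodemon": "^3.0.0"
  },
  "scripts": {
    "dev": "nodemon server.js"
  }
}
```

npm run dev resolves the local binary from node_modules/.bin, so no global install is needed; a global install only makes sense for tools you use across many projects.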
Sounds interesting. I'll report on my results here once I have uploaded my otherwise end-to-end encrypted private and personal conversations into this unknown script...
It gets better. Soon it will imitate you without you uploading your conversations, because enough people around you did it (making themselves guilty of private conversation disclosure — at least 6 months imprisonment in France).
It’s not that privacy invasion is an endless pursuit; it’s more that governments enjoy it so much that they don’t really work on preventing it.
I do event sourcing at my company, and we had a look at this when we started out 4 years ago.
What I don't understand is why build a database? Why not build something in the application layer that uses your fav database as a storage mechanism instead?
Aren't the existing databases more mature and better to use?
There are a lot of non-trivial aspects to scaling event sourcing systems. Particularly if you want atomicity or asynchronous two-phase committing. Having implemented them ~5 different ways for different services at my startup, I’d definitely welcome a reliable DB-level abstraction.
Also, putting that abstraction in the app layer allows developers to break it — either by accident, or as a temporary hack that never gets fixed, or as a temporary hack with unforeseen side effects.
Sorry for my inexperience with this kind of implementation, but wouldn't a SQL transaction achieve something similar? And with table partitioning (at least on Postgres) you should be able to go quite far, instead of needing a new db (and a lot of related stuff to study).
No, because in a distributed system you can't rely on SQL transactions. For example, if you need to make one API request to start the transaction and a second API request to commit or roll it back, you can't hold the SQL transaction open between those.
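The usual app-layer workaround is optimistic concurrency: each append is its own short write, guarded by an expected stream version, instead of a long-lived transaction. A minimal in-memory sketch (class and method names here are illustrative, not any real library's API):

```javascript
// Each stream is an ordered list of events. A writer reads the stream,
// computes its change, then appends with the version it last saw.
// If another writer got there first, the append fails and the caller
// reloads and retries — no transaction is ever held open across calls.
class EventStore {
  constructor() {
    this.streams = new Map(); // streamId -> array of events
  }

  append(streamId, expectedVersion, event) {
    const stream = this.streams.get(streamId) || [];
    if (stream.length !== expectedVersion) {
      throw new Error('version conflict: reload the stream and retry');
    }
    this.streams.set(streamId, [...stream, event]);
    return stream.length + 1; // the new version
  }

  read(streamId) {
    return this.streams.get(streamId) || [];
  }
}
```

In a SQL store, the same guard is typically a unique constraint on (stream_id, version), so each append is one short insert rather than a transaction spanning two API requests.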
If you need a total order across the whole system, then that will always be your bottleneck, although you can make it very fast by scoping it to just a sequence-number generator and doing the actual work in separate processes.
Otherwise, most event sourcing uses different "streams" of events for different application functions, so you can shard by stream in whatever way works for you.
You could shard based on uuid, so that each shard has its set of objects that it manages.
The easiest way would be to cast the uuid to a 64-bit unsigned int, then mod by the number of shards. If the number of shards is dynamic, use consistent hashing instead.
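That scheme can be sketched in a few lines — this takes the high 64 bits of the uuid and mods by a fixed shard count (for a dynamic shard count you'd swap this for a consistent-hashing ring):

```javascript
// Map a uuid string to a shard index in [0, numShards).
// Uses the first 16 hex digits (64 bits) of the uuid; BigInt avoids
// precision loss, since 64-bit values overflow a JS Number.
function shardForUuid(uuid, numShards) {
  const hex = uuid.replace(/-/g, '').slice(0, 16);
  const high64 = BigInt('0x' + hex);
  return Number(high64 % BigInt(numShards));
}
```

The mapping is deterministic, so every node routes a given object's events to the same shard — but note that changing numShards reshuffles nearly all keys, which is exactly the problem consistent hashing solves.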
I could not throw anything together of this quality. If I were to create a product by myself, using one of the ones available would certainly be a step up.
I cannot respect a TDD guide that is dependent on your editor. Doing TDD is about code structure and implementation decisions imho. If you need to check some box in a dialog window of a certain editor in order to run your tests, your test runner isn't good enough.
It's not dependent upon my editor. Maybe I should move this up higher in the readme: https://github.com/zpratt/react-tdd-guide#running-the-tests-... . All I was adding for Webstorm was a note for the convenience of people who choose to use it, which I might be able to move into a mocha.opts.