lifepillar's comments

>I'd like to move the cursor backwards and forwards in long commands easier, maybe even with the mouse

In Terminal.app you may alt-click to make the cursor jump to where you’ve clicked. I also use alt-arrows to jump between words: I don’t remember whether that works out of the box, though. In any case, you may configure the relevant key codes in the Keyboard section of the preferences.


>Our technology […] collects opt-in customer behavior data from hundreds of popular websites that offer top display, video platforms, social applications, and mobile marketplaces that allow laser-focused media buying.

To me this seems:

1. Not mobile-specific.

2. Totally plausible.

3. Despicable in many ways. But “opt-in” makes me think of (a) masterfully crafted fine print in some Terms of Service that acknowledges the collection of audio, and (b) that this has nothing to do with a phone mic being maliciously turned on without the user noticing: rather, it is recording from a mic intentionally activated by the user during normal interaction with an app or web site.


I see that MVCC is still your preferred way of doing CC, and the one most academic research focuses on. I am wondering whether that’s an advantage for in-memory databases specifically.

I was once discussing MVCC vs 2PL with an experienced Sybase and SQL Server guy, and he claimed that, when transactions are implemented properly and the database is well-designed (no surrogate keys, in particular), 2PL leads to better performance and no deadlocks, while “readers do not block writers” leads to lots of aborted transactions in a heavy OLTP workload. I verified that (I should still have the code around): lots of conflicts in PostgreSQL vs smooth concurrent execution with no retries in Sybase and SQL Server.
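To make the claim concrete, here is a minimal sketch of the kind of test I mean (not my original code; it assumes psycopg2, and the DSN and accounts table are placeholders). Two serializable transactions read and update overlapping rows concurrently: PostgreSQL may abort one of them with a serialization failure, whereas a 2PL engine would simply make one transaction wait for the other.

    # Sketch: provoke a serialization failure under PostgreSQL's MVCC.
    import threading
    import psycopg2
    import psycopg2.errors

    def transfer(dsn, src, dst):
        conn = psycopg2.connect(dsn)
        conn.set_session(isolation_level="SERIALIZABLE")
        try:
            with conn, conn.cursor() as cur:
                # Read both rows, then write both: the two transactions overlap.
                cur.execute("SELECT balance FROM accounts WHERE id IN (%s, %s)",
                            (src, dst))
                cur.execute("UPDATE accounts SET balance = balance - 1 WHERE id = %s",
                            (src,))
                cur.execute("UPDATE accounts SET balance = balance + 1 WHERE id = %s",
                            (dst,))
        except psycopg2.errors.SerializationFailure:
            print("aborted: serialization failure, the client must retry")
        finally:
            conn.close()

    # Two overlapping transfers in opposite directions.
    t1 = threading.Thread(target=transfer, args=("dbname=test", 1, 2))
    t2 = threading.Thread(target=transfer, args=("dbname=test", 2, 1))
    t1.start(); t2.start(); t1.join(); t2.join()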

I have since heard similar opinions from other SQL Server practitioners: they disable MVCC and rely only on good ol’ 2PL.


See our 2014 paper on evaluating CC protocols on in-memory systems with high contention and high core counts:

https://www.vldb.org/pvldb/vol8/p209-yu.pdf

All the protocols regress to the same performance. This evaluation was only with stored procedures, though. It would be worth doing a similar investigation with conversational DB protocols (e.g., JDBC, ODBC).


Thanks for the reminder… this was a great paper, and I’m curious: why would conversational protocols make any difference? Also, I’m curious whether any such tests have used libraries such as Seastar.


Yeah, it’s better to use a “high-level” language designed for the architecture, such as this: https://github.com/dschmenk/PLASMA


FWIW: really the best example of that kind of thought was OSS's "Action!" language for the Atari computers. Unlike all the modern ideas in this space, this was actually a contemporary product (released in 1983). It's a little like K&R C in terms of semantics, with a bunch of concessions[1] to the limitations of the device.

Alas it came at a time in history where it couldn't make much impact. The Atari platform was being eclipsed by Commodore at the time, and the market for Serious Tool Development in the desktop world was swinging hard toward IBM (Turbo Pascal for MSDOS was arriving right about the same time).

[1] Like a more Pascal-ish syntax. Atari didn't have curly braces on its keyboard! Also IIRC "POINTER" types were limited to being statically linked into the zero page, as that matched the operation of the hardware without requiring the runtime to move stuff around for you every time you wanted to do a load.


> Atari didn't have curly braces on its keyboard!

Isn't that why C also allows the <% and %> digraphs as alternatives?


Not the Atari itself, which never had a C compiler. But systems like that, yeah. Lots of ’70s systems were designed for non-ASCII or ASCII-subset[1] character sets, whereas Unix had made very comfortable use of every funny symbol it could find.

[1] E.g., the original Apple II character ROM had only the 64 characters from 0x20 to 0x5f (no lower case, even) squished into 6 bits of addressing. The keyboard reflected that limitation, as did those from Atari and Commodore. This wasn't rectified by Apple itself until the IIe, though there were 80-column cards on the market from 1980 on that implemented the full character set.


Thanks for this tool! I found it when Apple started enforcing stricter requirements for certificates, and the commands I was using to create certificates at the time had become inadequate. I have since used mkcert to generate dozens of certificates for my local network, which work on any service and device.

The only drawback of mkcert is that it makes you forget the steps needed to make a certificate!


You may create a local account and sync via iCloud. But NNW also supports six online services, and self-hosted FreshRSS. The latter is probably what best answers your question.

Sent from my NNW :-)


I wonder whether people monitor the resources consumed by processes… A couple of years ago I installed Gitea, but it was constantly using 5-10% of a CPU. I switched to Gogs, and it doesn’t waste CPU cycles, so I have stayed with Gogs. My needs are minimal (self-hosting for personal use), so I could probably switch to something even more minimal, but so far Gogs works fine.


You may be interested in this analysis: https://www.scss.tcd.ie/Doug.Leith/pubs/browser_privacy.pdf


I have configured my local Unbound to use four different open DNS providers, round-robin, the rationale being that each one sees only a quarter of my requests. On the other hand, I am sending requests to four providers instead of one, so I have to trust four providers instead of one. What’s better?


You can set up a recursive resolver to query the DNS root servers directly.

https://www.iana.org/domains/root/servers
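If you are curious what such a resolver does under the hood, here is a rough sketch of iterative resolution with dnspython (assumed installed; error handling, the no-glue case, and TCP fallback are all elided):

    # Sketch: walk down from a root server following referrals.
    import dns.message
    import dns.query
    import dns.rdatatype

    ROOT = "198.41.0.4"  # a.root-servers.net

    def iterate(qname, server=ROOT):
        query = dns.message.make_query(qname, dns.rdatatype.A)
        response = dns.query.udp(query, server, timeout=3)
        if response.answer:          # final answer reached
            return response.answer
        # Follow the referral: use a glue A record from the additional
        # section as the next server to ask.
        for rrset in response.additional:
            if rrset.rdtype == dns.rdatatype.A:
                return iterate(qname, rrset[0].address)
        raise RuntimeError("no glue; a real resolver would resolve the NS name")

    print(iterate("example.com."))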


> What’s better?

Four providers, at the same time, every time.

It is wasteful, but you can raise alarms when responses don't match.
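A sketch of the idea with dnspython (the provider addresses are just examples):

    # Sketch: ask four resolvers the same question and flag disagreement.
    import dns.resolver

    PROVIDERS = ["1.1.1.1", "8.8.8.8", "9.9.9.9", "208.67.222.222"]

    def answers(name, server):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [server]
        return frozenset(rr.address for rr in r.resolve(name, "A"))

    def check(name):
        results = {server: answers(name, server) for server in PROVIDERS}
        if len(set(results.values())) > 1:
            print("ALARM: providers disagree on %s: %s" % (name, results))

    check("example.com.")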


But responses will fail to match for legitimate reasons, such as dynamic configuration at specific domains.


This isn't a concrete software recommendation, but the DNS would lend itself exceptionally well to a distributed p2p resolver that would prevent most queries from hitting centralized servers in the first place - it's just a simple distributed database. You'd need some sort of multiparty trust metric to avoid tampering (or DNSSEC), but modulo that peers could just freely pass around records.

There would be a little more complexity these days with large companies changing answers based on the address of the requester, but that doesn't seem too hard to account for, or even just to ignore. And to reduce traffic even further, you could take some liberties with TTLs for records that don't actually change often.


What about remote address correlation? DNS is transmitted in plaintext. Transit providers could eavesdrop on your traffic and short-circuit your work.


They could sniff but not tamper as long as you are using DNSSEC.


At least you've removed one single point of failure for DNS lookups.


One thing PostgreSQL would likely not be able to adapt to, at least without significant effort, is dropping MVCC in favor of more traditional locking protocols.

While MVCC is fashionable nowadays, and more or less every platform offers it at least as an option, my experience, together with opinions I have heard from people using SQL Server and similar platforms professionally, is that for true OLTP at least, good ol’ locking-based protocols in practice outperform MVCC-based protocols (when transactions are well programmed).

The “inconvenient truth” [0] that maintaining multiple versions of records badly affects performance might in the future make MVCC less appealing. There’s ongoing research, such as [0], to improve things, but it’s not clear to me at this point that MVCC is a winning idea.

[0] https://dl.acm.org/doi/10.1145/3448016.3452783


I sort of want the opposite. Except for extremely high-velocity mutable data, why do we ever drop an old version of any record? I want the whole database to look more like git commits: completely immutable, versionable, every change attributable to a specific commit, connection, client, user.

So much complexity and stress and work at the moment comes from the fear of data loss or corruption. Schema updates, migrations, backups, all the distributed computing stuff where every node has to assume every other node could have mutated the data… And then there are countless applications full of "history" type tables to reinstate audit trails for the mutable data. It's kind of ridiculous when you think about it.

It all made sense when storage was super expensive, but these days all the countermeasures we have to implement to deal with mutable state are far more expensive than just using more disk space.
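To make the idea concrete, here is a toy sketch of the append-only style I have in mind (sqlite3 only to keep the example self-contained; all names are invented). Every write is an INSERT, the current state is just a view over the latest versions, and the full table doubles as the audit trail:

    # Sketch: an append-only table where "updates" are inserts.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE events (
            key    TEXT NOT NULL,
            value  TEXT,
            author TEXT NOT NULL,
            ts     TEXT NOT NULL DEFAULT (datetime('now'))
        );
        CREATE VIEW current AS
            SELECT key, value FROM events e
            WHERE rowid = (SELECT max(rowid) FROM events WHERE key = e.key);
    """)
    db.execute("INSERT INTO events (key, value, author) VALUES (?, ?, ?)",
               ("color", "red", "alice"))
    db.execute("INSERT INTO events (key, value, author) VALUES (?, ?, ?)",
               ("color", "blue", "bob"))   # an "update" is just another insert
    print(db.execute("SELECT * FROM current").fetchall())  # [('color', 'blue')]
    print(db.execute("SELECT * FROM events").fetchall())   # full history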


If the old versions of records stay where they are, they will start to dominate heap pages and lead to a kind of heap fragmentation. If the records are still indexed, then they will create an enormous index bloat. Both of these will make caches less effective and either require more RAM or IOPS, both of which are scarce in a relational db.

You probably need a drastically different strategy, like moving old records to separate cold storage instead (assuming you might occasionally want to query them; otherwise you can just retain your WAL files forever).


Absolutely… it needs to be designed from the ground up, which is why I think it fits the question of something a modern database could do that Postgres would struggle with.


> moving old records to separate cold storage

FWIW this is available in ClickHouse (which is an analytics database, though)

https://clickhouse.tech/docs/en/engines/table-engines/merget...


You should check out Dolt, which does exactly what you're describing and is a drop-in MySQL replacement:

https://github.com/dolthub/dolt


The downside of "good ol' locking" is that you can end up with more fights and possibly deadlocks over who gets access and who has to wait.

With Postgres/Oracle MVCC model, readers don't block writers and writers don't block readers.

It's true that an awareness of the data concurrency model, whatever it is, is essential for developers to be able to write transactions that work as they intended.


It’s not that conflicts magically disappear if you use MVCC. In some cases, PostgreSQL has to roll back transactions that a 2PL-based system would schedule just fine. Often, those failures are reported as “serialization errors”, but the practical result is the same as if a deadlock had occurred. And Postgres deadlocks as well.


This is already being developed. zheap [1] is based on the undo/redo log system that databases such as Oracle use, and will be one of the options once Postgres supports pluggable storage backends.

[1] https://www.cybertec-postgresql.com/en/postgresql-zheap-curr...


I don't know. Lock-based concurrency control has problems scaling up concurrent access among readers and writers while preserving read consistency. Of course, Oracle with MVCC still beats everybody performance-wise.


Does "more traditional locking" mean that the clients request locks? Isn't it already offered by PG with table-level and row-level locks, and also in an extremely flexible way (letting the client developer define the semantics) by pg_advisory_lock?

https://www.postgresql.org/docs/current/explicit-locking.htm...
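For reference, a minimal example of the advisory-lock style via psycopg2 (the DSN is a placeholder, and the meaning of key 42 is entirely up to the application):

    # Sketch: an application-defined critical section using advisory locks.
    import psycopg2

    conn = psycopg2.connect("dbname=test")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT pg_advisory_lock(%s)", (42,))  # blocks until acquired
        # ... do whatever key 42 is agreed to protect ...
        cur.execute("SELECT pg_advisory_unlock(%s)", (42,))
    conn.close()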


Having time-travel capabilities (e.g., CockroachDB) is a really useful side effect of MVCC, though. Postgres once had this capability. The need for garbage collection/vacuuming is a downside.

I think it all depends on the pattern of reads and writes, and on the types of writes. MySQL's InnoDB is often faster than Postgres, but under some usage patterns it suffers from significant lock contention. (I have found it gets worse as you add more indexes.)


Personally I wish Postgres would add support for optimistic concurrency control (for both row-level and "predicate" locks), which can be a big win for workloads with high throughput and low contention.
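In the meantime, the usual application-level approximation is a compare-and-swap on a version column, which pays off exactly in the high-throughput, low-contention case (a sketch with psycopg2; the table and columns are hypothetical):

    # Sketch: optimistic "update if nobody else wrote in between", with retry.
    import psycopg2

    def update_balance(conn, account_id, delta):
        while True:
            with conn, conn.cursor() as cur:
                cur.execute("SELECT balance, version FROM accounts WHERE id = %s",
                            (account_id,))
                balance, version = cur.fetchone()
                cur.execute(
                    """UPDATE accounts SET balance = %s, version = version + 1
                       WHERE id = %s AND version = %s""",
                    (balance + delta, account_id, version))
                if cur.rowcount == 1:   # our snapshot was still current
                    return
            # rowcount == 0: someone bumped the version; re-read and retry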


> good ol’ locking-based protocols in practice outperform MVCC-based protocols

Doesn't make sense to me. Oracle supports MVCC. SQL Server doesn't scale as well as Oracle.


There is absolutely no reason at all that using a Model View Controller architecture should say anything about your persistence layer. Model View Controller is an abstraction for managing code that generates things that are put on a screen. It says that you should have code that represents underlying data separate from code that represents different ways to show the data to the user and also separate from code that represents different ways for the user to specify how they would like to view or modify the data.

The Model portion of MVC should entirely encapsulate whether you are using a relational database or a fucking abacus to store your state. Obviously, serializing and deserializing to an abacus will negatively impact user experience, but theoretically it may be a more reliable data store than something named after a Czech writer famous for his portrayal of Hofbureaucralypse Now.


What's being discussed is MVCC, or multiversion concurrency control, not MVC:

https://en.wikipedia.org/wiki/Multiversion_concurrency_contr...

