So, a couple of corrections for things that people are getting wrong over and over again in this discussion. The reason why data=ordered is the default is not because it makes the file system more likely to be recoverable after a crash. It's the default for security reasons. The data=writeback mount option has the tradeoff that there can be uninitialized blocks attached to files after a crash. Those uninitialized blocks could contain someone else's love letters, porn stash, medical records, etc.
The main issue with "data=ordered" is not that it imposes a global order, but that if you do block allocations, fsync() requires a journal commit, and the security guarantees behind "data=ordered" require that all newly allocated blocks must be written out before the filesystem-wide journal commit can be allowed to complete. (Ext4 doesn't have this problem because it uses delayed allocation.)
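To make this concrete, here is a minimal C sketch (the file name is made up) of the pattern that hurts under ext3 data=ordered: each append allocates new blocks, so the fsync() that follows can't return until the filesystem-wide journal commit, including everyone else's freshly allocated data blocks, has completed.

    /* Sketch: append-then-fsync on ext3 data=ordered.
     * The append grows the file, i.e. allocates new blocks, so the
     * fsync() below has to wait for a full journal commit. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("growing.db", O_WRONLY | O_CREAT | O_APPEND, 0600);
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096];
        memset(buf, 'x', sizeof(buf));

        if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {   /* new blocks */
            perror("write"); return 1;
        }
        if (fsync(fd) < 0) {          /* drags the journal commit along */
            perror("fsync"); return 1;
        }
        close(fd);
        return 0;
    }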
This isn't an issue with other databases such as Oracle, DB2, MySQL, etc. They have no problems running on ext3. This is partly because they don't rely on fsync() at all. Oracle and DB2 use O_DIRECT writes to blocks that are allocated once (new blocks are only allocated when the table space file needs to be grown), and MySQL uses fdatasync() on files that aren't constantly being created and destroyed. SQLite, because it is trying to pretend that it only needs one file, uses many, MANY temporary files which are constantly being created and destroyed. This is not efficient, and is why SQLite has all of these problems, yet millions and millions of dollars worth of enterprise servers run Oracle on ext3 on RHEL3, RHEL4, and RHEL5 without hitting these issues.
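For contrast, here is a rough C sketch (file name and sizes are made up) of that allocate-once strategy: the table space file is preallocated up front, and the steady-state writes are O_DIRECT overwrites of blocks that already exist, synced with fdatasync() rather than fsync(), so no block allocation ever happens on the hot path.

    /* Sketch: preallocate once, then overwrite in place with O_DIRECT. */
    #define _GNU_SOURCE            /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BLOCK 4096

    int main(void)
    {
        /* One-time setup: create the table space at its full size. */
        int fd = open("tablespace.dat", O_RDWR | O_CREAT, 0600);
        if (fd < 0) { perror("open"); return 1; }
        int err = posix_fallocate(fd, 0, 64 * 1024 * 1024);
        if (err) { fprintf(stderr, "fallocate: %s\n", strerror(err)); return 1; }
        close(fd);

        /* Steady state: rewrite already-allocated blocks. */
        fd = open("tablespace.dat", O_RDWR | O_DIRECT);
        if (fd < 0) { perror("open O_DIRECT"); return 1; }

        void *buf;
        if (posix_memalign(&buf, BLOCK, BLOCK))   /* O_DIRECT needs alignment */
            return 1;
        memset(buf, 'x', BLOCK);

        if (pwrite(fd, buf, BLOCK, 8 * (off_t)BLOCK) != BLOCK) {
            perror("pwrite"); return 1;
        }
        /* fdatasync() skips the metadata flush; the file size never changes. */
        if (fdatasync(fd) < 0) { perror("fdatasync"); return 1; }

        free(buf);
        close(fd);
        return 0;
    }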
The sad thing is that most people like to use SQLite because it doesn't require manual schema generation, not because it only uses one file. (In fact it doesn't use only one file; there are multiple temporary files, some of which must be there in case of an unclean shutdown. So if you copy just the single file after a program crash, you'll screw up the database.) If someone created a lightweight database that used SQLite's interfaces, but kept all of the database's files in a single directory, and instead of constantly copying data back and forth between temporary files that are forever being created and destroyed, used the storage strategies of the more sophisticated database systems, the result would work well on ext3. On all file systems it would issue fewer data writes, which would save battery life, SSD write endurance, and many other things. The last time I measured it, Firefox's "awesome bar" was consuming a third of a megabyte of write bandwidth to the disk per click, while updating at most a few hundred bytes of data. The rest was all overhead due to the catastrophic inefficiencies of SQLite.
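Purely as a hypothetical sketch of what that could look like in C (all of the names, mydb.dir, data, wal, and the sizes are made up): everything lives under one directory, the files are created and sized once, and each transaction reuses the same log file with fdatasync() instead of creating and unlinking a fresh temporary journal.

    /* Hypothetical layout: one directory, long-lived preallocated files. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int open_preallocated(const char *path, off_t size)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0) return -1;
        int err = posix_fallocate(fd, 0, size);   /* allocate blocks once */
        if (err) {
            fprintf(stderr, "fallocate: %s\n", strerror(err));
            close(fd);
            return -1;
        }
        return fd;
    }

    int main(void)
    {
        mkdir("mydb.dir", 0700);                  /* everything lives here */

        int data = open_preallocated("mydb.dir/data", 16 * 1024 * 1024);
        int wal  = open_preallocated("mydb.dir/wal",   4 * 1024 * 1024);
        if (data < 0 || wal < 0) return 1;

        /* One "transaction": log the change to the reused journal, sync it,
         * then update the data file in place.  No temp files, no
         * create/unlink churn, no block allocation on the hot path. */
        const char rec[] = "update row 42";
        if (pwrite(wal, rec, sizeof(rec), 0) < 0)  { perror("pwrite wal");  return 1; }
        if (fdatasync(wal) < 0)                    { perror("fdatasync");   return 1; }
        if (pwrite(data, rec, sizeof(rec), 0) < 0) { perror("pwrite data"); return 1; }
        if (fdatasync(data) < 0)                   { perror("fdatasync");   return 1; }

        close(data);
        close(wal);
        return 0;
    }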
P.S. And if you do implement this, please consider releasing it under the Apache license, so I can hopefully convince the Android team to drop SQLite in favor of something written with a bit more of an eye towards performance, and Android users all over the world will thank you. :-)