Locking and sqlcipher_export() with a busy database

Hi,

I am using sqlcipher_export() to perform backups of a SQLCipher database.

I understand that SQLCipher has disabled the use of the SQLite online backup API.

My situation is that my database is very busy: it receives very frequent writes (one or more per second, with few pauses, and I can’t rely on those pauses being predictable).

I need to design my backup to be a consistent ‘snapshot’ of the database. Obviously copying the raw file is not going to work, as the file might get written to mid-copy, so I’m very thankful for sqlcipher_export() here!

I see that sqlcipher_export() essentially does an INSERT INTO {newdb}.{table} SELECT * FROM {origdb}.{table}, iterating over all the tables (I know it does more than this, but I’m simplifying for the sake of discussion).
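For reference, here is roughly how I am invoking it (a simplified sketch; the backup file name and passphrase are placeholders for my real values):

```sql
-- Attach an empty encrypted target database, copy everything into it, then detach.
-- 'backup.db' and 'backup-passphrase' are placeholders.
ATTACH DATABASE 'backup.db' AS backup KEY 'backup-passphrase';
SELECT sqlcipher_export('backup');
DETACH DATABASE backup;
```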

My question(s):

  1. if I run sqlcipher_export(), will it ‘lock’ the database in some way that prevents writes from coming in until the export completes?

  2. or will it allow writes through, while still capturing a ‘snapshot’ of how the tables/rows looked prior to those writes? (I don’t care that it might not ‘catch’ new data arriving in real time; I just want to know that my exported database is a consistent snapshot, without any weird half-and-half data or corruption. In other words, even if it’s slightly out of date, that’s OK, so long as it’s a consistent ‘snapshot in time’.) This is my hope and preference.

Essentially what I am looking for is the equivalent of mysqldump’s --single-transaction on an InnoDB database, where it doesn’t lock all the tables or stall pending writes; instead it takes a ‘snapshot in time’ and backs that up, while writes continue to get through.

I have been reading File Locking And Concurrency In SQLite Version 3, but I’m not really sure whether I need to wrap my SELECT sqlcipher_export() in a BEGIN transaction or not.
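To be clear, by ‘wrapping it’ I mean something like the following hypothetical sketch (I have not tested this, and I don’t know whether it is needed or whether it changes the locking behaviour at all):

```sql
-- Hypothetical: run the export inside an explicit transaction on the source connection.
-- Unclear (to me) whether this is required to get a consistent snapshot.
BEGIN;
SELECT sqlcipher_export('backup');
COMMIT;
```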

Thanks for any help!

Hi @mig5,

Thanks for reaching out with regard to SQLCipher. In SQLCipher 4.3.0 we adjusted the backup API to support encrypted-to-encrypted backups; additionally, you can perform plaintext-to-plaintext backups, you just cannot mix the two via the backup API. The latest version of SQLCipher is currently 4.5.3. Would you give that a try for your scenario and let us know your results?

@developernotes thanks for your quick response. I think you’ve misunderstood my question, though.

I am using v4.5.3 and I am doing an encrypted-to-encrypted backup via sqlcipher_export(), and it’s already working great in that sense.

My question is: while sqlcipher_export() is running, what happens if write requests (INSERTs, UPDATEs, or DELETEs) come in to the database? I have not yet tested that scenario, but it is what I will face in production with a very busy database.

Are those writes forced to wait because a lock has been acquired for the export to take place? Or are they able to complete, with the export capturing whatever it saw at the moment it started? (Do I need to wrap the export in a transaction to get that behaviour? And does wrapping it in a transaction itself cause a lock that blocks writes?)

Hello @mig5 - Thanks for clarifying. The behavior depends on what journal mode is in use. When using the default rollback-journal mode, writers on the main database will be blocked while the export is occurring. However, if you switch to write-ahead logging with PRAGMA journal_mode=WAL; then the main database remains writeable while the reader is operating. We would normally recommend WAL for applications that have significant concurrent access requirements; you will likely see much better performance. Either way, it is also important for an application to set a reasonable busy timeout, e.g. PRAGMA busy_timeout=5000; to handle locking scenarios more gracefully.
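For illustration, a minimal sketch of the settings mentioned above, run on each connection to the main database before the writes and the export begin (journal_mode persists in the database file, busy_timeout is per connection, and the timeout value is just an example):

```sql
-- Switch the main database to write-ahead logging (persists in the database file;
-- the pragma returns 'wal' when the switch succeeds).
PRAGMA journal_mode=WAL;

-- Wait up to 5000 ms for a lock instead of failing immediately with SQLITE_BUSY.
PRAGMA busy_timeout=5000;
```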