Memory usage spikes during VACUUM on iOS - How to limit max memory?

Hello!

I am using SQLCipher 4.4.0 on iOS together with IOCipher/libsqlfs.

The database is roughly 1GB. After moving a bunch of data around (moving files in libsqlfs, which translates to updating the paths in a table), calling VACUUM takes 2+ minutes on an iPhone 7 to complete.

My biggest issue, though, is memory usage: during VACUUM it spikes to over 1GB. As this is a resource-constrained device, that is far from ideal, especially since I sometimes need to perform this operation while the app is not in the foreground.

Is there a way to limit the maximum amount of memory SQLite/SQLCipher is allowed to use at any given time?

Thanks!

Edit: I’ve experimented with PRAGMA hard_heap_limit and PRAGMA soft_heap_limit. The soft limit is ignored (as expected, as it’s only advisory), while the hard limit kills the VACUUM operation when the limit is reached.
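
For reference, I set them along these lines (the 256MB value is only an example):

PRAGMA soft_heap_limit = 268435456; -- advisory only; SQLite may exceed it under memory pressure
PRAGMA hard_heap_limit = 268435456; -- enforced; allocations beyond this fail with SQLITE_NOMEM, which aborts the VACUUM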

Hello @alin - SQLCipher is built to use in-memory temporary storage by default (-DSQLITE_TEMP_STORE=2). This reduces the risk of unencrypted data being temporarily written to permanent storage (even if it is transient and would be deleted immediately afterwards), but at the cost of higher memory utilization. If you are in a memory-constrained state, try using the following PRAGMA prior to running VACUUM in order to set the temp store to FILE:

PRAGMA temp_store = 1;
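
For example, wrapped around the VACUUM (and assuming you want to restore the in-memory default afterwards), the full sequence would be:

PRAGMA temp_store = 1; -- FILE: temp tables and the transient database used by VACUUM go to disk
VACUUM;                -- compacts the database using file-based temporary storage
PRAGMA temp_store = 2; -- MEMORY: restore SQLCipher's default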

Give that a try and let us know if it helps.

Hello @sjlombardo! Thank you for the reply.

I’ve actually been wary of doing that because I am not sure whether the temporary file will be encrypted or not. The data that I am storing should never touch the disk in an unencrypted state.

Do you know whether the temporary file is encrypted as well?

Thanks!

Hello @alin, unfortunately temporary files are not guaranteed to be encrypted, which is why memory-based temporary storage is enabled by default instead of file-based. One other alternative might be to use sqlcipher_export() to an attached database (see SQLCipher API - Zetetic). If you attach a new encrypted database and export to it, the result will be optimally compacted. You could try that approach to see whether it uses less memory for your use case.
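
A rough sketch of that flow, with the file name, alias, and key below as placeholders:

ATTACH DATABASE 'compacted.db' AS compacted KEY 'your-passphrase'; -- creates a new encrypted database
SELECT sqlcipher_export('compacted');                              -- copies the schema and data from the main database
DETACH DATABASE compacted;

Once it completes, you would close the original database and swap the compacted file in.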

Thank you for the idea!

For the moment I opted to just copy the data over to a new database through a buffered API, so that I stay in control of the resources.

It’s not great for performance, but at least it gives me the opportunity to skip data that I don’t need and to provide progress feedback during the process.
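
For anyone doing something similar purely at the SQL level, a chunked copy into an attached database also keeps memory bounded. A minimal sketch, assuming a hypothetical files table and that the destination schema already exists:

ATTACH DATABASE 'copy.db' AS copy KEY 'your-passphrase';
-- repeat with an increasing :last_rowid until no more rows are copied
INSERT INTO copy.files SELECT * FROM files WHERE rowid > :last_rowid ORDER BY rowid LIMIT 1000;
DETACH DATABASE copy;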

Another option is to use VACUUM INTO 'newfile.db', which creates a new, compacted database that apparently uses the same encryption parameters as the original. After the operation completes, you can replace the old database file with the new one (perhaps simply with mv newfile.db oldfile.db).

The main downside is that, unlike a regular VACUUM, this isn’t an atomic operation, but if you have some external synchronization to ensure no other process uses the database while you swap the files, this shouldn’t be a problem.
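
For completeness, the statement itself is just the following (the path is an example, and all connections to the original database should be closed before swapping the files):

VACUUM INTO 'newfile.db'; -- writes the compacted copy directly to the new file, leaving the original untouched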