A customer reported a partial export_status.json being written to an external
drive. Force a flush to reduce the chances of this happening. Since this
particular code path is only used for writing JSON files (export status and
metadata), we enable the flush unconditionally for all writes.
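For reference, a minimal sketch of the write-then-flush sequence. The function
name and call shape here are illustrative, not the actual code path:

```ts
import { open } from "node:fs/promises";

// Write the JSON file and flush it to disk before reporting success.
// `writeJSONWithFlush` is an illustrative name, not the real function.
const writeJSONWithFlush = async (filePath: string, data: unknown) => {
    const handle = await open(filePath, "w");
    try {
        await handle.writeFile(JSON.stringify(data));
        // Ask the OS to flush its buffers to the (possibly external) drive.
        await handle.sync();
    } finally {
        await handle.close();
    }
};
```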
Two reasons:
- Electron 30 has reached end of support
- The prev-to-prev commit (3511fcf723) didn't fix all OOMs, and they still
  sporadically occur. But there isn't any aberrant memory consumption I can
  spot (see the previous commit for some example instrumentation; the app's
  memory usage doesn't exceed a few hundred MBs at any point), so this is to
  rule out an upstream issue.
## Description
- Quantized the CLIP text encoder (see the sketch after this list)
- Moved preprocessing and postprocessing of face detection inside the
model
- Further optimised the ONNX models wherever possible
- Created a place in infra for ML version control of sorts
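As a rough illustration of how the quantized text encoder gets used, here is a
minimal sketch assuming onnxruntime-node; the model path and the input/output
tensor names are placeholders, not the actual ones:

```ts
import * as ort from "onnxruntime-node";

// Run the quantized CLIP text encoder on a pre-tokenized prompt. The model
// filename, feed name "input", and output name "output" are assumptions.
const clipTextEmbedding = async (tokens: Int32Array): Promise<Float32Array> => {
    const session = await ort.InferenceSession.create(
        "clip-text-encoder-quantized.onnx",
    );
    const input = new ort.Tensor("int32", tokens, [1, tokens.length]);
    const outputs = await session.run({ input });
    return outputs["output"].data as Float32Array;
};
```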
## Tests
I have tested the changes on mobile, but not on desktop. Please review the
desktop changes carefully, especially the face detection post-processing, and
more specifically the image (re-)size correction.
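To make the review point concrete, this is the kind of correction meant here: a
sketch that maps detection boxes from the model's input resolution back to the
original image size. The 640x640 input size and the box shape are assumptions;
the actual code being reviewed may differ.

```ts
// A detected face box in pixel coordinates.
interface FaceBox {
    x: number;
    y: number;
    width: number;
    height: number;
}

// Scale a box from the model's square input resolution back to the
// dimensions of the original image.
const correctBoxForResize = (
    box: FaceBox,
    imageWidth: number,
    imageHeight: number,
    modelInputSize = 640,
): FaceBox => ({
    x: box.x * (imageWidth / modelInputSize),
    y: box.y * (imageHeight / modelInputSize),
    width: box.width * (imageWidth / modelInputSize),
    height: box.height * (imageHeight / modelInputSize),
});
```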
I'm not sure what the issue in the existing code was, but I happened to chance
upon a setup that reproduced the flakiness some customers have reported
(reading the zips sometimes fails). There wasn't anything specific about the
setup - I was reading a 50 MB zip file that I'd read multiple times before,
except this time it invariably resulted in failures during the read.
Replacing the node stream to web stream conversion with this new approach fixes
the flakiness, at least in the reproducible scenario I was encountering.
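As a sketch of the kind of conversion involved, assuming the new approach leans
on Node's built-in helper rather than pumping chunks across manually (the
zip-reading specifics are elided, and `webStreamForFile` is an illustrative
name):

```ts
import { createReadStream } from "node:fs";
import { Readable } from "node:stream";

// Convert a Node.js readable into a web ReadableStream using the built-in
// helper (available in Node.js >= 17).
const webStreamForFile = (filePath: string): ReadableStream =>
    Readable.toWeb(createReadStream(filePath)) as ReadableStream;
```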
The previous approach worked, but we ran into some other issues:
    Uncaught Exception:
    Error: Cannot find module 'ajv/dist/compile/codegen'
    Require stack:
    - /Applications/ente.app/Contents/Resources/app.asar/node_modules/ajv-formats/dist/limit.js
As an alternative, try to use the yarn equivalent(-ish).