A recent PR seems to have added "Render.svg". I'm not sure of the intent, but
this seems to have been meant as a replacement for the existing "render.svg"
(note the different case). Because of how the macOS APFS filesystem
(case-insensitive by default) and git interact, main now shows as dirty on a
git checkout.
Based on a visual comparison, and assuming the most recent PR intentionally
meant to update this file, I've retained "Render.svg".
## Description
Note:
This API won't return status/diff entries for deleted files. Clients will
primarily use this data to identify which files already have a preview
generated or their ML inference completed.
This doesn't simulate perfect diff behaviour, since we won't maintain
tombstone entries for deleted files.
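As a hypothetical illustration of the client-side implication (the type and
function names below are made up, not the real API): since no tombstones are
returned, a file missing from the response should be treated as "not yet
processed", never as deleted.

```typescript
// Hypothetical shape of a per-file status entry (illustrative only).
type FileStatus = { previewDone: boolean; mlDone: boolean };

// Return the IDs of local files that still need processing. A file absent
// from the response map has simply not been processed yet; its absence must
// NOT be interpreted as deletion (the API keeps no tombstones).
const pendingFiles = (
    localFileIDs: number[],
    statusByID: Map<number, FileStatus>,
): number[] =>
    localFileIDs.filter((id) => {
        const s = statusByID.get(id);
        return !s || !s.previewDone || !s.mlDone;
    });
```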
## Tests
It is hard for me to be certain, but I feel this should resolve the
sporadic OOMs that have been reported when uploading large libraries.
- https://github.com/ente-io/ente/issues/2500
- https://github.com/ente-io/ente/discussions/3420
There are two fixes here:
1. The first is an inefficient array concat in our code. This was not
incorrect per se, but it led to an allocation pattern that caused
V8's GC to crash the renderer with OOMs.
2. But even after the first fix, I was able to sometimes reproduce OOMs.
I added a lot of instrumentation (I've cherry-committed some of it to
git history for future reference when debugging similar issues), but I
couldn't spot any abnormal allocation patterns during uploads. Out of
ideas, I started imagining it was a Chromium issue, and on a whim, I
updated Electron 30 => 33 (something I needed to do anyway, as part of
regular app dependency updates). That apparently has resolved the
remaining OOMs.
With these changes, I've not been able to reproduce a crash even after
bumping up the parallel upload count from 4 to 12. I've let the parallel
upload count be at the existing 4 for now, but if indeed we stop getting
field reports of OOM crashes after this is released, we can increase
that too in the future.
Two reasons:
- Electron 30 is end of support.
- The prev-to-prev commit (3511fcf723) didn't fix all OOMs, and they still
  sporadically occur. But there isn't any aberrant memory consumption I can
  spot (see prev commit for some example instrumentation; the app's memory
  usage doesn't exceed a few hundred MBs at any point). So this is to rule
  out an upstream issue.
Should reduce the following occurrences (this should make it better, but
there might be other reasons for the OOM too):
- https://github.com/ente-io/ente/issues/2500
- https://github.com/ente-io/ente/discussions/3420
---
Here, the issue is that the combineChunksToFormUploadPart function, while not
incorrect, is terribly inefficient in how it combines Uint8Arrays byte by
byte. This apparently causes an allocation pattern that the garbage collector
(Oilpan, V8's C++ heap GC) doesn't like, and the renderer process crashes
with:
```
[main] <--- Last few GCs --->
[main]
[main] [17639:0x13000e90000] 39409 ms: Mark-Compact (reduce) 48.1 (57.8) -> 47.7 (52.8) MB, pooled: 0 MB, 35.08 / 0.04 ms (average mu = 0.857, current mu = 0.906) CppHeap allocation failure; GC in old space requested
[main]
[main]
[main] <--- JS stacktrace --->
[main]
[main] [17639:1025/145540.195043:ERROR:v8_initializer.cc(811)] V8 process OOM (Oilpan: Large allocation.).
```
The effort was primarily spent in getting it to a reproducible-ish state, and I
can now sporadically reproduce this watching a folder full of large videos, and
setting the network conditions in DevTools to 3G. For real users, what probably
happens is, depending on network speed, there is a potential race condition
where 4 multipart uploads may start within the same GC cycle (but I'm guessing
here, since the setup I have for reproducing this is still very sporadic).
Here is a smaller isolated example. This code, when repeatedly invoked in a
setTimeout (independent of any uploads or anything else in the app), causes the
renderer to OOM within a minute.
```typescript
import { wait } from "@/utils/promise";

async function combineChunksToFormUploadPart() {
    const combinedChunks = [];
    for (let i = 0; i < 5 * 5; i++) {
        const { done, value: chunk } = await readDo();
        if (done) {
            break;
        }
        for (let index = 0; index < chunk.length; index++) {
            combinedChunks.push(chunk[index]!);
        }
    }
    return Uint8Array.from(combinedChunks);
}

const readDo = async () => {
    await wait(10);
    const ENCRYPTION_CHUNK_SIZE = 4 * 1024 * 1024;
    return {
        done: false,
        value: Uint8Array.from(
            Array(ENCRYPTION_CHUNK_SIZE).fill(Math.random()),
        ),
    };
};
```
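For contrast, here is a sketch of the kind of combination that avoids the
pathological pattern (this is an illustrative rewrite, not the exact code of
the fix): preallocate a single Uint8Array of the total size and bulk-copy
each chunk with set(), instead of pushing bytes one at a time into a plain
JS array.

```typescript
// Combine chunks by preallocating one buffer and copying with set(),
// avoiding millions of per-byte pushes and the resulting GC pressure.
const combineChunks = (chunks: Uint8Array[]): Uint8Array => {
    // Total byte length across all chunks.
    const totalLength = chunks.reduce((sum, c) => sum + c.length, 0);
    const combined = new Uint8Array(totalLength);
    let offset = 0;
    for (const chunk of chunks) {
        combined.set(chunk, offset); // memcpy-style bulk copy
        offset += chunk.length;
    }
    return combined;
};
```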
---
Some flags which helped in debugging:

```
app.commandLine.appendSwitch("js-flags", "--expose_gc --trace_gc --trace_gc_verbose");
```
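With `--expose_gc` set, V8 exposes a global `gc()` that forces a collection.
A small hypothetical helper (not from the app's code) that guards for its
presence, useful for checking between steps whether memory is actually
reclaimable:

```typescript
// Force a GC if the process was started with --expose_gc; returns whether
// a collection was actually triggered. The cast is needed because gc() is
// not part of the standard global typings.
const maybeForceGC = (): boolean => {
    const g = (globalThis as { gc?: () => void }).gc;
    if (typeof g === "function") {
        g(); // force a full collection
        return true;
    }
    return false;
};
```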
## Description
- Adds an option to not index files locally on mobile
- Uses the global ML flag for consent
## Tests
Tested in debug mode on my Pixel 8.