Blobs are currently stored in the database for the mandatory retention period and then deleted.
Rather than simply deleting them, it would be helpful to be able to write the blobs to the filesystem.
I could imagine keeping this relatively simple: just specify something like `--data-storage-store-old-blobs-path=/data`, and we would then start writing all blobs to the filesystem before they are pruned from the database.
There's a tool already in use, https://github.com/base-org/blob-archiver, which essentially dumps the blobs API output per block; we could take a similar approach for file naming etc. if it's not too involved. Keep in mind there will be 7200 slots per day, so we probably want the output organized into some kind of directory structure.
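To illustrate the directory-structure point, here is a minimal sketch of how archived blobs could be laid out on disk. This is purely hypothetical (the class and method names, the `.json` extension, and the bucket size of 10,000 slots are all assumptions, not anything the project has decided): grouping slots into fixed-size buckets keeps any single directory to roughly a day and a half of blobs at 7200 slots per day.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class BlobArchivePath {

  // Hypothetical layout: bucket slots in groups of 10,000 so that no
  // directory accumulates more than ~1.4 days of blobs (7200 slots/day).
  static Path pathForSlot(final Path baseDir, final long slot) {
    final long bucket = slot / 10_000;
    return baseDir.resolve(Long.toString(bucket)).resolve(slot + ".json");
  }

  public static void main(final String[] args) {
    // e.g. slot 7,654,321 lands in bucket 765
    System.out.println(pathForSlot(Paths.get("/data"), 7_654_321L));
    // → /data/765/7654321.json
  }
}
```

Any similar scheme (by epoch, by date, etc.) would work; the main thing is avoiding one flat directory with millions of files.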