Right now, we extract archives to disk and then just "git add" the results. If we used "git fast-import" instead, we could write the uncompressed archive contents to a pipe and let Git pack them directly into a repository ("git fast-import" streams carry the raw file data along with paths and mode bits, so no working tree is needed). In some basic testing, "git fast-import" beat "git add && git commit" even on a ramdisk: roughly 6 seconds versus almost 20. When the files aren't on a ramdisk, or are too large to all fit in the disk cache, the improvement would probably be even bigger.
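To make the idea concrete, here is a minimal sketch of the pipe-based path in Python. The function name, committer identity, and branch name are placeholders I've chosen for illustration; a real implementation would stream entries out of the archive reader rather than take a dict of files in memory:

```python
import subprocess

def fast_import_commit(repo: str, files: dict[str, bytes], message: bytes) -> None:
    """Pipe file contents straight into `git fast-import`, bypassing the
    working tree and index. Assumes `repo` is an already-initialized
    Git repository with no existing `main` branch (the commit below has
    no parent)."""
    proc = subprocess.Popen(
        ["git", "-C", repo, "fast-import", "--quiet"],
        stdin=subprocess.PIPE,
    )
    w = proc.stdin.write

    # Emit one blob per file; each gets a mark (:1, :2, ...) that the
    # commit can reference later instead of a SHA-1.
    marks = {}
    for i, (path, content) in enumerate(files.items(), start=1):
        marks[path] = i
        w(b"blob\nmark :%d\ndata %d\n" % (i, len(content)))
        w(content + b"\n")

    # A single root commit pointing every path at its blob mark.
    w(b"commit refs/heads/main\n")
    w(b"committer Importer <importer@example.invalid> 0 +0000\n")
    w(b"data %d\n" % len(message))
    w(message + b"\n")
    for path, mark in sorted(marks.items()):
        w(b"M 100644 :%d %s\n" % (mark, path.encode()))
    w(b"\n")

    proc.stdin.close()
    if proc.wait() != 0:
        raise RuntimeError("git fast-import failed")
```

Usage would look something like:

```python
subprocess.run(["git", "init", "/tmp/demo"], check=True)
fast_import_commit("/tmp/demo", {"hello.txt": b"hello\n"}, b"import archive contents")
```

Because the stream carries lengths up front ("data <n>"), nothing ever needs to touch the filesystem between the archive reader and the pack file, which is where the speedup over "git add && git commit" comes from.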