fix: link to current unixfs importer (#1795)
The link in the text currently points to the new unixfs implementation, but the surrounding text is referring to the old (current) one.
Alan Shaw authored Apr 11, 2022
1 parent 8c0e5e6 commit 43b0f5f
Showing 1 changed file with 1 addition and 1 deletion.
packages/website/posts/2022-04-07-q2.mdx: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ Uploads in NFT.Storage are designed to be trustless, with the Javascript client

We are currently building Uploads V2, an improved backend for this process, along with improved session handling, which will allow faster, more reliable uploads of more and larger files from even more constrained clients like web browsers and mobile apps.

-The Upload v1 flow comes with its share of usability constraints. For instance, client-side CAR generation and splitting with the current version of [js-unixfs](https://github.com/ipld/js-unixfs), the [IPLD](https://ipld.io/) implementation that Javascript IPFS utilizes, requires that CAR conversion put the whole file or directory into memory, which can constrain the user from uploading larger files (especially folks like artists trying to upload from their local computer, or even the website, which is subject to even further memory constraints). Further, in Uploads V1, NFT.Storage only knows whether it received all the partial chunks for a full CAR file once it attempts to pin the root CID to its IPFS Cluster. This is done asynchronously, but with a Cluster handling 50M+ uploads, sometimes it takes a while for the Cluster to know whether an upload was fully successful, and even then NFT.Storage has to know to update the status in its database (why you might see "Queuing" for an upload's status even if it was successfully uploaded). This creates opaqueness that is not ideal for the user.
+The Upload v1 flow comes with its share of usability constraints. For instance, client-side CAR generation and splitting with the current version of [js-unixfs](https://github.com/ipfs/js-ipfs-unixfs), the [IPLD](https://ipld.io/) implementation that Javascript IPFS utilizes, requires that CAR conversion put the whole file or directory into memory, which can constrain the user from uploading larger files (especially folks like artists trying to upload from their local computer, or even the website, which is subject to even further memory constraints). Further, in Uploads V1, NFT.Storage only knows whether it received all the partial chunks for a full CAR file once it attempts to pin the root CID to its IPFS Cluster. This is done asynchronously, but with a Cluster handling 50M+ uploads, sometimes it takes a while for the Cluster to know whether an upload was fully successful, and even then NFT.Storage has to know to update the status in its database (why you might see "Queuing" for an upload's status even if it was successfully uploaded). This creates opaqueness that is not ideal for the user.

Throughout Q1, we laid the groundwork for an improved Uploads V2 flow, including work on a new, cloud-native implementation of IPFS called IPFS Elastic Provider (which natively "speaks" CAR file rather than in terms of pins), improvements to `js-unixfs` to enable CAR file generation that isn't memory-constrained, and UCANs. We're excited to announce that in Q2, we are stitching together all these parts and replacing the current uploads flow - keeping it trustless but making it much more robust and reliable from the user's standpoint.

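For context, the paragraph touched by this change describes the Uploads V1 client flow: build a CAR locally (holding the whole DAG in memory) and post it to NFT.Storage. Below is a minimal sketch of that flow; it is not part of this commit, and the package paths and names used (`ipfs-car`'s `packToBlob` and `MemoryBlockStore`, `nft.storage`'s `storeCar`, the API token and file) are assumptions based on those libraries as they existed around the time of the post.

```js
// Sketch of a Uploads V1-style client flow: pack a file into a CAR entirely
// in memory, then send that CAR to NFT.Storage.
import { NFTStorage } from 'nft.storage'
import { packToBlob } from 'ipfs-car/pack/blob'
import { MemoryBlockStore } from 'ipfs-car/blockstore/memory'

async function uploadAsCar (file, token) {
  // Read the whole file into memory, then build a UnixFS DAG and serialize it
  // to a CAR Blob. Every block is buffered in the MemoryBlockStore, which is
  // the memory constraint the post describes for large files and directories.
  const bytes = new Uint8Array(await file.arrayBuffer())
  const { root, car } = await packToBlob({
    input: [{ path: file.name, content: bytes }],
    blockstore: new MemoryBlockStore()
  })

  // Upload the CAR. Because the root CID was computed client-side, the caller
  // can compare it with what the service reports, which is what keeps the
  // flow trustless.
  const client = new NFTStorage({ token })
  const cid = await client.storeCar(car)

  console.log('root CID computed locally:', root.toString())
  console.log('CID acknowledged by NFT.Storage:', cid)
  return cid
}
```

Because every block is buffered before the CAR Blob is produced, memory use grows with the size of the input, which is the constraint the post says the Uploads V2 work on `js-unixfs` is meant to remove.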
