| ChunkSize | 1 MiB | The size of a single chunk of a file that is downloaded from remote and cached locally. |
| PrefetchWorkers | 50 | The total number of workers available for downloading file chunks. |
##### File System Layout
Below is an example of what the file cache looks like. Here, five files are cached (the folder name of each is its digest,
shortened in the example below), and for each file, some chunks have been downloaded. For example, for the file
095e6bc048, four chunks are available in the cache. The name of each chunk corresponds to an offset in the file: chunk 0
is the portion of 095e6bc048 starting at offset 0 of size ChunkSize, chunk 1048576 is the portion of 095e6bc048 starting
at offset 1048576 of size ChunkSize, and so on.

![file-system-layout]
#### Containerd Content Store Subscriber
This component is responsible for discovering layers in the local containerd content store and advertising them to the
p2p network using the p2p router component, enabling p2p distribution for regular image pulls.

#### P2P Proxy Server
The p2p proxy server (a.k.a. p2p mirror) serves the node’s content from the file cache or containerd content store.
There are two scenarios for accessing the proxy:

1. Overlaybd TCMU driver: this is the Teleport scenario.
The driver makes requests like the following to the p2p proxy.
```bash
GET http://localhost:5000/blobs/https://westus2.data.mcr.microsoft.com/01031d61e1024861afee5d512651eb9f36fskt2ei//docker/registry/v2/blobs/sha256/1b/1b930d010525941c1d56ec53b97bd057a67ae1865eebf042686d2a2d18271ced/data?se=20230920T01%3A14%3A49Z&sig=m4Cr%2BYTZHZQlN5LznY7nrTQ4LCIx2OqnDDM3Dpedbhs%3D&sp=r&spr=https&sr=b&sv=2018-03-28&regid=01031d61e1024861afee5d512651eb9f

Range: bytes=456-990
```

Here, the p2p proxy is listening at `localhost:5000`, and it is passed the full SAS URL of the layer. The SAS URL was
previously obtained by the driver from the ACR. The proxy will first attempt to locate this content in the p2p network
using the router. If a peer is found, the request is reverse proxied to that peer. Otherwise, after the configured
resolution timeout, the request is proxied to the upstream storage account.

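This resolve-then-fallback flow can be sketched as follows; the router signature, function names, and timeout value are illustrative assumptions, not the actual implementation:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// routerFunc models the p2p router lookup: given a content key, return the
// address of a peer that has it. The signature is a sketch, not the real API.
type routerFunc func(ctx context.Context, key string) (string, error)

// serveBlob decides where a request is forwarded: a peer resolved within the
// timeout, or the upstream storage account otherwise. A real router would
// honor ctx cancellation; this sketch's router returns immediately.
func serveBlob(router routerFunc, key, upstream string, timeout time.Duration) string {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	if peer, err := router(ctx, key); err == nil {
		return peer // reverse proxy the request to the peer
	}
	return upstream // fall back to the upstream SAS URL
}

func main() {
	// A toy router that can resolve a single digest.
	router := func(ctx context.Context, key string) (string, error) {
		if key == "sha256:1b930d01" {
			return "http://peer-a:5000", nil
		}
		return "", errors.New("not resolved")
	}
	fmt.Println(serveBlob(router, "sha256:1b930d01", "https://upstream.example", 20*time.Millisecond))
	fmt.Println(serveBlob(router, "sha256:other", "https://upstream.example", 20*time.Millisecond))
}
```
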
2. Containerd Hosts: this is the non-Teleport scenario.
Here, containerd is configured to use the p2p mirror via its hosts configuration. The p2p mirror receives registry
requests to the /v2 API, following the OCI distribution API spec, and supports GET and HEAD requests to `/v2/`
routes. When a request is received, the digest is first looked up in the p2p network, and if a peer has the layer, that
peer is used to serve the request. Otherwise, the mirror returns a 404, and the containerd client falls back to the ACR
directly (or to the next configured mirror).

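As a sketch, the containerd hosts configuration for this scenario might look like the following; the registry name and the mirror's port are assumptions for illustration:

```toml
# /etc/containerd/certs.d/example.azurecr.io/hosts.toml
server = "https://example.azurecr.io"

# Try the local p2p mirror first; on a 404, containerd falls back to the server.
[host."http://localhost:5000"]
  capabilities = ["pull", "resolve"]
```
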
### Performance
The following numbers were gathered from a 3-node AKS cluster.
#### Peer Discovery
In broadcast mode, any locally available content is broadcast to the k closest peers of the content ID. As seen below,
performance improves significantly, with the tradeoff that network traffic also increases.

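The "k closest" selection is Kademlia-style XOR distance. A minimal sketch, using single-byte IDs for brevity (real IDs are content digests):

```go
package main

import (
	"fmt"
	"sort"
)

// kClosest returns the k peers whose IDs are closest to the content ID by
// XOR distance, as in Kademlia. Illustrative only; not the actual router code.
func kClosest(contentID byte, peers []byte, k int) []byte {
	sorted := append([]byte(nil), peers...)
	sort.Slice(sorted, func(i, j int) bool {
		return sorted[i]^contentID < sorted[j]^contentID
	})
	if k > len(sorted) {
		k = len(sorted)
	}
	return sorted[:k]
}

func main() {
	peers := []byte{0x10, 0x12, 0x7f, 0x13}
	// Broadcast targets for content 0x11: the two XOR-closest peers.
	fmt.Printf("%x\n", kClosest(0x11, peers, 2)) // prints "1013"
}
```
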
**Broadcast off**
| Operation | Samples | Min (s) | Mean (s) | Max (s) | Std. Deviation (s) |