
IPFS Cluster Changelog

v1.0.7 - 2023-10-12

IPFS Cluster v1.0.7 is a maintenance release.

This release updates dependencies and switches to the Boxo library suite with the latest libp2p release.

See the notes below for a list of changes and bug fixes.

List of changes

Breaking changes

There are no breaking changes on this release.

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

A new option cluster.pin_only_on_untrusted_peers has been added, the opposite of the existing pin_only_on_trusted_peers. It defaults to false, and both options cannot be true at the same time. When enabled, only "untrusted" peers are considered for pin allocations.
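
For reference, a minimal sketch of the relevant entries in the cluster section of service.json (all other fields omitted):

  "cluster": {
    ...
    "pin_only_on_trusted_peers": false,
    "pin_only_on_untrusted_peers": false,
    ...
  }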

REST API

A new /health endpoint has been added. It returns 204 (No Content) and no body, and can be used to monitor that the service is running.

Pinning Service API

A new /health endpoint has been added. It returns 204 (No Content) and no body, and can be used to monitor that the service is running.
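
As a quick monitoring check, both endpoints can be probed with curl. This is a sketch assuming the default listen ports (typically 9094 for the REST API and 9097 for the Pinning Service API):

# Expect "HTTP/1.1 204 No Content" and an empty body from both services
curl -i http://127.0.0.1:9094/health
curl -i http://127.0.0.1:9097/health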

IPFS Proxy API

Calling /api/v0/pin/ls on the proxy API now adds a final newline at the end of the response, aligning with what Kubo does.

Go APIs

No relevant changes.

Other

ipfs-cluster-service now sends a notification to systemd when it becomes "ready" (that is, after all initialization is completed). This means systemd service files for ipfs-cluster-service can use Type=notify.
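
For illustration, a minimal unit file sketch taking advantage of this (the binary path and user are placeholders, not part of the release):

[Unit]
Description=IPFS Cluster daemon
After=network-online.target

[Service]
# systemd waits for the readiness notification sent after initialization
Type=notify
ExecStart=/usr/local/bin/ipfs-cluster-service daemon
Restart=on-failure
User=ipfs-cluster

[Install]
WantedBy=multi-user.target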

The official docker images are now built with support for linux/amd64, linux/arm/v7 and linux/arm64/v8 architectures. We have also switched to Alpine Linux as base image (instead of Busybox). Binaries are now built with CGO_ENABLED=0.


v1.0.6 - 2023-03-06

IPFS Cluster v1.0.6 is a maintenance release with some small fixes. The main change in this release is that pebble becomes the default datastore backend, as we mentioned in the last release.

Pebble is the datastore backend used by CockroachDB and is inspired by RocksDB. Upon testing, Pebble has demonstrated good performance and optimal disk usage. Pebble incorporates modern datastore-backend features such as compression, caching and bloom filters. Pebble is actively maintained by the CockroachDB team and therefore seems like the best default choice for IPFS Cluster.

Badger3, a very good alternative, becomes the new default for platforms not supported by Pebble (mainly 32-bit architectures). Badger and LevelDB are still supported, but we strongly discourage their use for new Cluster peers.

List of changes

Breaking changes

There are no breaking changes on this release.

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

The pebble section of the configuration has some additional options and new, adjusted defaults:

  • pebble:
    "pebble": {
      "pebble_options": {
        "cache_size_bytes": 1073741824,
        "bytes_per_sync": 1048576,
        "disable_wal": false,
        "flush_delay_delete_range": 0,
        "flush_delay_range_key": 0,
        "flush_split_bytes": 4194304,
        "format_major_version": 1,
        "l0_compaction_file_threshold": 750,
        "l0_compaction_threshold": 4,
        "l0_stop_writes_threshold": 12,
        "l_base_max_bytes": 134217728,
        "levels": [
          {
            "block_restart_interval": 16,
            "block_size": 4096,
            "block_size_threshold": 90,
            "compression": 2,
            "filter_type": 0,
            "filter_policy": 10,
            "index_block_size": 4096,
            "target_file_size": 4194304
          },
          {
            "block_restart_interval": 16,
            "block_size": 4096,
            "block_size_threshold": 90,
            "compression": 2,
            "filter_type": 0,
            "filter_policy": 10,
            "index_block_size": 4096,
            "target_file_size": 8388608
          },
          {
            "block_restart_interval": 16,
            "block_size": 4096,
            "block_size_threshold": 90,
            "compression": 2,
            "filter_type": 0,
            "filter_policy": 10,
            "index_block_size": 4096,
            "target_file_size": 16777216
          },
          {
            "block_restart_interval": 16,
            "block_size": 4096,
            "block_size_threshold": 90,
            "compression": 2,
            "filter_type": 0,
            "filter_policy": 10,
            "index_block_size": 4096,
            "target_file_size": 33554432
          },
          {
            "block_restart_interval": 16,
            "block_size": 4096,
            "block_size_threshold": 90,
            "compression": 2,
            "filter_type": 0,
            "filter_policy": 10,
            "index_block_size": 4096,
            "target_file_size": 67108864
          },
          {
            "block_restart_interval": 16,
            "block_size": 4096,
            "block_size_threshold": 90,
            "compression": 2,
            "filter_type": 0,
            "filter_policy": 10,
            "index_block_size": 4096,
            "target_file_size": 134217728
          },
          {
            "block_restart_interval": 16,
            "block_size": 4096,
            "block_size_threshold": 90,
            "compression": 2,
            "filter_type": 0,
            "filter_policy": 10,
            "index_block_size": 4096,
            "target_file_size": 268435456
          }
        ],
        "max_open_files": 1000,
        "mem_table_size": 67108864,
        "mem_table_stop_writes_threshold": 20,
        "read_only": false,
        "wal_bytes_per_sync": 0
      }
    }
REST API

No changes.

Pinning Service API

No changes.

IPFS Proxy API

No changes.

Go APIs

No relevant changes.

Other

The --datastore flag to ipfs-cluster-service init now defaults to pebble on most platforms, and to badger3 on those where Pebble is not supported (arm, 386).


v1.0.5 - 2023-01-27

IPFS Cluster v1.0.5 is a maintenance release with one main feature: support for badger3 and pebble datastores.

Additionally, this release fixes compatibility with Kubo v0.18.0 and addresses the crashes related to libp2p autorelay that affected the previous version.

pebble and badger3 are much newer backends than the already available Badger and LevelDB. They are faster, use significantly less disk space and support additional options like compression. We have set pebble as the default datastore used by the official Docker container, and we will likely make it the final default choice for new installations. In the meantime, we encourage the community to try them out and provide feedback.

List of changes

Breaking changes

There are no breaking changes on this release.

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

The datastore section of the configuration now supports the two new datastore backends:

  • badger3:
    "badger3": {
      "gc_discard_ratio": 0.2,
      "gc_interval": "15m0s",
      "gc_sleep": "10s",
      "badger_options": {
        "dir": "",
        "value_dir": "",
        "sync_writes": false,
        "num_versions_to_keep": 1,
        "read_only": false,
        "compression": 0,
        "in_memory": false,
        "metrics_enabled": true,
        "num_goroutines": 8,
        "mem_table_size": 67108864,
        "base_table_size": 2097152,
        "base_level_size": 10485760,
        "level_size_multiplier": 10,
        "table_size_multiplier": 2,
        "max_levels": 7,
        "v_log_percentile": 0,
        "value_threshold": 100,
        "num_memtables": 5,
        "block_size": 4096,
        "bloom_false_positive": 0.01,
        "block_cache_size": 0,
        "index_cache_size": 0,
        "num_level_zero_tables": 5,
        "num_level_zero_tables_stall": 15,
        "value_log_file_size": 1073741823,
        "value_log_max_entries": 1000000,
        "num_compactors": 4,
        "compact_l_0_on_close": false,
        "lmax_compaction": false,
        "zstd_compression_level": 1,
        "verify_value_checksum": false,
        "checksum_verification_mode": 0,
        "detect_conflicts": false,
        "namespace_offset": -1
      }
    }
  • pebble:
    "pebble": {
      "pebble_options": {
        "bytes_per_sync": 524288,
        "disable_wal": false,
        "flush_delay_delete_range": 0,
        "flush_delay_range_key": 0,
        "flush_split_bytes": 4194304,
        "format_major_version": 1,
        "l0_compaction_file_threshold": 500,
        "l0_compaction_threshold": 4,
        "l0_stop_writes_threshold": 12,
        "l_base_max_bytes": 67108864,
        "levels": [
          {
            "block_restart_interval": 16,
            "block_size": 4096,
            "block_size_threshold": 90,
            "compression": 1,
            "filter_type": 0,
            "index_block_size": 4096,
            "target_file_size": 2097152
          }
        ],
        "max_open_files": 1000,
        "mem_table_size": 4194304,
        "mem_table_stop_writes_threshold": 2,
        "read_only": false,
        "wal_bytes_per_sync": 0
      }
    }

To choose the backend during initialization, use the --datastore flag: ipfs-cluster-service init --datastore <backend>.

REST API

No changes.

Pinning Service API

No changes.

IPFS Proxy API

No changes.

Go APIs

No relevant changes.

Other

Docker containers now use pebble as the default datastore backend.


v1.0.4 - 2022-09-26

IPFS Cluster v1.0.4 is a maintenance release addressing a couple of bugs and adding more "state crdt" commands.

One of the bugs has the potential to cause a panic, while a second one can potentially deadlock pinning operations and hang new pinning requests. We recommend that all users upgrade as soon as possible.

List of changes

Breaking changes

There are no breaking changes on this release.

Features
Bug fixes
Other changes

No other changes.

Upgrading notices

Configuration changes

There are no configuration changes for this release.

REST API

No changes.

Pinning Service API

No changes.

IPFS Proxy API

No changes.

Go APIs

No relevant changes.

Other

Nothing.


v1.0.3 - 2022-09-16

IPFS Cluster v1.0.3 is a maintenance release addressing some bugs and bringing some improvements to error handling behavior, as well as a couple of small features.

This release upgrades to the latest libp2p release (v0.22.0).

List of changes

Breaking changes

There are no breaking changes on this release.

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

There are no configuration changes for this release.

REST API

No changes.

Pinning Service API

No changes.

IPFS Proxy API

The IPFS Proxy now intercepts /block/put and /dag/put requests. This happens as follows:

  • The request is first forwarded "as is" to the underlying IPFS daemon, with the ?pin query parameter always set to false.
  • If ?pin=true was set, a cluster pin is triggered for every block and dag object uploaded (reminder that these endpoints accept multipart uploads).
  • Regular IPFS response to the uploads is streamed back to the user.
Go APIs

No relevant changes.

Other

Note that more than 10 failed requests to IPFS will now result in a rate limit of 1 req/s for any request to IPFS. This may cause things to queue up instead of hammering the IPFS daemon with requests that fail. The rate limit is removed as soon as one request succeeds.

Also note that Cluster peers will now not become fully operable on start until IPFS has been detected to be available: no metrics will be sent, no recover operations will be run, etc. Essentially, the Cluster peer will wait for IPFS to be available before starting to do things that need IPFS, rather than doing them right away and hitting failures.


v1.0.2 - 2022-07-06

IPFS Cluster v1.0.2 is a maintenance release with bug fixes and another iteration of the experimental support for the Pinning Services API that was introduced on v1.0.0, including Bearer token authorization support for both the REST and the Pinning Service APIs.

This release includes a security fix in the go-car library. The security issue allows an attacker to crash a cluster peer or cause excessive memory usage when uploading CAR files via the REST API (POST /add?format=car endpoint).

This is also the first release after moving the project from the "ipfs" to the "ipfs-cluster" GitHub organization, which means the project's Go modules have new paths (everything is redirected, though). The Docker builds remain inside the "ipfs" namespace (i.e. docker pull ipfs/ipfs-cluster).

IPFS Cluster is also ready to work with go-ipfs v0.13.0+. We recommend upgrading.

List of changes

Breaking changes
Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

There are no configuration changes for this release.

REST API

The REST API has a new POST /token endpoint, which returns a JSON object with a JWT token (when correctly authenticated).

This token can be used to authenticate using Authorization: Bearer <token> header on subsequent requests.

The token is tied to and verified against a basic authentication user and password, as configured in the basic_auth_credentials field.

At the moment we do not support revocation, expiration or other token options.
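
For illustration, obtaining and using a token with curl might look like this (a sketch assuming the default REST API address and basic-auth credentials user:pass):

# Request a JWT token using the configured basic authentication credentials
curl -s -u user:pass -X POST http://127.0.0.1:9094/token
# -> {"token":"<jwt>"}

# Authenticate subsequent requests with the Bearer header
curl -s -H "Authorization: Bearer <jwt>" http://127.0.0.1:9094/id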

Pinning Service API

The Pinning Service API has a new POST /token endpoint, which returns a JSON object with a JWT token (when correctly authenticated). See the REST API section above.

IPFS Proxy API

No changes to IPFS Proxy API.

Go APIs

All cluster modules have new paths: every instance of "ipfs/ipfs-cluster" should now be "ipfs-cluster/ipfs-cluster".

Other

go-ipfs v0.13.0 introduced some changes to the Block/Put API. IPFS Cluster now uses the cid-format option when performing Block-Puts. We believe the change does not affect adding blocks and that it should still work with previous go-ipfs versions, yet we recommend upgrading to go-ipfs v0.13.1 or later.


v1.0.1 - 2022-05-06

IPFS Cluster v1.0.1 is a maintenance release ironing out some issues and bringing a couple of improvements around observability of cluster performance:

  • We have fixed the ipfscluster_pins metric and added a few new ones that help determine how fast the cluster can pin and add blocks.
  • We have added a new Informer that broadcasts current pinning-queue size, which means we can take this information into account when making allocations, essentially allowing peers with big pinning queues to be relieved by peers with smaller pinning queues.

Please read below for a list of changes and things to watch out for.

List of changes

Breaking changes

Peers running IPFS Cluster v1.0.0 will not be able to read the pin's user-set metadata fields for pins submitted by peers in later versions, since metadata is now stored on a different protobuf field. If this is an issue, all peers in the cluster should upgrade.

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

There is a new pinqueue configuration object inside the informer section on newly initialized configurations:

  "informer": {
    ...
    "pinqueue": {
      "metric_ttl": "30s",
      "weight_bucket_size": 100000
    },
	...

This enables the Pinqueue Informer, which broadcasts metrics containing the size of the pinqueue with the metric weight divided by weight_bucket_size. The new metric is not used for allocations by default, and it needs to be manually added to the allocate_by option in the allocator, usually like:

"allocator": {
   "balanced": {
     "allocate_by": [
       "tag:group",
       "pinqueue",
       "freespace"
     ]
   }
REST API

No changes to REST API.

IPFS Proxy API

No changes to IPFS Proxy API.

Go APIs

No relevant changes to Go APIs, other than the PinTracker interface now requiring a PinQueueSize method.

Other

The following metrics are now available in the Prometheus endpoint when enabled:

ipfscluster_pins_ipfs_pins gauge
ipfscluster_pins_pin_add counter
ipfscluster_pins_pin_add_errors counter
ipfscluster_blocks_put counter
ipfscluster_blocks_added_size counter
ipfscluster_blocks_added counter
ipfscluster_blocks_put_error counter

The following metrics were converted from counter to gauge:

ipfscluster_pins_pin_queued
ipfscluster_pins_pinning
ipfscluster_pins_pin_error

Peers that report freespace as 0 and which use this metric to allocate pins will no longer be available for allocations (they stop broadcasting this metric). This means that setting StorageMax to 0 on IPFS effectively prevents any pins from being explicitly allocated to a peer (that is, when replication_factor != everywhere).


v1.0.0 - 2022-04-22

IPFS Cluster v1.0.0 is a major release that signals that this project has reached maturity and is able to perform and scale in production environments (50+ million pins and 20 nodes).

This is a breaking release, v1.0.0 cluster peers are not compatible with previous cluster peers as we have bumped the RPC protocol version (which had remained unchanged since 0.12.0).

This release's major change is the switch to using streaming RPC endpoints for several RPC methods (listing pins, listing statuses, listing peers, adding blocks), which we added support for in go-libp2p-gorpc.

This causes major impact on two areas:

  • Memory consumption with very large pinsets: before, listing all the pins on the HTTP API required loading all the pins in the pinset into memory, then responding with a json-array containing the full pinset. When working at large scale with multimillion pinsets, this caused large memory usage spikes (whenever the full pinset was needed anywhere). Streaming RPC means components no longer need to send requests or responses in a single large collection (a json array), but can individually stream items end-to-end, without having to load-all and store in memory while the request is being handled.

  • Adding via cluster peers: before, when adding content to IPFS through a Cluster peer, it would chunk the content and send every individual chunk to the cluster peers supposed to store it, and each of those would send every block to IPFS individually, resulting in a separate block/put request against the IPFS HTTP API per block. Files with a dozen chunks already showed that performance was not great. With streaming RPC, we can set up a single libp2p stream from the adding node to the destinations, and they can stream the blocks with a single block/put multipart request directly into IPFS. We recommend using go-ipfs >= 0.12.0 for this.

These changes affect how cluster peers talk to each other and also how API endpoints that responded with array collections behave (they now stream json objects).

This release additionally includes the first version of the experimental IPFS Pinning Service API for IPFS Cluster. This API runs along the existing HTTP REST API and IPFS Proxy API and allows sending and querying pins from Cluster using standard Pinning-service clients (works well with go-ipfs's ipfs pin remote). Note that it does not support authentication nor tracking different requests for the same CID (request ID is the CID).

The full list of additional features and bug fixes can be found below.

List of changes

Features
Bug fixes
Other changes

Upgrading notices

As mentioned, all peers in the cluster should upgrade; things will heavily break otherwise.

Configuration changes

There are no breaking configuration changes. Note, however:

  • A pin_only_on_trusted_peers boolean option that defaults to false has been added to the cluster configuration section. When enabled, only trusted peers will be considered when allocating pins.
  • A new pinsvcapi section is now added to the api configuration section for newly-initialized configurations. When this section is present, the experimental Pinning Services API is launched. See the docs for the different options. Most of the code/options are similar to the restapi section as both share most of the code.
REST API
Streaming responses

The following endpoint responses have changed:

  • /allocations returned a json array of api.Pin object and now it will stream them.
  • /pins returned a json array of api.PinInfo objects and now it will stream them.
  • /recover returned a json array of api.PinInfo objects and now it will stream them.

Failures on streaming endpoints are captured in request Trailer headers (same as /add), in particular with an X-Stream-Error trailer. Note that the X-Stream-Error trailer may appear even when no error happened (it carries an empty value in that case).
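
As a sketch (assuming the default REST API address), the streamed responses and the trailer can be observed with curl; --raw keeps the chunked encoding so any trailing X-Stream-Error header is visible at the end of the output:

# Objects are streamed one by one instead of arriving as a single JSON array
curl -sN http://127.0.0.1:9094/allocations

# Show the raw chunked response, including any trailer section at the end
curl -s --raw -i http://127.0.0.1:9094/allocations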

JSON-encoding of CIDs

As of v1.0.0, every "cid" as returned inside any REST API object will no longer encode as:

{ "/" : "<cid>" }

but instead just as "cid".

Add endpoint changes

There are two small backwards-compatible changes to the /add endpoint, illustrated in the sketch after this list:

  • A ?no-pin query option has been added. In this case, cluster will not pin the content after having added it.
  • The output objects returned when adding (i.e. the ones containing the CIDs of the files) now include an Allocations field, with an array of peer IDs corresponding to the peers on which the blocks were added.
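
A hedged sketch of both changes together (default REST API address assumed; the JSON shown is illustrative, not verbatim output):

# Add a file but skip the implicit cluster pin afterwards
curl -s -X POST -F file=@photo.jpg "http://127.0.0.1:9094/add?no-pin=true"
# Output objects now carry the peers that received the blocks, e.g.:
# {"name":"photo.jpg","cid":"bafy...","allocations":["12D3Koo..."]}
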
Pin object changes

Pin objects (returned from /allocations, POST /pins, etc.) will now encode the Type as a human-readable string and not as a number, as previously happened.

PinInfo object changes

PinInfo/GlobalPinInfo objects (returned from the /pins and /recover endpoints) now include additional fields (which before were only accessible via /allocations):

  • allocations: an array of peer IDs indicating the pin allocations.
  • origins: the list of origins associated with this pin.
  • metadata: an object with pin metadata.
  • created: date when the pin was added to the cluster.
  • ipfs_peer_id: IPFS peer ID to which the object is pinned (when known).
  • ipfs_peer_addresses: IPFS addresses of the IPFS daemon to which the object is pinned (when known).
Pinning Services API

This API now exists. It is experimental and does not support authentication.

IPFS Proxy API

The /add?pin=false call will no longer trigger a cluster pin followed by an unpin.

The /pin/ls?stream=true query option is now supported.

Go APIs

There have been many changes to different interfaces (i.e. to stream out collections over channels rather than return slices).

We have also taken the opportunity to get rid of pointers to objects in many places. Using them was a misstep that made cluster perform many more allocations than it should, causing more GC pressure. In any case, it was not good Go practice to use referenced types all around for objects that are not supposed to be mutated.

Other

The following metrics are now available in the Prometheus endpoint when enabled:

ipfscluster_pins
ipfscluster_pins_pin_queued
ipfscluster_pins_pin_error
ipfscluster_pins_pinning

v0.14.5 - 2022-02-16

This is a minor IPFS Cluster release. The main feature is the upgrade of the go-ds-crdt library which now supports resuming the processing of CRDT-DAGs that were not fully synced.

On first start on an updated node, the CRDT library will have to re-walk the full CRDT-DAG. This happens in the background.

For the full list of feature and bugfixes, see list below.

List of changes

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

Configuration is backwards compatible with previous versions.

The consensus/crdt section has a new option repair_interval which is set by default to 1h and controls how often we check if the crdt DAG needs to be reprocessed (i.e. when it becomes marked dirty due to an error). Setting it to 0 disables repairs.

The ipfs_connector/ipfshttp section has a new option informer_trigger_interval which defaults to 0 (disabled). This controls whether cluster peers issue a metrics update after every certain number of pins (i.e. for fine-grained control of freespace after pins happen).

The monitor/pubsubmon/failure_threshold option no longer has any effect.

REST API

The /pins (StatusAll) endpoint now takes a ?cid=cid1,cid2 option which allows filtering the resulting list to specific CIDs.

Go APIs

We added a LatestForPeer() method to the PeerMonitor interface which returns the latest metric of a certain type received by a peer.

Other

Before, adding content using the local=true option would add the blocks to the peer receiving the request and then allocate the pin normally (i.e. to the peers with the most free space available, which may or may not be the local peer). Now, "local add" requests will always allocate the pin to the local peer, since it already has the content.

Before, we would send a freespace metric update every 10 pins. Now we no longer do so and rely on the normal metric interval, unless informer_trigger_interval is configured.

The CRDT library will create a database of processed DAG blocks during the first start on an upgraded node. This happens in the background and should only happen once. Peers with very large CRDT-DAGs may experience increased disk usage during this time.


v0.14.4 - 2022-01-11

This is a minor IPFS Cluster release with additional performance improvements.

On one side, we have improved branch pruning when syncing CRDT dags. This should improve the time it takes for a peer to sync the pinset when joining a high-activity cluster, where branching happens often.

On the other side, we have improved how Cluster finds and re-triggers pinning operations for items that failed to pin previously, heavily reducing the pressure on the IPFS daemon and speeding up the operation.

List of changes

Features

No new features.

Bug fixes
Other changes

Upgrading notices

Configuration changes

No changes.

REST API

The /pins/recover (RecoverAll) endpoint now only returns items that have been re-queued for pinning (because they were in error). Before, it returned all items in the state (similar to the /pins endpoint, but at a huge performance cost with large pinsets).

Go APIs

No changes.

Other

ipfs-cluster-ctl recover only returns items that have been re-queued (see REST APIs above).


v0.14.3 - 2022-01-03

This is a minor IPFS Cluster release with some performance improvements and bug fixes.

First, we have improved the speed at which the pinset can be listed (around 3x). This is important for very large clusters with millions of items on the pinset. Cluster peers regularly check on all items in the pinset (i.e. to re-pin failed items or remove expired pins), so this means these operations will consume less resources and complete faster.

Second, we have added additional options to the state import command to provide more flexibility when migrating content to a new cluster. For example, allocations and replication factors for all pins can be replaced on import. One use case is converting a cluster with "replicate-everywhere" pins into one with pins allocated to a particular set of peers (as a prior step to scaling up the cluster by adding more peers).

Among the bugs fixed, the worst was one causing errors when deserializing some pins from their JSON representation. This happened when pins had the Origins property set.

List of changes

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

No changes.

REST API

No changes.

Go APIs

No changes.

Other

ipfs-cluster-service state import has new rmin, rmax and allocations flags. See ipfs-cluster-service state import --help for more information.
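
A hypothetical invocation as an orientation (the flag value syntax here is assumed; ipfs-cluster-service state import --help has the authoritative usage):

# Re-import a pinset, replacing replication factors and allocations for all pins
ipfs-cluster-service state import --rmin 2 --rmax 3 \
  --allocations <peerID1>,<peerID2> exported-state.json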


v0.14.2 - 2021-12-09

This is a minor IPFS Cluster release focused on providing features for production Cluster deployments with very high pin ingestion rates.

It addresses two important questions from our users:

  • How to ensure that my pins are automatically pinned on my cluster peers around the world in a balanced fashion.
  • How to ensure that items that cannot be pinned do not delay the pinning of items that are available.

We address the first of these questions by introducing an improved allocator and user-defined "tag" metrics. Each cluster peer can now be tagged, and the allocator can be configured to pin items so that they are distributed among tags. For example, a cluster peer can be tagged with region: us, availability-zone: us-west and so on. Assuming a cluster made of 6 peers, 2 per region and one per availability zone, the allocator would ensure that a pin with replication factor = 3 lands in the 3 different regions, in whichever availability zone of each pair has the most available space.

The second question is addressed by enriching pin metadata. Pins will now store the time that they were added to the cluster. The pin tracker will additionally keep track of how many times an operation has been retried. Using these two items, we can prioritize pinning of items that are new and have not repeatedly failed to pin. The max age and max number of retries used to prioritize a pin can be controlled in the configuration.

Please see the information below for more details about how to make use and configure these new features.

List of changes

Features
Bug fixes
Other changes

Upgrading notices

Despite the new features, cluster peers should behave exactly as before when using the previous configuration, and should interact well with peers on the previous version. However, for the new features to take full effect, all peers should be upgraded to this release.

Configuration changes

The pintracker/stateless configuration section gets 2 new options, which take defaults when unset:

  • priority_pin_max_age, with a default of 24h, and
  • priority_pin_max_retries, with a default of 5.

A new informer type called "tags" now exists. By default, it has a subsection in the informer configuration section with the following defaults:

   "informer": {
     "disk": {...}
     },
     "tags": {
       "metric_ttl": "30s",
       "tags": {
         "group": "default"
       }
     }
   },

This enables the use of the "tags" informer. The tags configuration key in it allows adding user-defined tags to this peer. For every tag, a new metric is broadcast to other peers in the cluster carrying the tag information. By default, peers broadcast a metric of type "tag:group" and value "default" (ipfs-cluster-ctl health metrics can be used to see what metrics a cluster peer knows about). These tag metrics can be used to set up advanced allocation strategies using the new "balanced" allocator described below.

A new allocator top-level section with a balanced configuration sub-section can now be used to set up the new allocator. It has the following defaults on new configurations:

  "allocator": {
    "balanced": {
      "allocate_by": [
        "tag:group",
        "freespace"
      ]
    }
  },

When the allocator is NOT defined (legacy configurations), the allocate_by option is only set to ["freespace"], to keep backwards compatibility (the tags allocator with a "group:default" tag will not be present).

This asks the allocator to allocate pins first by the value of the "group" tag-metric, as produced by the tag informer, and then by the value of the "freespace" metric. Allocating solely by the "freespace" is the equivalent of the cluster behavior on previous versions. This default assumes the default informer/tags configuration section mentioned above is present.

REST API

The objects returned by the /pins endpoints ("GlobalPinInfo" types) now include an additional attempt_count property, that counts how many times the pin or unpin operation was retried, and a priority_pin boolean property, that indicates whether the ongoing pin operation was last queued in the priority queue or not.

The objects returned by the /allocations endpoints ("Pin" types) now include an additional timestamp property.

The objects returned by the /monitor/metrics/<metric> endpoint now include a weight property, which is used to sort metrics (before they were sorted by parsing the value as decimal number).

The REST API client will now support QUIC for libp2p requests whenever not using private networks.

Go APIs

There are no relevant changes other than the additional fields in the objects as mentioned by the section right above.

Other

Nothing.


v0.14.1 - 2021-08-16

This is an IPFS Cluster maintenance release addressing some issues and bringing a couple of tweaks. The main fix is an issue that would prevent cluster peers with very large pinsets (in the millions of objects) from fully starting quickly.

This release is fully compatible with the previous release.

List of changes

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

No changes. Configurations are fully backwards compatible.

REST API

Paths ending with a / (slash) were being automatically redirected to the path without the slash using a 301 code (permanent redirect). However, most clients do not respect the method name when following 301-redirects, thus a POST request to /allocations/ would become a GET request to /allocations.

We have now set these redirects to use 307 instead (temporary redirect). Clients do keep the HTTP method when following 307 redirects.

Go APIs

The parameters object to the RestAPI client WaitFor function now has a Limit field. This allows returning as soon as a given number of peers have reached the target status. When unset, the previous behavior is maintained.

Other

Per the WaitFor modification above, ipfs-cluster-ctl now sets the limit to the replication-factor-min value on pin/add commands when using the --wait flag. These will potentially return earlier.


v0.14.0 - 2021-07-09

This IPFS Cluster release brings a few features to improve cluster operations at scale (pinsets over 100k items), along with some bug fixes.

This release is not fully compatible with previous ones. Nodes on different versions will be unable to parse metrics from each other (thus peers ls will not report peers on different versions) and the StatusAll RPC method (a.k.a ipfs-cluster-ctl status or /pins API endpoint) will not work. Hence the minor version bump. Please upgrade all of your cluster peers.

This release brings a few key improvements to the cluster state storage: badger will automatically perform garbage collection at regular intervals, resolving a long-standing issue of badger using up to 100x the actually needed space. Badger GC will automatically be enabled with defaults, which will result in increased disk I/O 15 minutes after starting the peer if there is a lot to GC. Make sure to disable GC manually if increased disk I/O during GC may affect your service upon upgrade. In our tests the impact was soft enough to consider this a safe default, though in environments with very constrained disk I/O it will surely be noticed, at least during the first GC cycle, since the datastore was never GC'ed before.

Badger is the datastore we are most familiar with and the most scalable choice (chosen by both IPFS and Filecoin). However, badger's behavior and GC needs may not suit everyone, or more downsides may be discovered in the future. For those cases, we have added the option to run with a LevelDB backend as an alternative. LevelDB does not need GC and auto-compacts. It should also scale pretty well for most cases, though we have not tested or compared it against badger with very large pinsets. The backend can be configured during daemon init, along with the consensus component, using a new --datastore flag. Like the default Badger backend, the new LevelDB backend exposes all LevelDB internal configuration options.

Additionally, operators handling very large clusters may have noticed that checking the status of pinning and queued items (ipfs-cluster-ctl status --filter pinning,queued) took very long, as it listed and iterated over the full ipfs pinset. We have added some fixes so that time is saved when filtering for items that do not require listing the full state.

Finally, cluster pins now have an origins option, which allows submitters to provide hints for providers of the content. Cluster will instruct IPFS to connect to the origins of a pin before pinning. Note that for the moment ipfs will stay connected to those peers permanently.

Please read carefully through the notes below, as the release includes subtle changes in configuration, defaults and behaviors which may in some cases affect you (although probably will not).

List of changes

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

Configurations are fully backwards compatible.

The cluster.disable_repinning setting now defaults to true on new generated configurations.

The datastore.badger section now includes settings to control (and disable) automatic GC:

   "badger": {
      "gc_discard_ratio": 0.2,
      "gc_interval": "15m0s",
      "gc_sleep": "10s",
	  ...
   }

When not present, these settings take their defaults, so GC will automatically be enabled on nodes that upgrade keeping their previous configurations.

GC can be disabled by setting gc_interval to "0s". A GC cycle is made by multiple GC rounds. Setting gc_sleep to "0s" will result in a single GC round.

Finally, nodes initializing with --datastore leveldb will obtain a datastore.leveldb section (instead of a badger one). Configurations can only include one datastore section, either badger or leveldb. Currently we offer no way to convert states between the two datastore backends.

REST API

Pin options (POST /add and POST /pins endpoints) now take an origins query parameter as an additional pin option. It can be set to a comma-separated list of full peer multiaddresses to which IPFS can connect to fetch the content. Only the first 10 multiaddresses will be taken into account.
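
For example, a sketch of a pin request with origin hints (default REST API address assumed; CID and peer ID are placeholders):

# Pin a CID, hinting at a peer that already provides the content
curl -s -X POST "http://127.0.0.1:9094/pins/<cid>?origins=/ip4/1.2.3.4/tcp/4001/p2p/<peerID>"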

The response of the POST /add?format=car endpoint when adding a CAR file (a single pin progress object) always had the "size" field set to 0. This is now set to the unixfs FileSize property when the root of the added CAR corresponds to a unixfs node of type File. In any other case, it stays at 0.

The GET /pins endpoint reports pin status for all pins in the pinset by default and optionally takes a filter query param. Before, it would include a full GlobalPinInfo object for a pin as long as the status of the CID in one of the peers matched the filter, so the object could include statuses for other cluster peers for that CID which did not match the filter. Starting on this version, the returned statuses will be fully limited to those of the peers matching the filter.

On the same endpoint, a new unexpectedly_unpinned pin status has been added, which can also be used as a filter. Previously, pins in this state were reported as pin_error. Note that the error filter no longer matches the unexpectedly_unpinned status as it did before; it should be queried directly (or without any filter).

Go APIs

The PinTracker interface has been updated so that the StatusAll method takes a TrackerStatus filter. The stateless pintracker implementation has been updated accordingly.

Other

Docker containers now support IPFS_CLUSTER_DATASTORE to set the datastore type during initialization (similar to IPFS_CLUSTER_CONSENSUS).
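
For example (a sketch; the image name is the official one mentioned elsewhere in this changelog):

# Initialize and run a containerized peer with the leveldb backend and crdt consensus
docker run -e IPFS_CLUSTER_DATASTORE=leveldb -e IPFS_CLUSTER_CONSENSUS=crdt ipfs/ipfs-cluster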

Due to the deprecation of the multicodecs repository, we no longer serialize metrics by prepending the msgpack multicodec code to the bytes and instead encode the metrics directly. This means older peers will not know how to deserialize metrics from newer peers, and vice versa. While peers will keep working (in particular, follower peers will keep tracking content, etc.), peers will not include other peers with different versions in their peerset, and many operations that rely on it will not work as intended or will show partial views.


v0.13.3 - 2021-05-14

IPFS Cluster v0.13.3 brings two new features: CAR file imports and crdt-commit batching.

The first one allows uploading CAR files directly to the Cluster using the existing Add endpoint with a new option set: /add?format=car. The endpoint remains fully backwards compatible. CAR files are a simple wrapper around a collection of IPFS blocks making up a DAG. Thus, this enables arbitrary DAG imports directly through the Cluster REST API, taking advantage of the rest of its features like basic-auth access control, the libp2p endpoint and multipeer block-put when adding.

The second feature unlocks large scalability improvements for pin ingestion with the crdt "consensus" component. By default, each pin or unpin request results in an insertion to the crdt-datastore-DAG that maintains and syncs the state between nodes, creating a new root. Batching allows grouping multiple updates in a single crdt DAG-node. This reduces the number of broadcasts, the depth and breadth of the DAG, and the syncing times when the Cluster is ingesting many pins, removing most of the overhead in the process. Batches are automatically committed when reaching a certain age or a certain size, both configurable.

Additionally, improvements to timeout behaviors have been introduced.

For more details, check the list below and the latest documentation on the website.

List of changes

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

The crdt section of the configuration now has a batching subsection which controls batching settings:

"batching": {
    "max_batch_size": 0,
    "max_batch_age": "0s"
}

An additional, hidden max_queue_size option exists, defaulting to 50000. The meaning of these options is documented in the reference (website) and the code.

Batching is disabled by default. To be enabled, both max_batch_size and max_batch_age need to be set to positive values.
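
For example, a sketch of an enabled batching configuration (the values are illustrative, not recommendations):

"batching": {
    "max_batch_size": 100,
    "max_batch_age": "1m"
}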

The cluster section of the configuration has a new dial_peer_timeout option, which defaults to "3s". It controls the default dial timeout when libp2p is attempting to open a connection to a peer.

REST API

The /add endpoint now understands a new query parameter ?format=, which can be set to unixfs (default), or car (when uploading a CAR file). CAR files should have a single root. Additional parts in multipart uploads for CAR files are ignored.
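
A sketch of a CAR upload with curl (default REST API address assumed):

# Import a CAR file (single root) through the existing Add endpoint
curl -s -X POST -F file=@dag.car "http://127.0.0.1:9094/add?format=car"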

Go APIs

The AddParams object that controls API options for the Add endpoint has been updated with the new Format option.

Other

Nothing.


v0.13.2 - 2021-04-06

IPFS Cluster v0.13.2 is a maintenance release addressing bugs and adding a couple of small features. It is fully compatible with the previous release.

List of changes

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

No configuration changes in this release.

REST API

The REST API server and clients will no longer negotiate the secio security transport. This transport already had the lowest priority and should not have been used. This may, however, break 3rd-party clients which only support secio.

Go APIs

Nothing.

Other

Nothing.


v0.13.1 - 2021-01-14

IPFS Cluster v0.13.1 is a maintenance release with some bugfixes and updated dependencies. It should be fully backwards compatible.

This release deprecates secio (as required by libp2p), but this was already the lowest priority security transport and tls would have been used by default. The new noise transport becomes the preferred option.

List of changes

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

The new default for ipfs_http.pin_timeout is 2m. This is the time that needs to pass for a pin operation to error and it starts counting from the last block pinned.

REST API

A new /health/alerts endpoint exists to support ipfs-cluster-ctl health alerts.

Go APIs

The definition of types.Alert has changed. This type was not exposed to the outside before. RPC endpoints affected are only used locally.

Other

Nothing.


v0.13.0 - 2020-05-19

IPFS Cluster v0.13.0 provides many improvements and bugfixes on multiple fronts.

First, this release takes advantage of all the major features that have landed in libp2p and IPFS (via ipfs-lite) during the last few months, including the dual-DHT and faster block exchange with Bitswap. On the downside, QUIC support for private networks has been temporarily dropped, which means we cannot use that transport for Cluster peers anymore. We have disabled QUIC for the time being, until private network support is re-added.

Secondly, go-ds-crdt has received major improvements since the last version, resolving some bugs and increasing performance. Because of this, cluster peers in CRDT mode running older versions will be unable to process updates sent by peers running the newer versions. This means, for example, that followers on v0.12.1 and earlier will be unable to receive updates from trusted peers on v0.13.0 and later. However, peers running v0.13.0 will still understand updates sent from older peers.

Finally, we have resolved some bugs and added a few very useful features, which are detailed in the list below. We recommend that everyone upgrade as soon as possible for a swifter experience with IPFS Cluster.

List of changes

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes
  • The default options in the datastore/badger/badger_options have changed and should reduce memory usage significantly:
    • truncate is set to true.
    • value_log_loading_mode is set to 0 (FileIO).
    • max_table_size is set to 16777216.
  • api/ipfsproxy/listen_multiaddress, api/rest/http_listen_multiaddress and api/rest/libp2p_listen_multiaddress now support an array of multiaddresses rather than a single one (a single one still works). This allows, for example, listening on both IPv6 and IPv4 interfaces.
REST API

The POST /pins/{hash} endpoint (pin add) now supports a mode query parameter that can be set to recursive or direct. Responses including Pin objects (GET /allocations, pin ls) include a mode field set accordingly.

The IPFS proxy /pin/add endpoint now supports recursive=false for direct pins.
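
A sketch of both variants (default REST API and proxy addresses, 9094 and 9095, assumed; the CID is a placeholder):

# Direct (non-recursive) pin via the REST API
curl -s -X POST "http://127.0.0.1:9094/pins/<cid>?mode=direct"

# The same through the IPFS proxy endpoint
curl -s -X POST "http://127.0.0.1:9095/api/v0/pin/add?arg=<cid>&recursive=false"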

The /pins endpoint now returns GlobalPinInfo objects that include a name field for the pin name. The same objects no longer embed redundant information for each peer in the peer_map: cid and peer are omitted.

Go APIs

The ipfscluster.IPFSConnector component signature for PinLsCid has changed and receives a full api.Pin object, rather than a Cid. The RPC endpoint has changed accordingly, but since this is a private endpoint, it does not affect interoperability between peers.

The api.GlobalPinInfo type now maps every peer to a new api.PinInfoShort type, that does not include any redundant information (Cid, Peer), as the PinInfo type did. The Cid is available as a top-level field. The Peer corresponds to the map key. A new Name top-level field contains the Pin Name.

The api.PinInfo type also includes a new Name field.

Other

From this release, IPFS Cluster peers running in different minor versions will remain compatible at the RPC layer (before, all cluster peers had to be running on precisely the same minor version to be able to communicate). This means that v0.13.0 peers are still compatible with v0.12.x peers (with the caveat for CRDT-peers mentioned at the top). ipfs-cluster-ctl --enc=json id shows information about the RPC protocol used.

Since the QUIC libp2p transport does not support private networks at this point, it has been disabled, even though we keep the QUIC endpoint among the default listeners.


v0.12.1 - 2019-12-24

IPFS Cluster v0.12.1 is a maintenance release fixing issues on ipfs-cluster-follow.

List of changes

Bug fixes

v0.12.0 - 2019-12-20

IPFS Cluster v0.12.0 brings many useful features and makes it very easy to create and participate on collaborative clusters.

The new ipfs-cluster-follow command provides a very simple way of joining one or several clusters as a follower (a peer without permissions to pin/unpin anything). ipfs-cluster-follow peers are initialized using a configuration "template" distributed over IPFS or HTTP, which is then optimized and secured.

ipfs-cluster-follow is limited in scope and attempts to be very straightforward to use. ipfs-cluster-service continues to offer power users the full set of options for running peers of all kinds (followers or not).

We have additionally added many new features: pinning with an expiration date, the ability to trigger garbage collection on IPFS daemons, improvements to NAT traversal and connectivity, etc.

Users planning to setup public collaborative clusters should upgrade to this release, which improves the user experience and comes with documentation on how to setup and join these clusters (https://ipfscluster.io/documentation/collaborative).

List of changes

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes
  • cluster section:
    • A new peer_addresses key allows specifying additional peer addresses in the configuration (similar to the peerstore file). These are treated as libp2p bootstrap addresses (not to be confused with the Raft bootstrap process). This setting is mostly useful for CRDT collaborative clusters, as template configurations can be distributed including bootstrap peers (usually the same as trusted peers). The values are the full multiaddress of these peers: /ip4/x.x.x.x/tcp/1234/p2p/Qmxxx....
    • listen_multiaddress can now be set to be an array providing multiple listen multiaddresses, the new defaults being /tcp/9096 and /udp/9096/quic.
    • enable_relay_hop (true by default), lets the cluster peer act as a relay for other cluster peers behind NATs. This is only for the Cluster network. As a reminder, while this setting is problematic on IPFS (due to the amount of traffic the HOP peers start relaying), the cluster-peers networks are smaller and do not move huge amounts of content around.
    • The ipfs_sync_interval option disappears as the stateless tracker does not keep a state that can lose synchronization with IPFS.
  • ipfshttp section:
    • A new repogc_timeout key specifies the timeout for garbage collection operations on IPFS. It is set to 24h by default.
REST API

The pin/add and add endpoints support two new query parameters to indicate pin expirations: expire-at (with an expected value in RFC3339 format) and expire-in (with an expected value in Go's time format, i.e. 12h). expire-at has preference.
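
For example (a sketch assuming the default REST API address; the CID is a placeholder):

# Pin a CID that expires in 72 hours
curl -s -X POST "http://127.0.0.1:9094/pins/<cid>?expire-in=72h"

# Or set an absolute RFC3339 expiration date (expire-at has preference)
curl -s -X POST "http://127.0.0.1:9094/pins/<cid>?expire-at=2020-06-01T00:00:00Z"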

A new /ipfs/gc endpoint has been added to trigger GC in the IPFS daemons attached to Cluster peers. It supports the local parameter to limit the operation to the local peer.

Go APIs

There are few changes to Go APIs. The RepoGC and RepoGCLocal methods have been added, the mappintracker module has been removed and the stateless module has changed the signature of the constructor.

Other

The IPFS Proxy now intercepts the /repo/gc endpoint and triggers a cluster-wide GC operation.

The ipfs-cluster-follow application is an easy-to-use way to run one or several cluster peers in follower mode using remote configuration templates. It is fully independent from ipfs-cluster-service and ipfs-cluster-ctl and acts as both a peer (run subcommand) and a client (list subcommand). The purpose is to facilitate IPFS Cluster usage without having to deal with the configuration, flags, etc.

That said, the configuration layout and folder is the same for both ipfs-cluster-service and ipfs-cluster-follow and they can be run one in place of the other. In the same way, remote-source configurations usually used for ipfs-cluster-follow can be replaced with local ones usually used by ipfs-cluster-service.

The removal of the map pintracker has resulted in a simplification of some operations. StateSync (run regularly every state_sync_interval) no longer triggers repinnings, but only checks for pin expirations. RecoverAllLocal (run regularly every pin_recover_interval) will now trigger repinnings when necessary (i.e. when things that were expected to be on IPFS are not). On very large pinsets, this operation can trigger a memory spike as the full recursive pinset from IPFS is requested and loaded into memory (before, this happened during StateSync).


v0.11.0 - 2019-09-13

Summary

IPFS Cluster v0.11.0 is the biggest release in the project's history. Its main feature is the introduction of the new CRDT "consensus" component. Leveraging Pubsub, Bitswap and the DHT and using CRDTs, cluster peers can track the global pinset without needing to be online or worrying about the rest of the peers as it happens with the original Raft approach.

The CRDT component brings a lot of features with it, like RPC authorization, which effectively lets cluster peers run in clusters where only a trusted subset of nodes can access peer endpoints and make modifications to the pinsets.

We have additionally taken many steps to improve the configuration management of peers, separating the peer identity from the rest of the configuration and allowing the use of remote configurations fetched from an HTTP URL (which may well be the local IPFS gateway). This allows cluster administrators to provide the configurations needed for any peers to join a cluster as followers.

The CRDT arrival incorporates a large number of improvements in peerset management, bootstrapping, connection management and auto-recovery of peers after network disconnections. We have improved the peer monitoring system, added support for efficient Pin-Update-based pinning, reworked timeout control for pinning and fixed a number of annoying bugs.

This release is mostly backwards compatible with the previous one and clusters should keep working with the same configurations, but users should have a look to the sections below and read the updated documentation, as a number of changes have been introduced to support both consensus components.

Consensus selection happens during initialization of the configuration (see configuration changes below). Migration of the pinset is necessary by doing state export (with Raft configured), followed by state import (with CRDT configured). Note that all peers should be configured with the same consensus type.

List of changes

Features
Bug fixes
Other changes

Upgrading notices

Configuration changes

This release introduces a number of backwards-compatible configuration changes:

  • The service.json file no longer includes ID and PrivateKey, which are now part of an identity.json file. However, things should work as before if they do. Running ipfs-cluster-service daemon on an older configuration will automatically write an identity.json file with the old credentials so that things do not break when the compatibility hack is removed.

  • The service.json can use a new single top-level source field which can be set to an HTTP url pointing to a full service.json. When present, this will be read and used when starting the daemon. ipfs-cluster-service init http://url produces this type of "remote configuration" file.

  • cluster section:

    • A new, hidden follower_mode option has been introduced in the main cluster configuration section. When set, the cluster peer will provide clear errors when pinning or unpinning. This is a UI feature. The capacity of a cluster peer to pin/unpin depends on whether it is trusted by other peers, not on setting this hidden option.
    • A new pin_recover_interval option controls how often pins in error states are retried.
    • A new mdns_interval controls the time between mDNS broadcasts to discover other peers in the network. Setting it to 0 disables mDNS altogether (default is 10 seconds).
    • A new connection_manager object can be used to limit the number of connections kept by the libp2p host:
"connection_manager": {
    "high_water": 400,
    "low_water": 100,
    "grace_period": "2m0s"
},
  • consensus section:

    • Only one configuration object is allowed inside the consensus section, and it must be either the crdt or the raft one. The presence of one or another is used to autoselect the consensus component to be used when running the daemon or performing ipfs-cluster-service state operations. ipfs-cluster-service init receives an optional --consensus flag to select which one to produce. By default it is the crdt.
  • ipfs_connector/ipfshttp section:

    • The pin_timeout in the ipfshttp section is now starting from the last block received. Thus it allows more flexibility for things which are pinning very slowly, but still pinning.
    • The pin_method option has been removed, as go-ipfs does not do a pin-global-lock anymore. Therefore pin add will be called directly, can be called multiple times in parallel and should be faster than the deprecated refs -r way.
    • The ipfshttp section has a new (hidden) unpin_disable option (boolean). The component will refuse to unpin anything from IPFS when enabled. It can be used as a failsafe option to make sure cluster peers never unpin content.
  • datastore section:

    • The configuration has a new datastore/badger section, which is relevant when using the crdt consensus component. It allows full control of the Badger configuration, which is particularly important when running on systems with low memory:
  "datastore": {
    "badger": {
      "badger_options": {
        "dir": "",
        "value_dir": "",
        "sync_writes": true,
        "table_loading_mode": 2,
        "value_log_loading_mode": 2,
        "num_versions_to_keep": 1,
        "max_table_size": 67108864,
        "level_size_multiplier": 10,
        "max_levels": 7,
        "value_threshold": 32,
        "num_memtables": 5,
        "num_level_zero_tables": 5,
        "num_level_zero_tables_stall": 10,
        "level_one_size": 268435456,
        "value_log_file_size": 1073741823,
        "value_log_max_entries": 1000000,
        "num_compactors": 2,
        "compact_l_0_on_close": true,
        "read_only": false,
        "truncate": false
      }
    }
  }
  • pin_tracker/maptracker section:

    • The max_pin_queue_size parameter has been hidden for default configurations and the default has been set to 1000000.
  • api/restapi section:

    • A new http_log_file option allows redirecting the REST API logging to a file. Otherwise, it is logged as part of the regular log. Lines follow the Apache Common Log Format (CLF).
REST API

The POST /pins/{cid} and DELETE /pins/{cid} endpoints now return a pin object with a 200 Success code, rather than an empty 204 Accepted response.

Requesting a nonexistent route will now correctly return a JSON object along with the 404 HTTP code, rather than plain text.
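Illustrative requests against a default REST API address (the CID is a placeholder):

    # Returns the pin object with a 200 status code
    curl -X POST "http://127.0.0.1:9094/pins/<cid>"
    # Returns a JSON error object with a 404 status code
    curl "http://127.0.0.1:9094/this-route-does-not-exist"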

Go APIs

There have been some changes to the Go APIs. Applications integrating Cluster directly will be affected by the new signatures of Pin and Unpin, which are sketched after the lists below:

  • The Pin and Unpin methods now return an object of api.Pin type, along with an error.
  • The Pin method takes a CID and PinOptions rather than an api.Pin object wrapping those.
  • A new PinUpdate method has been introduced.

Additionally:

  • The Consensus Component interface has changed to accommodate peer-trust operations.
  • The IPFSConnector Component interface Pin method has changed to take an api.Pin type.
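A rough sketch of the new shape of these calls follows. The type definitions are trimmed-down, illustrative stand-ins for the api types; the exact upstream signatures may differ:

package cluster

import (
	"context"

	cid "github.com/ipfs/go-cid"
)

// PinOptions and Pin are illustrative stand-ins for the api.PinOptions
// and api.Pin types mentioned above, not the upstream definitions.
type PinOptions struct {
	ReplicationFactorMin int
	ReplicationFactorMax int
	Name                 string
}

type Pin struct {
	Cid cid.Cid
	PinOptions
}

// Pinner sketches the reworked methods: Pin takes a CID plus options
// (instead of a wrapping api.Pin object), and both Pin and Unpin return
// the resulting pin object along with an error.
type Pinner interface {
	Pin(ctx context.Context, c cid.Cid, opts PinOptions) (Pin, error)
	Unpin(ctx context.Context, c cid.Cid) (Pin, error)
	PinUpdate(ctx context.Context, from, to cid.Cid) (Pin, error)
}
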
Other
  • The IPFS Proxy now hijacks /api/v0/pin/update requests and performs a Cluster PinUpdate instead.
  • ipfs-cluster-service init now takes a --consensus flag to select between crdt (default) and raft. Depending on the values, the generated configuration will have the relevant sections for each.
  • The Dockerfiles have been updated to:
    • Support the IPFS_CLUSTER_CONSENSUS flag to determine which consensus to use for the automatic init.
    • No longer use IPFS_API environment variable to do a sed replacement on the config, as CLUSTER_IPFSHTTP_NODEMULTIADDRESS is the canonical one to use.
    • No longer use sed replacement to set the APIs listen IPs to 0.0.0.0 automatically, as this can be achieved with environment variables (CLUSTER_RESTAPI_HTTPLISTENMULTIADDRESS and CLUSTER_IPFSPROXY_LISTENMULTIADDRESS) and can be dangerous for containers running in net=host mode.
    • The docker-compose.yml has been updated and simplified to launch a 3-peer test CRDT cluster.
  • Cluster now uses /p2p/ instead of /ipfs/ for libp2p multiaddresses by default, but both protocol IDs are equivalent and interchangeable.
  • Pinning an already-existing pin will re-submit it to the consensus layer in all cases, meaning that pins in error states will start pinning again (before, this was sometimes only possible using recover). Recover remains a broadcast/sync operation to trigger pinning on errored items. As a reminder, pin is a consensus/async operation.

v0.10.1 - 2019-04-10

Summary

This release is a maintenance release with a number of bug fixes and a couple of small features.

List of changes

Features
Bug fixes

Upgrading notices

Configuration changes

There are no configuration changes on this release.

REST API

The /version endpoint now returns a version object with lowercase version key.

Go APIs

There are no changes to the Go APIs.

Other

Since we have switched to Go modules for dependency management, gx is no longer used and the maintenance of Gx dependencies has been dropped. The Makefile has been updated accordingly, and a simple go install ./cmd/... now works.


v0.10.0 - 2019-03-07

Summary

As we get ready to introduce a new CRDT-based "consensus" component to replace Raft, IPFS Cluster 0.10.0 prepares the ground with substantial under-the-hood changes, many performance improvements, and a few very useful features.

First of all, this release requires users to run state upgrade (or start their daemons with ipfs-cluster-service daemon --upgrade). This is the last upgrade in this fashion as we turn to go-datastore-based storage. The next release of IPFS Cluster will not understand or be able to upgrade anything below 0.10.0.

Secondly, we have made some changes to internal types that should greatly improve performance, particularly for calls involving large collections of items (pin ls or status). There are also changes to how the state is serialized, avoiding unnecessary in-memory copies. We have also upgraded the dependency stack, incorporating many fixes from libp2p.

Thirdly, our new great features:

  • ipfs-cluster-ctl pin add/rm now supports IPFS paths (/ipfs/Qmxx.../..., /ipns/Qmxx.../..., /ipld/Qm.../...) which are resolved automatically before pinning.
  • All our configuration values can now be set via environment variables, and these will be reflected when initializing a new configuration file.
  • Pins can now specify a list of "priority allocations". This allows pinning items to specific Cluster peers, overriding the default allocation policy.
  • Finally, the REST API supports adding custom metadata entries as key=value (we will soon add support in ipfs-cluster-ctl). Metadata can be added as query arguments to the Pin or PinPath endpoints: POST /pins/<cid-or-path>?meta-key1=value1&meta-key2=value2... (see the example below).
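For example (the CID, keys and values are placeholders):

    curl -X POST "http://127.0.0.1:9094/pins/<cid>?meta-source=backup&meta-owner=ops"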

Note that on this release we have also removed a lot of backwards-compatibility code for things older than version 0.8.0, which kept things working but printed respective warnings. If you're upgrading from an old release, consider comparing your configuration with the new default one.

List of changes

Features
Bug fixes

Upgrading notices

This release needs a state upgrade before starting the Cluster daemon. Run ipfs-cluster-service state upgrade or run the daemon as ipfs-cluster-service daemon --upgrade. We recommend backing up the ~/.ipfs-cluster folder or exporting the pinset with ipfs-cluster-service state export, as sketched below.
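A sketch of the recommended sequence (the backup path is illustrative):

    # Back up the cluster folder first
    cp -r ~/.ipfs-cluster ~/.ipfs-cluster.bak
    # Upgrade the state in place...
    ipfs-cluster-service state upgrade
    # ...or perform the upgrade while starting the daemon
    ipfs-cluster-service daemon --upgrade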

Configuration changes

The configuration now respects environment variables for all sections. They take the form:

CLUSTER_COMPONENTNAME_KEYNAMEWITHOUTSPACES=value

Environment variables will override service.json configuration options when defined and the Cluster peer is started. ipfs-cluster-service init will reflect the value of any existing environment variables in the new service.json file.
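For example, the restapi section's http_listen_multiaddress option maps to the following variable (the value is illustrative):

    # Overrides the REST API listen address for this run
    CLUSTER_RESTAPI_HTTPLISTENMULTIADDRESS=/ip4/0.0.0.0/tcp/9094 ipfs-cluster-service daemon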

REST API

The main breaking change to the REST API corresponds to the JSON representation of CIDs in response objects:

  • Before: "cid": "Qm...."
  • Now: "cid": { "/": "Qm...."}

The new CID encoding is the default as defined by the cid library. Unfortunately, there is no good solution to keep the previous representation without copying all the objects (an inefficient technique we just removed). The new CID encoding is otherwise aligned with the rest of the stack.

The API also gets two new "Path" endpoints:

  • POST /pins/<ipfs|ipns|ipld>/<path>/... and
  • DELETE /pins/<ipfs|ipns|ipld>/<path>/...

Thus, pinning a CID with POST /pins/<cid> (as before) is equivalent to pinning it with POST /pins/ipfs/<cid>.

The calls will however fail when a non-compliant IPFS path is provided: POST /pins/<cid>/my/path will fail because all paths must start with the /ipfs, /ipns or /ipld components.
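Illustrative requests (the CID and IPNS name are placeholders):

    # These two are equivalent
    curl -X POST "http://127.0.0.1:9094/pins/<cid>"
    curl -X POST "http://127.0.0.1:9094/pins/ipfs/<cid>"
    # IPNS paths are resolved before pinning
    curl -X POST "http://127.0.0.1:9094/pins/ipns/<name>"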

Go APIs

This release introduces many changes to the Go APIs, including the Go REST API client, as we have started returning pointers to objects rather than the objects directly. Pin now takes api.PinOptions instead of separate arguments corresponding to the options, aligning it with the new PinPath and UnpinPath methods.

Other

As pointed out above, 0.10.0's state migration is a required step to be able to use future versions of IPFS Cluster.


v0.9.0 - 2019-02-18

Summary

IPFS Cluster version 0.9.0 comes with one big new feature, OpenCensus support! This allows for the collection of distributed traces and metrics from the IPFS Cluster application as well as supporting libraries. Currently, we support the use of Jaeger as the tracing backend and Prometheus as the metrics backend. Support for other OpenCensus backends will be added as requested by the community.

List of changes

Features
Bug Fixes

No bugs were fixed from the previous release.

Deprecated

Upgrading notices

Configuration changes

No changes to the existing configuration.

There are two new configuration sections with this release:

tracing section

The tracing section configures the use of Jaeger as a tracing backend.

    "tracing": {
      "enable_tracing": false,
      "jaeger_agent_endpoint": "/ip4/0.0.0.0/udp/6831",
      "sampling_prob": 0.3,
      "service_name": "cluster-daemon"
    }
metrics section

The metrics section configures the use of Prometheus as a metrics collector.

    "metrics": {
      "enable_stats": false,
      "prometheus_endpoint": "/ip4/0.0.0.0/tcp/8888",
      "reporting_interval": "2s"
    }
REST API

No changes to the REST API.

Go APIs

The Go APIs had the minor change of having a context.Context parameter added as the first argument to those methods that didn't already have it. This was done to enable the propagation of tracing and metric values.

The following is a list of interfaces and their methods that were affected by this change:

  • Component
    • Shutdown
  • Consensus
    • Ready
    • LogPin
    • LogUnpin
    • AddPeer
    • RmPeer
    • State
    • Leader
    • WaitForSync
    • Clean
    • Peers
  • IpfsConnector
    • ID
    • ConnectSwarm
    • SwarmPeers
    • RepoStat
    • BlockPut
    • BlockGet
  • Peered
    • AddPeer
    • RmPeer
  • PinTracker
    • Track
    • Untrack
    • StatusAll
    • Status
    • SyncAll
    • Sync
    • RecoverAll
    • Recover
  • Informer
    • GetMetric
  • PinAllocator
    • Allocate
  • PeerMonitor
    • LogMetric
    • PublishMetric
    • LatestMetrics
  • state.State
    • Add
    • Rm
    • List
    • Has
    • Get
    • Migrate
  • rest.Client
    • ID
    • Peers
    • PeerAdd
    • PeerRm
    • Add
    • AddMultiFile
    • Pin
    • Unpin
    • Allocations
    • Allocation
    • Status
    • StatusAll
    • Sync
    • SyncAll
    • Recover
    • RecoverAll
    • Version
    • IPFS
    • GetConnectGraph
    • Metrics

These interface changes were also made in the respective implementations. All exported methods of the Cluster type also received these changes.

Other

No other things.


v0.8.0 - 2019-01-16

Summary

IPFS Cluster version 0.8.0 comes with a few useful features and some bugfixes. A significant amount of work has been put into correctly handling CORS in both the REST API and the IPFS Proxy endpoint, fixing some long-standing issues (we hope once and for all).

There has also been heavy work under the hood to separate the IPFS HTTP Connector (the HTTP client to the IPFS daemon) from the IPFS proxy, which is essentially an additional Cluster API. Check the configuration changes section below for more information about how this affects the configuration file.

Finally we have some useful small features:

  • The ipfs-cluster-ctl status --filter option allows listing only those items which are still pinning, queued, in error, etc. You can combine multiple filters. This translates to a new filter query parameter in the /pins API endpoint.
  • The stream-channels=false query parameter for the /add endpoint will let the API buffer the output when adding and return a valid JSON array once done, making this API endpoint behave like a regular, non-streaming one. ipfs-cluster-ctl add --no-stream acts similarly, but buffering on the client side. Note that this will cause in-memory buffering of potentially very large responses when the number of added files is very large, but should be perfectly fine for regular usage.
  • The ipfs-cluster-ctl add --quieter flag now applies to the JSON output too, allowing the user to just get the last added entry JSON object when adding a file, which is always the root hash.

List of changes

Features
Bug fixes

Upgrading notices

This release comes with some configuration changes that are important to notice, even though the peers will start with the same configurations as before.

Configuration changes
ipfsproxy section

This version introduces a separate ipfsproxy API component. This is reflected in the service.json configuration, which now includes a new ipfsproxy subsection under the api section. By default it looks like:

    "ipfsproxy": {
      "node_multiaddress": "/ip4/127.0.0.1/tcp/5001",
      "listen_multiaddress": "/ip4/127.0.0.1/tcp/9095",
      "read_timeout": "0s",
      "read_header_timeout": "5s",
      "write_timeout": "0s",
      "idle_timeout": "1m0s"
   }

We have however added the necessary safeguards to keep backwards compatibility for this release. If the ipfsproxy section is empty, it will be picked up from the ipfshttp section as before. An ugly warning will be printed in this case.

Based on the above, the ipfshttp configuration section loses the proxy-related options. Note that node_multiaddress stays in both component configurations and should likely be the same in most cases, but you can now potentially proxy requests to a different daemon than the one used by the cluster peer.

Additional hidden configuration options to manage custom header extraction from the IPFS daemon (for power users) have been added to the ipfsproxy section but are not shown by default when initializing empty configurations. See the documentation for more details.

restapi section

The introduction of proper CORS handling in the restapi component introduces a number of new keys:

      "cors_allowed_origins": [
        "*"
      ],
      "cors_allowed_methods": [
        "GET"
      ],
      "cors_allowed_headers": [],
      "cors_exposed_headers": [
        "Content-Type",
        "X-Stream-Output",
        "X-Chunked-Output",
        "X-Content-Length"
      ],
      "cors_allow_credentials": true,
      "cors_max_age": "0s"

Note that CORS will be essentially unconfigured when these keys are not defined.

The headers key, which was used before to add some CORS related headers manually, takes a new empty default. We recommend emptying headers from any CORS-related value.

REST API

The REST API is fully backwards compatible:

  • The GET /pins endpoint takes a new ?filter=<filter> option. See ipfs-cluster-ctl status --help for acceptable values.
  • The POST /add endpoint accepts a new ?stream-channels=<true|false> option. By default it is set to true.
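Illustrative requests showing both parameters (the address and file name are placeholders):

    # List only items which are still pinning
    curl "http://127.0.0.1:9094/pins?filter=pinning"
    # Buffer the add output and return a single JSON array
    curl -F file=@photo.jpg "http://127.0.0.1:9094/add?stream-channels=false"
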
Go APIs

The signature for the StatusAll method in the REST client module has changed to include a filter parameter.

There may have been other minimal changes to internal exported Go APIs, but they should not affect users.

Other

Proxy requests which are handled by the Cluster peer (/pin/ls, /pin/add, /pin/rm, /repo/stat and /add) will now attempt to fully mimic ipfs responses to the header level. This is done by triggering CORS pre-flight for every hijacked request along with an occasional regular request to /version to extract other headers (and possibly custom ones).

The practical result is that the proxy now behaves correctly when dropped in place of IPFS into CORS-aware contexts (like the browser).


v0.7.0 - 2018-11-01

Summary

IPFS Cluster version 0.7.0 is a maintenance release that includes a few bugfixes and some small features.

Note that the REST API response format for the /add endpoint has changed. Thus all clients need to be upgraded to deal with the new format. The rest/api/client has been accordingly updated.

List of changes

Features
Bug fixes

Upgrading notices

Configuration changes

The configurations from previous versions are compatible, but a new headers key has been added to the restapi section. By default it gets CORS headers which will allow read-only interaction from any origin.

Additionally, all fields from the main cluster configuration section can now be overwritten with environment variables, e.g. CLUSTER_SECRET or CLUSTER_DISABLEREPINNING.

REST API

The /add endpoint stream now returns different objects, in line with the rest of the API types.

Before:

type AddedOutput struct {
	Error
	Name  string
	Hash  string `json:",omitempty"`
	Bytes int64  `json:",omitempty"`
	Size  string `json:",omitempty"`
}

Now:

type AddedOutput struct {
	Name  string `json:"name"`
	Cid   string `json:"cid,omitempty"`
	Bytes uint64 `json:"bytes,omitempty"`
	Size  uint64 `json:"size,omitempty"`
}

The /add endpoint no longer reports errors as part of an AddedOutput object, but instead it uses trailer headers (same as go-ipfs). They are handled in the client.

Go APIs

The AddedOutput object has changed, thus the api/rest/client from older versions will not work with this one.

Other

No other things.


v0.6.0 - 2018-10-03

Summary

Version 0.6.0 is a new minor release of IPFS Cluster.

We have increased the minor release number to signal changes to the Go APIs after upgrading to the new cid package, but, other than that, this release does not include any major changes.

It brings a number of small fixes and features of which we can highlight two useful ones:

  • the first is the support for multiple cluster daemon versions in the same cluster, as long as they share the same major/minor release. That means all releases in the 0.6 series (0.6.0, 0.6.1 and so on...) will be able to talk to each other, allowing partial cluster upgrades.
  • the second is the inclusion of a PeerName key in the status (PinInfo) objects. ipfs-cluster-ctl status will now show peer names instead of peer IDs, making it easy to identify the status for each peer.

Many thanks to all the contributors to this release: @lanzafame, @meiqimichelle, @kishansagathiya, @cannium, @jglukasik and @mike-ngu.

List of changes

Features
Bugfixes

Upgrading notices

Configuration changes

There are no changes to the configuration file on this release.

REST API

There are no changes to the REST API.

Go APIs

We have upgraded to the new version of the cid package. This means all *cid.Cid arguments are now cid.Cid.

Other

We are now using go-1.11 to build and test cluster. We recommend using this version as well when building from source.


v0.5.0 - 2018-08-23

Summary

IPFS Cluster version 0.5.0 is a minor release which includes a major feature: adding content to IPFS directly through Cluster.

This functionality is provided by ipfs-cluster-ctl add and by the API endpoint /add. The upload format (multipart) is similar to the IPFS /add endpoint, as well as the options (chunker, layout...). Cluster add generates the same DAG as ipfs add would, but it sends the added blocks directly to their allocations, pinning them on completion. The pin happens very quickly, as content is already locally available in the allocated peers.
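For example (the file name is illustrative):

    # Builds the DAG, sends blocks to the allocated peers and pins on completion
    ipfs-cluster-ctl add myfile.txt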

The release also includes most of the needed code for the Sharding feature, but it is not yet usable/enabled, pending features from go-ipfs.

The 0.5.0 release additionally includes a new experimental PinTracker implementation: the stateless pin tracker. The stateless pin tracker relies on the IPFS pinset and the cluster state to keep track of pins, rather than keeping an in-memory copy of the cluster pinset, thus reducing the memory usage when having huge pinsets. It can be enabled with ipfs-cluster-service daemon --pintracker stateless.

The last major feature is the use of a DHT as the routing layer for cluster peers. This means that peers should be able to discover each other as long as they are connected to one cluster peer. This simplifies the setup requirements for starting a cluster and helps avoid situations which make the cluster unhealthy.

This release requires a state upgrade migration. It can be performed with ipfs-cluster-service state upgrade or simply launching the daemon with ipfs-cluster-service daemon --upgrade.

List of changes

Features
Bugfixes

Upgrading notices

Configuration files

IMPORTANT: 0s is the new default for the read_timeout and write_timeout values in the restapi configuration section, as well as proxy_read_timeout and proxy_write_timeout options in the ipfshttp section. Adding files to cluster (via the REST api or the proxy) is likely to timeout otherwise.

The peerstore file (in the configuration folder), no longer requires listing the multiaddresses for all cluster peers when initializing the cluster with a fixed peerset. It only requires the multiaddresses for one other cluster peer. The rest will be inferred using the DHT. The peerstore file is updated only on clean shutdown, and will store all known multiaddresses, even if not pertaining to cluster peers.

The new stateless PinTracker implementation uses a new configuration subsection in the pin_tracker key. This is only generated with ipfs-cluster-service init. When not present, a default configuration will be used (and a warning printed).

The state_sync_interval default has been increased to 10 minutes, as frequent syncing is not needed with the improvements in the PinTracker. Users are welcome to update this setting.

REST API

The /add endpoint has been added. The replication_factor_min and replication_factor_max options (in POST allocations/<cid>) have been deprecated and substituted by replication-min and replication-max, although backwards compatibility is kept.

Keep-Alive has been disabled for the HTTP servers, as a bug in Go's HTTP client implementation may result in adding corrupted content (and getting corrupted DAGs). However, while the libp2p API endpoint also suffers from this, it will only close libp2p streams. Thus the performance impact on the libp2p-http endpoint should be minimal.

Go APIs

The Config.PeerAddr key in the rest/client module is deprecated. APIAddr should be used for both HTTP and LibP2P API endpoints. The type of address is automatically detected.

The IPFSConnector Pin call now receives an integer instead of a Recursive flag. It indicates the maximum depth to which something should be pinned. The only supported value is -1 (meaning recursive). BlockGet and BlockPut calls have been added to the IPFSConnector component.

Other

As noted above, upgrade to state format version 5 is needed before starting the cluster service.


v0.4.0 - 2018-05-30

Summary

The IPFS Cluster version 0.4.0 release includes breaking changes and the considerable number of new features causing them. The documentation (particularly that affecting the configuration and startup of peers) has been updated accordingly in https://ipfscluster.io . Be sure to also read it if you are upgrading.

There are four main developments in this release:

  • Refactorings around the consensus component, removing dependencies to the main component and allowing separate initialization: this has prompted us to re-approach how we handle the peerset, the peer addresses and the peers' startup when using bootstrap. We have gained finer control of Raft, which has allowed us to provide a clearer configuration and a better start-up procedure, especially when bootstrapping. The configuration file no longer mutates while cluster is running.
  • Improvements to the pintracker: our pin tracker is now able to cancel ongoing pins when receiving an unpin request for the same CID, and vice-versa. It will also optimize multiple pin requests (by only queuing and triggering them once) and can now report whether an item is pinning (a request to ipfs is ongoing) vs. pin-queued (waiting for a worker to perform the request to ipfs).
  • Broadcasting of monitoring metrics using PubSub: we have added a new monitor implementation that uses PubSub (rather than RPC broadcasting). With the upcoming improvements to PubSub this means that we can do efficient broadcasting of metrics while at the same time not requiring peers to have RPC permissions, which is preparing the ground for collaborative clusters.
  • We have launched the IPFS Cluster website: https://ipfscluster.io . We moved most of the documentation over there, expanded it and updated it.

List of changes

Features
Bugfixes:

Upgrading notices

Configuration file

This release introduces breaking changes to the configuration file. An error will be displayed if ipfs-cluster-service is started with an old configuration file. We recommend re-initing the configuration file altogether.

  • The peers and bootstrap keys have been removed from the main section of the configuration.
  • You might need to provide peer multiaddresses in a text file named peerstore, in your ~/.ipfs-cluster folder (one per line). This tells your peers how to contact other peers.
  • A disable_repinning option has been added to the main configuration section. Defaults to false.
  • An init_peerset key has been added to the raft configuration section. It should be used to define the starting set of peers when a cluster starts for the first time and is not bootstrapping to an existing running peer (otherwise it is ignored). The value is an array of peer IDs.
  • A backups_rotate option has been added to the raft section and specifies how many copies of the Raft state to keep as backups when the state is cleaned up.
  • An ipfs_request_timeout option has been introduced to the ipfshttp configuration section, and controls the timeout of general requests to the ipfs daemon. Defaults to 5 minutes.
  • A pin_timeout option has been introduced to the ipfshttp section; it controls the timeout for Pin requests to ipfs. Defaults to 24 hours.
  • An unpin_timeout option has been introduced to the ipfshttp section; it controls the timeout for Unpin requests to ipfs. Defaults to 3h.
  • Both pinning_timeout and unpinning_timeout options have been removed from the maptracker section.
  • A monitor/pubsubmon section configures the new PubSub monitoring component. The section is identical to the existing monbasic, its only option being check_interval (defaults to 15 seconds).

The ipfs-cluster-data folder has been renamed to raft. Upon ipfs-cluster-service daemon start, the renaming will happen automatically if it exists. Otherwise it will be created with the new name.

REST API

There are no changes to REST APIs in this release.

Go APIs

Several component APIs have changed: Consensus, PeerMonitor and IPFSConnector have added new methods or changed methods signatures.

Other

Calling ipfs-cluster-service without subcommands no longer runs the peer. It is necessary to call ipfs-cluster-service daemon. Several daemon-specific flags have been made subcommand flags: --bootstrap and --alloc.

The --bootstrap flag can now take a list of comma-separated multiaddresses. Using --bootstrap will automatically run state clean.
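For example (addresses and peer IDs are placeholders):

    ipfs-cluster-service daemon --bootstrap /ip4/192.0.2.1/tcp/9096/ipfs/<peerID1>,/ip4/192.0.2.2/tcp/9096/ipfs/<peerID2>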

The ipfs-cluster-ctl no longer has a peers add subcommand. Peers should not be added this way, but rather bootstrapped to an existing running peer.


v0.3.5 - 2018-03-29

This release comes packed with new features. The biggest ones are the support for parallel pinning (using refs -r rather than pin add to pin things in IPFS), and the exposure of the HTTP endpoints through libp2p. This allows users to securely interact with the HTTP API without having to set up SSL certificates.

There are no breaking API changes and all configurations should be backwards compatible. The api/rest/client provides a new IPFS() method.

We recommend updating the service.json configurations to include all the new configuration options:

  • The pin_method option has been added to the ipfshttp section. It supports refs and pin (default) values. Use refs for parallel pinning, but only if you don't run automatic GC on your ipfs nodes.
  • The concurrent_pins option has been added to the maptracker section. Only useful with refs option in pin_method.
  • The listen_multiaddress option in the restapi section should be renamed to http_listen_multiaddress.

This release will require a state upgrade. Run ipfs-cluster-service state upgrade in all your peers, or start cluster with ipfs-cluster-service daemon --upgrade.


v0.3.4 - 2018-02-20

This release fixes the pre-built binaries.


v0.3.3 - 2018-02-12

This release includes additional ipfs-cluster-service state subcommands and the connectivity graph feature.

APIs have not changed in this release. The /health/graph endpoint has been added.


v0.3.2 - 2018-01-25

This release includes a number of bugfixes regarding the upgrade and import of state, along with two important features.

This release is compatible with previous versions of ipfs-cluster on the API level, with the exception of the ipfs-cluster-service version command, which returns x.x.x-shortcommit rather than ipfs-cluster-service version 0.3.1. The former output is still available as ipfs-cluster-service --version.

The replication_factor option is deprecated, but still supported and will serve as a shortcut to set both replication_factor_min and replication_factor_max to the same value. This affects the configuration file, the REST API and the ipfs-cluster-ctl pin add command.


v0.3.1 - 2017-12-11

This release includes changes around the consensus state management, so that upgrades can be performed when the internal format changes. It also comes with several features and changes to support a live deployment and integration with IPFS pin-bot, including a REST API client for Go.

This release should stay backwards compatible with the previous one. Nevertheless, some REST API endpoints take the local flag, and matching new Go public functions have been added (RecoverAllLocal, SyncAllLocal...).


v0.3.0 - 2017-11-15

This release introduces Raft 1.0.0 and incorporates deep changes to the management of the cluster peerset.

Bugfixes:

This release introduces some changes affecting the configuration file and some breaking changes affecting the Go and REST APIs:

  • The consensus.raft section of the configuration has new options but should be backwards compatible.
  • The Consensus component interface has changed, LogAddPeer and LogRmPeer have been replaced by AddPeer and RmPeer. It additionally provides Clean and Peers methods. The consensus/raft implementation has been updated accordingly.
  • In the api.ID object (used in the REST API among others), the ClusterPeers key is now a list of peer IDs, and not a list of multiaddresses as before. The object includes a new ClusterPeersAddresses key which includes the multiaddresses.
  • Note that --bootstrap and --leave flags when calling ipfs-cluster-service will be stored permanently in the configuration (see ipfs-cluster/ipfs-cluster#235).

v0.2.1 - 2017-10-26

This is a maintenance release with some important bugfixes.

The fix for 32-bit architectures has required a change in the IPFSConnector interface (FreeSpace() and Reposize() return uint64 now). The current implementation by the ipfshttp module has changed accordingly.


v0.2.0 - 2017-10-23

This release introduces some breaking changes affecting configuration files and go integrations:

  • Config: The old configuration format is no longer valid and cluster will fail to start from it. Configuration file needs to be re-initialized with ipfs-cluster-service init.
  • Go: The restapi component has been renamed to rest and some of its public methods have been renamed.
  • Go: Initializers (New<Component>(...)) for most components have changed to accept a Config object. Some initializers have been removed.

Note, when adding changelog entries, write links to issues as @<issuenumber> and then replace them with links with the following command:

sed -i -r 's/@([0-9]+)/[ipfs\/ipfs-cluster#\1](https:\/\/github.com\/ipfs\/ipfs-cluster\/issues\/\1)/g' CHANGELOG.md