Android version release, minor fixes

No more TypeScript or React Native. The backend is written entirely in Rust, and the frontend is native.

The number of dependencies was greatly reduced; no npm/yarn/nodejs/cocoapods, etc. All dependencies are handled by:

Rust libraries were moved back into the repository. Crypto functions are imported from Substrate. All logic and most of the storage is written in Rust. An important hack here is that the `rust/signer` crate has two versions of `Cargo.toml`, one for Android and one for iOS architectures, as target library features could not be adjusted by normal means.

The frontend for both iOS and Android was rewritten in native frameworks. Thus, standard out-of-the-box build scripts can be used for building once the Rust libraries are built and linked.
Secrets are stored in the device's encrypted storage, and some effort is made to prevent them from leaking into system memory. Thus, everything is as safe as the phone itself: the same credentials used for unlocking the phone are used to unlock seeds. The user is responsible for keeping those credentials adequate.

Transaction content is shown before signing; hash signing is not allowed, but signing messages is possible.

The Vault now logs all operations it performs. It is important to remember that this is not a log of account operations, but a log of device history. This history can be cleared if needed, but cannot be modified by other means. Detected presence of a network connection is also logged.

A much-requested feature that makes Vault automatically increment numbered seeds on creation.

All network data updates can now be performed by scanning QR codes. Whenever some update is needed, most probably you should just scan some QR video. Don't worry about skipped frames: it's a fountain code, so you only need enough frames in total.

All updates can be signed, and the signing key is trusted on first use, so a Vault device should be linked to a single source of authority on correct metadata.

Keys can be used in only one network. Need to re-use a key in another network? Just create a key with the same derivation path in that network to allow re-use, and it will work.
Vault is an app for an air-gapped device: it turns an offline device — usually a smartphone — into a secure hardware wallet. Vault offers you a way to securely generate, store, manage and use your blockchain credentials.

Vault is optimized for the highest security requirements. If you already manage many accounts on multiple networks, Vault is great for you. If you have little experience with blockchain networks but still want good security affordances, you might find the learning curve steep. We strive to make Vault as intuitive as possible; get in touch via signer@parity.io or GitHub Issues if you can help us get there!

Communication happens through scanning and generating QR codes. Input QRs scanned with Vault interact with keys stored in Vault to generate response QRs on behalf of those keys. Usually, an input QR is a blockchain transaction, and a response QR is a signature for that transaction. Tried and true cryptographic algorithms power these QR codes, along with some smart engineering that makes your dedicated device safe to use.

Vault is a safe way to use your keys. However, that alone won't be enough to keep your keys secure. Devices break and get lost. This is why we always recommend backing up your seed phrases and derivation paths on paper. We are such big fans of paper backups that we even support a special tool to power your paper backup game by splitting your backups into shards, called Banana Split.

The Vault does not interact with a network. The app itself does not have a way to check if an app or an account you're interacting with is malicious. If you use Vault with the PolkadotJS Browser Extension, PolkadotJS Apps, or the Signer Component Browser Extension, they will rely on a community-driven curated list of potentially less-than-honest operators (https://polkadot.js.org/phishing/#) to prevent you from interacting with certain sites and addresses. However, there are no limitations on the use of Vault with other tools.
Yes. In Vault, you should add a key for an address on the Westend network and request test tokens for that address; see the step-by-step guide on the Polkadot Network Wiki.

You can use test tokens in the same way you would use value-bearing tokens.

For example, with PolkadotJS Apps you can create a transaction on behalf of your account, generate a signature with Vault, and submit it to the network. All of this without keys ever leaving your offline device.

Off-the-shelf, Polkadot Vault supports the Polkadot, Kusama, and Westend networks. But it's not limited to these networks: more experienced users can generate metadata for any network to expand the capability of Polkadot Vault.
Parity verifies and publishes recent metadata versions on the Metadata Update Portal. With off-the-shelf Vault you can scan one of the multipart QR "movies" the same way you scan a transaction QR: in Vault, open the scanner, scan the QR for the respective network, and accept the new metadata.

Currently, the Metadata Update Portal follows Polkadot, Kusama, and Westend network metadata updates. Parity is open to collaboration with participants of other networks and is currently exploring safe and more decentralized ways of publishing verified metadata.

If you want to update networks that you've added manually, please follow the Add Metadata steps in the Add New Network guide.

It's a safety feature. Substrate-based blockchain networks can be updated and otherwise changed; without a recent metadata version for a network, Vault won't be able to parse a transaction correctly, and you won't be able to read it and verify what you sign. Given that Vault is an app for an air-gapped device, you have to update the network version using the camera.

Parity verifies and publishes network specs on the Metadata Update Portal. To add one of the listed networks, click "Chain Specs" in the Metadata Update Portal and scan the network specs QR the same way you scan a transaction QR: in Vault, open the scanner, scan the QR, and accept the new network spec. Then scan the multipart QR "movie" containing recent metadata for this network.

Yes. Follow the Add New Network step-by-step guide.

Currently, the process requires you to have rust, subkey, and the parity-signer repository on your machine.
Can I import keys from `polkadot{.js}` apps or extension to Polkadot Vault? Yes. Keys are compatible between `polkadot{.js}` and Polkadot Vault, except for the keys generated with Ledger (`BIP39`). To import seed keys into Polkadot Vault, you need to know the seed phrase; keys created in `polkadot{.js}` are seed keys.

In Polkadot Vault, go to Keys, then press the "Plus" icon in the top right of the screen, select "Recover seed", enter a display name to identify your seed, press "Next", and enter the seed phrase. Done, you've got your seed key imported!

If you are importing a derived key, select the seed from which your key is derived, select the account's network, press the "Plus" icon next to "Derived keys", and enter your derivation path.
A seed key is a single key pair generated from a seed phrase. You can “grow” as many derived keys as you like from a single seed by adding derivation paths to your seed phrase.

Learn more about types of derivation paths on substrate.io.

A derivation path is sensitive information, but knowing the derivation path alone is not enough to recover a key. Derived keys cannot be backed up without both of the ingredients: the seed phrase (which can be shared between multiple keys) and the derivation path (unique for each of the keys “grown” from that seed).

The main reason to use derived keys is how easy it is to back up (and restore from a backup) a derivation path compared to a seed phrase.

An identicon is a visual hash of a public key — a unique picture generated from your public key. The same public key should have the same identicon regardless of the application. It is a good tool for quickly distinguishing between keys. However, when interacting with keys, e.g. verifying the recipient of a transaction, do not rely on identicons alone; it is better to check the full public address.

Due to security considerations, you cannot rename a seed. Please back up the seed and derived keys, remove the seed, and add it again with a new name instead.
Polkadot Vault is built to be used offline. The mobile device used to run the app will hold important information that needs to be kept securely stored. It is therefore advised to:

The app is available in beta for Android and iOS:

Please double-check carefully the origin of the app, and make sure that the company distributing it is Parity Technologies. The usual security advice applies to this air-gapped wallet:

Once Polkadot Vault is installed, your device should never go online. Doing so would put your private keys at risk. To update, you will need to:
- `v4.0`: choosing an identity > click the user icon at the top right > “Show Recovery Phrase”
- `v2.2`: tapping an account > 3 dots menu at the top right > “Backup Recovery Phrase”
- `v2.0`: tapping an account > tap on the account address > “Backup Recovery Phrase”

None, it's as simple as that. The Polkadot Vault Android and iOS apps do not send any sort of data to Parity Technologies or any partner and work completely offline once installed.
First and foremost, make sure you have the latest Rust installed on your system. Nothing will work without Rust.

If you get errors like `cargo: feature X is required`, it most likely means you have an old version of Rust. Update it by running `rustup update stable`.
1. You probably already have Xcode installed if you are reading this. If not, go get it.
2. Compile the core Rust library first:

   ```
   cd scripts && ./build.sh ios
   ```

3. Open the `NativeSigner.xcodeproj` project from the `ios` folder in your Xcode and click Run (`Cmd+R`).
4. The first time you start the app, you will need to put your device into Airplane Mode. In the iOS simulator, you can do this by turning off WiFi on your Mac (yes, this is an official Apple-recommended way).

However, we strongly recommend that you use a real device for development, as some important parts (e.g. the camera) may not work in the simulator.
1. Download Android Studio.
2. Open the project from the `android` directory.
3. Install the NDK. Go to `File -> Project Structure -> SDK Location`. Next to the "Android NDK location" section, click the "Download Android NDK" button.

   We highly recommend you to update all existing plugins and SDKs for Kotlin, Gradle, etc., even if you just downloaded a fresh Android Studio. It's always a good idea to restart Android Studio after that. This can save you many hours on Stack Overflow trying to fix random errors like "NDK not found".

4. Connect your device or create a virtual one. Open `Tools -> Device Manager` and create a new phone simulator with the latest Android.
5. (macOS) Specify the path to `python` in `local.properties`:

   ```
   rust.pythonCommand=python3
   ```

6. Run the project (`Ctrl+R`). It should build the Rust core library automatically.
The Vault repository contains 3 tools that are part of the Vault ecosystem:

- `generate_message`: network data management tool
- `qr_reader_pc`: QR scanner app for PC

Greater Vault ecosystem:

One common reason for this is an inconsistency in the `uniffi` version: make sure that the installed version matches the one stated in `Cargo.toml`.

This is a known issue and does not seem to be solvable at the moment. Please use 2 machines, as we do.
This document provides an interpretation of the UOS format used by Polkadot Vault. The upstream version of the published format has diverged significantly from the actual implementation, so this document represents the current state of the UOS format that is compatible with Polkadot Vault. It only applies to networks compatible with Polkadot Vault, i.e. Substrate-based networks. The document also describes special payloads used to maintain a Polkadot Vault instance.

Therefore, this document effectively describes the input and output format for QR codes used by Polkadot Vault.

The Vault receives information over an air-gap as QR codes. These codes are read as `u8` vectors and must always be parsed by the Vault before use.

QR codes can contain information that a user wants to sign with one of the Vault keys, or they may contain update information to ensure smooth operation of the Vault without the need for a reset or connection to the network.
For `V13` and below, the QR code envelope has the following structure:

QR code prefix | content | ending spacer | padding |
---|---|---|---|
4 bits | byte-aligned content | 4 bits | remainder |

The QR code prefix always starts with the `0x4` symbol, indicating "raw" encoding.

The subsequent 2 bytes encode the content length. Using this number, the QR code parser can instantly extract the content and disregard the rest of the QR code.

The actual content is shifted by a half-byte; otherwise it is a normal byte sequence.
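The layout described above can be sketched as follows. This is an illustration of the bit layout, not the Vault's actual code; the function name is made up:

```rust
/// Extract the byte-aligned content from a legacy "raw"-encoded QR payload:
/// a 4-bit prefix `0x4`, a 16-bit content length, then the content shifted
/// by a half-byte.
fn unpack_legacy_envelope(raw: &[u8]) -> Result<Vec<u8>, &'static str> {
    if raw.len() < 3 {
        return Err("payload too short");
    }
    if raw[0] >> 4 != 0x4 {
        return Err("not 'raw' QR encoding");
    }
    // The 16-bit length straddles the half-byte boundary.
    let len = (((raw[0] & 0x0F) as usize) << 12)
        | ((raw[1] as usize) << 4)
        | ((raw[2] >> 4) as usize);
    if raw.len() < len + 3 {
        return Err("declared length exceeds QR data");
    }
    // Shift every content byte back by half a byte.
    let content: Vec<u8> = (0..len)
        .map(|i| (raw[2 + i] << 4) | (raw[3 + i] >> 4))
        .collect();
    Ok(content)
}
```

For example, the bytes `[0x40, 0x00, 0x2A, 0xBC, 0xD0]` declare a content length of 2 and unpack to `[0xAB, 0xCD]`.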
The information transferred through the QR channel into Vault is always enveloped in multiframe packages (although the minimal number of multiframe packages is 1). There are two standards for the multiframe: `RaptorQ` erasure coding and legacy non-erasure multiframe. The type of envelope is determined by the first bit of the QR code data: `0` indicates legacy multiframe, `1` indicates `RaptorQ`.
RaptorQ (`RFC6330`) is a variable rate (fountain) erasure code protocol with a reference implementation in Rust.

Wrapping content in the `RaptorQ` protocol allows arbitrary amounts of data to be transferred reliably within a reasonable time. It is recommended to wrap all payloads into this type of envelope.
Each QR code in a `RaptorQ`-encoded multipart payload contains the following parts:

bytes [0..4] | bytes [4..] |
---|---|
`0x80000000 || payload_size` | `RaptorQ` serialized packet |

- `payload_size` MUST contain the payload size in bytes, represented as a big-endian 32-bit unsigned integer.
- `payload_size` MUST NOT exceed `7FFFFFFF`.
- `payload_size` MUST be identical in all codes encoding the payload.
- `payload_size` and the `RaptorQ` serialized packet MUST be stored by the Cold Vault, in no particular order, until their amount is sufficient to decode the payload.

Once a sufficient number of frames is collected, they can be processed into a single payload and treated as the data vector ("QR code content").
In the real implementation, the Polkadot Vault ecosystem generalized all payloads as multipart messages.

bytes position | [0] | [1..3] | [3..5] | [5..] |
---|---|---|---|---|
content | `00` | `frame_count` | `frame_index` | `data` |

- `frame_index` MUST be the number of the current frame, starting from `0000`, represented as a big-endian 16-bit unsigned integer.
- `frame_count` MUST be the total number of frames, represented as a big-endian 16-bit unsigned integer.
- `part_data` MUST be stored by the Cold Vault, ordered by frame number, until all frames are scanned.

Once all frames are combined, the `part_data` pieces must be concatenated into a single binary blob and treated as the data vector ("QR code content").
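The legacy frame header can be unpacked along the same lines (again an illustration of the layout, with made-up names):

```rust
/// One legacy (non-erasure) multiframe QR, per the table above.
struct LegacyFrame<'a> {
    frame_count: u16,
    frame_index: u16,
    data: &'a [u8],
}

fn parse_legacy_frame(frame: &[u8]) -> Result<LegacyFrame<'_>, &'static str> {
    if frame.len() < 5 {
        return Err("frame too short");
    }
    if frame[0] != 0x00 {
        return Err("not a legacy multiframe");
    }
    Ok(LegacyFrame {
        // Both counters are big-endian 16-bit unsigned integers.
        frame_count: u16::from_be_bytes([frame[1], frame[2]]),
        frame_index: u16::from_be_bytes([frame[3], frame[4]]),
        data: &frame[5..],
    })
}
```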
Every QR code content starts with a prelude `[0x53, 0x<encryption code>, 0x<payload code>]`.

`0x53` is always expected and indicates Substrate-related content.

The `<encryption code>` for signables indicates the encryption algorithm that will be used to generate the signature:
Encryption code | Algorithm |
---|---|
`0x00` | Ed25519 |
`0x01` | Sr25519 |
`0x02` | Ecdsa |
The `<encryption code>` for updates indicates the encryption algorithm that was used to sign the update:
Encryption code | Algorithm |
---|---|
`0x00` | Ed25519 |
`0x01` | Sr25519 |
`0x02` | Ecdsa |
`0xff` | unsigned |
Derivations import and testing are always unsigned, with `<encryption code>` always `0xff`.

Vault supports the following `<payload code>` variants:
Payload code | Meaning |
---|---|
`0x00` | legacy mortal transaction |
`0x02` | transaction (both mortal and immortal) |
`0x03` | message |
`0x04` | bulk transactions |
`0x80` | load metadata update |
`0x81` | load types update |
`0xc1` | add specs update |
`0xde` | derivations import |
Note: the old UOS specified `0x00` as a mortal transaction and `0x02` as an immortal one, but currently both mortal and immortal transactions from polkadot-js are `0x02`.
Further processing is done based on the payload type.
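Prelude handling can be sketched as a straightforward match on the first three bytes (an illustration under the tables above; the names are made up):

```rust
#[derive(Debug, PartialEq)]
enum Encryption {
    Ed25519,
    Sr25519,
    Ecdsa,
    Unsigned,
}

/// Interpret the three-byte prelude `[0x53, <encryption code>, <payload code>]`
/// and hand the payload code back for further dispatch.
fn parse_prelude(content: &[u8]) -> Result<(Encryption, u8), &'static str> {
    match content {
        [0x53, enc, payload, ..] => {
            let encryption = match *enc {
                0x00 => Encryption::Ed25519,
                0x01 => Encryption::Sr25519,
                0x02 => Encryption::Ecdsa,
                0xff => Encryption::Unsigned,
                _ => return Err("unknown encryption code"),
            };
            Ok((encryption, *payload))
        }
        _ => Err("not Substrate-related content"),
    }
}
```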
Transaction has the following structure:

prelude | public key | SCALE-encoded call data | SCALE-encoded extensions | network genesis hash |

The public key is the key that can sign the transaction. Its length depends on the `<encryption code>` declared in the transaction prelude:
Encryption | Public key length, bytes |
---|---|
Ed25519 | 32 |
Sr25519 | 32 |
Ecdsa | 33 |
Call data is the `Vec<u8>` representation of the transaction content. Call data must be parsed by the Vault prior to signature generation and becomes part of the signed blob. Within the transaction, the call data is SCALE-encoded, i.e. effectively prefixed with the compact of its length in bytes.

Extensions contain data additional to the call data, and are also part of the signed blob. Typical extensions are Era, Nonce, metadata version, etc. Extensions content and order, in principle, can vary between networks and metadata versions.

The network genesis hash determines the network in which the transaction is created. At the moment the genesis hash is fixed-length 32 bytes.

Thus, the transaction structure could also be represented as:

prelude | public key | compact of call data length | call data | SCALE-encoded extensions | network genesis hash |
Bold-marked transaction pieces are used in the blob for which the signature is produced. If the blob is short, 257 bytes or below, the signature is produced for it as is. For blobs longer than 257 bytes, a 32-byte hash (`blake2_256`) is signed instead. This is inherited from earlier Vault versions, and is currently compatible with polkadot-js.
Cut the QR data and get:

- the encryption (`u8` from prelude)
- the public key (`u8` immediately after the prelude; its length is determined by the encryption)
- the network genesis hash (32 `u8` at the end)
- the call data and extensions (everything in between)

If the data length is insufficient, Vault produces an error and suggests loading an undamaged transaction.
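The cutting step above can be sketched as slice arithmetic (an illustration, not the Vault's parser; `cut_transaction` is a hypothetical helper):

```rust
/// Cut transaction QR content into its parts. `pubkey_len` is 32 for
/// Ed25519/Sr25519 and 33 for Ecdsa, per the table above.
fn cut_transaction(
    content: &[u8],
    pubkey_len: usize,
) -> Result<(&[u8], &[u8], &[u8]), &'static str> {
    const GENESIS_HASH_LEN: usize = 32;
    // Skip the three-byte prelude.
    let body = content.get(3..).ok_or("no prelude")?;
    if body.len() < pubkey_len + GENESIS_HASH_LEN {
        return Err("data length insufficient");
    }
    let (public_key, rest) = body.split_at(pubkey_len);
    // The genesis hash is the fixed-length tail; call data and extensions
    // are whatever sits in between.
    let (call_and_extensions, genesis_hash) = rest.split_at(rest.len() - GENESIS_HASH_LEN);
    Ok((public_key, call_and_extensions, genesis_hash))
}
```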
Search the Vault database for the network specs (from the network genesis hash and encryption).

If the network specs are not found, Vault shows:

Search the Vault database for the address key (from the transaction author public key and encryption). Vault will try to interpret and display the transaction in any case. Signing will be possible only if the parsing is successful and the address key is known to Vault and is extended to the network in question.

If the address key is not found, signing is not possible. The output shows:

If the address key is found but is not extended to the network used, signing is not possible. The output shows:

If the address key is found and is extended to the network used, Vault will proceed to try and interpret the call and extensions. Detailed author information will be shown regardless of the parsing outcome. Signing will be allowed only if the parsing is successful.
Separate the call and extensions. The call is prefixed by its length compact; the compact is cut off, the part whose length was indicated in the compact goes into call data, and the remainder goes into extensions data.

If no compact is found or the length is insufficient, Vault produces an error that the call and extensions could not be separated.
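The compact itself follows the standard SCALE compact-integer encoding (two low bits of the first byte select one of four modes). A minimal decoder, shown for illustration rather than as the Vault's parser:

```rust
/// Minimal SCALE compact-integer decoder: returns the decoded value and the
/// number of bytes consumed, enough to split call data from extensions.
fn decode_compact(data: &[u8]) -> Result<(u128, usize), &'static str> {
    let first = *data.first().ok_or("empty input")?;
    match first & 0b11 {
        // Single-byte mode: the value is the upper six bits.
        0b00 => Ok((u128::from(first >> 2), 1)),
        0b01 => {
            if data.len() < 2 {
                return Err("too short");
            }
            Ok((u128::from(u16::from_le_bytes([data[0], data[1]]) >> 2), 2))
        }
        0b10 => {
            if data.len() < 4 {
                return Err("too short");
            }
            let v = u32::from_le_bytes([data[0], data[1], data[2], data[3]]) >> 2;
            Ok((u128::from(v), 4))
        }
        _ => {
            // Big-integer mode: the upper six bits give the byte count minus 4.
            let n = (first >> 2) as usize + 4;
            if n > 16 || data.len() < 1 + n {
                return Err("too short or too wide");
            }
            let mut v: u128 = 0;
            for (i, byte) in data[1..=n].iter().enumerate() {
                v |= u128::from(*byte) << (8 * i); // little-endian
            }
            Ok((v, 1 + n))
        }
    }
}
```

For example, a leading byte `0xa4` decodes to a call data length of 41 with one byte consumed.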
Get the metadata set from the Vault database, by the network name from the network specs. Metadata is used to interpret the extensions and then the call itself.

If there are no metadata entries for the network at all, Vault produces an error and asks to load the metadata.

The `RuntimeMetadata` versions supported by Vault are `V12`, `V13`, and `V14`. The crucial feature of `V14` is that the metadata contains a description of the types used in the call and extensions production. `V12` and `V13` are legacy versions and provide only text identifiers for the types; in order to use them, supplemental types information is needed.
Process the extensions.

Vault already knows in which network the transaction was made, but does not yet know the metadata version. The metadata version must be one of the signable extensions. At the same time, the extensions and their order are recorded in the network metadata. Thus, all metadata entries from the set are checked, from newest to oldest, in an attempt to find metadata that both decodes the extensions and has a version that matches the metadata version decoded from the extensions.

If processing the extensions with a single metadata entry results in an error, the next metadata entry is tried. The errors are displayed to the user only if all attempts with existing metadata have failed.

Typically, the extensions are quite stable between metadata versions and between networks; however, they can be and sometimes are different.
In legacy metadata (`RuntimeMetadata` version being `V12` and `V13`) extensions have identifiers only, and in Vault the extensions for `V12` and `V13` are hardcoded as:

- `Era` era
- `Compact(u64)` nonce
- `Compact(u128)` tip
- `u32` metadata version
- `u32` tx version
- `H256` genesis hash
- `H256` block hash

If the extensions could not be decoded as the standard set or not all of the extensions blob is used, the Vault rejects this metadata version and adds an error into the error set.
Metadata `V14` has extensions with both identifiers and properly described types, and Vault decodes extensions as they are recorded in the metadata. For this, the `ExtrinsicMetadata` part of the metadata `RuntimeMetadataV14` is used. The vector `signed_extensions` in `ExtrinsicMetadata` is scanned twice, first for types in `ty` of the `SignedExtensionMetadata` and then for types in `additional_signed` of the `SignedExtensionMetadata`. The types, when resolved through the types database from the metadata, allow cutting correct-length blobs from the whole SCALE-encoded extensions blob and decoding them properly.
If any of these small decodings fails, the metadata version gets rejected by the Vault and an error is added to the error set. The same happens if, after all extensions are scanned, some part of the extensions blob remains unused.
There are some special extensions that must be treated separately. The `identifier` in `SignedExtensionMetadata` and the `ident` segment of the type `Path` are used to trigger type interpretation as specially treated extensions. Each `identifier` is encountered twice, once for the `ty` scan and once for the `additional_signed` scan. In some cases only one of those types has non-empty content, in some cases both. To distinguish the two, the type-associated path is used, which points to where the type is defined in Substrate code. The type-associated path has priority over the identifier.
Path triggers:

| Path | Type is interpreted as |
| :- | :- |
| `Era` | Era |
| `CheckNonce` | Nonce |
| `ChargeTransactionPayment` | tip, gets displayed as balance with decimals and unit corresponding to the network specs |
Identifier triggers are used if the path trigger was not activated:

| Identifier | Type, if not empty and if there is no path trigger, is interpreted as | Note |
| :- | :- | :- |
| `CheckSpecVersion` | metadata version | gets checked against the metadata version from the metadata |
| `CheckTxVersion` | tx version | |
| `CheckGenesis` | network genesis hash | must match the genesis hash that was cut from the tail of the transaction |
| `CheckMortality` | block hash | must match the genesis hash if the transaction is immortal; `Era` has the same identifier, but is distinguished by the path |
| `CheckNonce` | nonce | |
| `ChargeTransactionPayment` | tip, gets displayed as balance with decimals and unit corresponding to the network specs | |
If the extension is not a special case, it is displayed as normal parser output and does not participate in deciding whether the transaction could be signed.

After all extensions are processed, the decoding must yield the following extensions:

- `Era`
- `Nonce` <- this is not so currently, fix it
- `BlockHash`
- `GenesisHash` <- this is not so currently, fix it

If the extension set is different, this results in a Vault error for this particular metadata version; this error goes into the error set.
The extensions in the metadata are checked at the metadata loading step, long before any transactions are even produced. Metadata with incomplete extensions causes a warning at the `load_metadata` update generation step, and another one when an update with such metadata gets loaded into Vault. Nevertheless, loading such metadata into Vault is allowed, as there could be other uses for metadata besides signable transaction signing. Probably.

If the metadata version in the extensions does not match the metadata version of the metadata used, this results in a Vault error for this particular metadata version; this error goes into the error set.

If the extensions are completely decoded, with the correct set of special extensions, and the metadata version from the extensions matches the metadata version of the metadata used, the extensions are considered correctly parsed, and Vault can proceed to the call decoding.

If all metadata entries from the Vault database were tested and no suitable solution was found, Vault produces an error stating that all attempts to decode the extensions have failed. This could be caused by a variety of reasons (see above), but so far the most common one observed was users having metadata in Vault that is not up-to-date with the metadata on chain. Thus, the error must carry a recommendation to update the metadata first.
Process the call data.

After the metadata with the correct version is established, it is used to parse the call data itself. Each call begins with a `u8` pallet index; this is the decoding entry point.
For `V14` metadata the correct pallet is found in the set of available ones in the `pallets` field of `RuntimeMetadataV14`, by the `index` field in the corresponding `PalletMetadata`. The `calls` field of this `PalletMetadata`, if it is `Some(_)`, contains `PalletCallMetadata` that provides the available calls enum described in the `types` registry of `RuntimeMetadataV14`. For each type in the registry, including this calls enum, the encoded data size is determined, and the decoding is done according to the type.
For `V12` and `V13` metadata the correct pallet is also found by scanning the available pallets and searching for the correct pallet index. Then the call is found using the call index (the second `u8` of the call data). Each call has an associated set of argument names and argument types; however, the argument type is just a text identifier. The type definitions are not in the metadata, and transaction decoding requires supplemental types information. By default, the Vault contains types information that was constructed for Westend when Westend was still using `V13` metadata, and it has so far been reasonably sufficient for simple transaction parsing. If the Vault does not find the types information in the database and has to decode the transaction using `V12` or `V13` metadata, an error is produced, indicating that there are no types. Otherwise, for each encountered argument type the encoded data size is determined, and the decoding is done according to the argument type.
There are types requiring special display:
Calls in `V14` parsing are distinguished by `Call` in the `ident` segment of the type `Path`.

Calls in `V12` and `V13` metadata are distinguished by any element of the set of calls type identifiers in the string argument type.
At the moment, the numbers that should be displayed as balance in transactions with `V14` metadata are determined by the type name `type_name` of the corresponding `Field` being:

- `Balance`
- `T::Balance`
- `BalanceOf<T>`
- `ExtendedBalance`
- `BalanceOf<T, I>`
- `DepositBalance`
- `PalletBalanceOf<T>`

Similar identifiers are used in `V12` and `V13`; the checked value is the string argument type itself.
There could be other instances when a number should be displayed as balance. However, sometimes the balance is not the balance in the units of the network specs, for example in the `assets` pallet. See issue #1050 and the comments there for details.
If no errors were encountered while parsing and all call data was used in the process, the transaction is considered parsed and is displayed to the user, either ready for signing (if all other checks have passed) or as read-only.

If the user chooses to sign the transaction, the Vault produces a QR code with the signature, which should be read back into the hot side. As soon as the signature QR code is generated, the Vault considers the transaction signed.

All signed transactions are entered in the history log, and can be seen and decoded again from the history log. Transactions not signed by the user do not go into the history log.

If the key used for the transaction is passworded, the user has three attempts to enter the password correctly. Each incorrect password entry is reflected in the history.

In the time interval between Vault displaying the parsed transaction and the user approving it, the transaction details needed to generate the signature and the history log details are temporarily stored in the database. The temporary storage gets cleared each time before and after use. Vault extracts the stored transaction data only if the database checksum stored in the navigator state is the same as the current checksum of the database. If the password is entered incorrectly, the database is updated with a "wrong password" history entry, and the checksum in the state gets updated accordingly. Eventually, all transaction info can and will be moved into the state itself, and the temporary storage will not be used.
Alice makes a transfer to Bob in the Westend network.

Transaction:

```
530102d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27da40403008eaf04151687736326c9fea17e25fc5287613693c912909cb226aa4794f26a480700e8764817b501b8003223000005000000e143f23803ac50e8f6f8e62695d1ce9e4e1d68aa36c1cd2cfd15340213f3423e538a7d7a0ac17eb6dd004578cb8e238c384a10f57c999a3fa1200409cd9b3f33e143f23803ac50e8f6f8e62695d1ce9e4e1d68aa36c1cd2cfd15340213f3423e
```
Part | Meaning | Byte position |
---|---|---|
53 | Substrate-related content | 0 |
01 | Sr25519 encryption algorithm | 1 |
02 | Transaction | 2 |
d435..a27d 1 | Alice public key | 3..=34 |
a404..4817 2 | SCALE-encoded call data | 35..=76 |
a4 | Compact call data length, 41 | 35 |
0403..4817 3 | Call data | 36..=76 |
04 | Pallet index 4 in metadata, entry point for decoding | 36 |
b501..3f33 4 | Extensions | 77..=153 |
e143..423e 5 | Westend genesis hash | 154..=185 |
1: `d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d`
2: `a40403008eaf04151687736326c9fea17e25fc5287613693c912909cb226aa4794f26a480700e8764817`
3: `0403008eaf04151687736326c9fea17e25fc5287613693c912909cb226aa4794f26a480700e8764817`
4: `b501b8003223000005000000e143f23803ac50e8f6f8e62695d1ce9e4e1d68aa36c1cd2cfd15340213f3423e538a7d7a0ac17eb6dd004578cb8e238c384a10f57c999a3fa1200409cd9b3f33`
5: `e143f23803ac50e8f6f8e62695d1ce9e4e1d68aa36c1cd2cfd15340213f3423e`
Call part | Meaning |
---|---|
04 | Pallet index 4 (Balances ) in metadata, entry point for decoding |
03 | Method index 3 in pallet 4 (transfer_keep_alive ), search in metadata what the method contains. Here it is MultiAddress for transfer destination and Compact(u128) balance. |
00 | Enum variant in MultiAddress , AccountId |
8eaf..6a48 6 | Associated AccountId data, Bob public key |
0700e8764817 | Compact(u128) balance. Amount paid: 100000000000 or, with Westend decimals and unit, 100.000000000 mWND. |
6: `8eaf04151687736326c9fea17e25fc5287613693c912909cb226aa4794f26a48`
Extensions part | Meaning |
---|---|
b501 | Era: phase 27, period 64 |
b8 | Nonce: 46 |
00 | Tip: 0 pWND |
32230000 | Metadata version: 9010 |
05000000 | Tx version: 5 |
e143..423e 7 | Westend genesis hash |
538a..3f33 8 | Block hash |
7: `e143f23803ac50e8f6f8e62695d1ce9e4e1d68aa36c1cd2cfd15340213f3423e`
8: `538a7d7a0ac17eb6dd004578cb8e238c384a10f57c999a3fa1200409cd9b3f33`
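The era field in the table above can be cross-checked with a minimal sketch of the standard Substrate two-byte mortal-era decoding (an illustration of the encoding, not Vault code; the function name is made up):

```rust
/// Decode a two-byte mortal era: the low four bits of the little-endian u16
/// select the period (a power of two), the remaining bits hold the
/// quantized phase. Returns (phase, period).
fn decode_mortal_era(b0: u8, b1: u8) -> (u64, u64) {
    let encoded = u64::from(u16::from_le_bytes([b0, b1]));
    let period = 2u64 << (encoded % (1 << 4));
    // For small periods the quantize factor is 1, so the phase is just the
    // upper bits.
    let quantize_factor = (period >> 12).max(1);
    let phase = (encoded >> 4) * quantize_factor;
    (phase, period)
}
```

For the bytes `b5 01` above this yields phase 27 and period 64, matching the table.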
Message has the following structure:

| prelude | public key | [u8] slice | network genesis hash |
|---|---|---|---|

The `[u8]` slice is represented as `String` if all bytes are valid UTF-8. If not all bytes are valid UTF-8, the Vault produces an error.
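The UTF-8 rule above maps directly onto Rust's standard library behavior; a minimal sketch (hypothetical helper, not the Vault's code):

```rust
// A message body is shown as a String only if every byte is valid UTF-8;
// otherwise an error is produced. Hypothetical helper, not the Vault's code.
fn render_message(bytes: Vec<u8>) -> Result<String, &'static str> {
    String::from_utf8(bytes).map_err(|_| "message is not valid UTF-8")
}

fn main() {
    assert_eq!(render_message(b"hello".to_vec()).unwrap(), "hello");
    assert!(render_message(vec![0xff, 0xfe]).is_err()); // invalid UTF-8
}
```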
It is critical that message payloads are always clearly distinguishable from transaction payloads, i.e. it must never be possible to trick the user into signing a transaction posing as a message.

The current proposal is to enable message signing only with the Sr25519 encryption algorithm, with a designated signing context different from the signing context used for transaction signing.
A bulk transaction is a SCALE-encoded `TransactionBulk` structure that consists of concatenated `Vec<u8>` transactions.

Bulking is a way to sign multiple transactions at once and reduce the number of QR codes to scan.

Bulk transactions are processed in exactly the same way as single transactions.
An update has the following general structure:

| prelude | verifier public key (if signed) | update payload | signature (if signed) | reserved tail |
|---|---|---|---|---|

Note that the `verifier public key` and `signature` parts appear only in signed updates. Preludes `[0x53, 0xff, 0x<payload code>]` are followed directly by the update payload.
Every time the user receives an unsigned update, the Vault displays a warning that the update is not verified. Generally, the use of unsigned updates is discouraged.

For update signing it is recommended to use a dedicated key not used for transactions. This way, if the signed data was not really update data but something else posing as update data, the produced signature could not do any damage.
| Encryption | Public key length, bytes | Signature length, bytes |
|---|---|---|
| Ed25519 | 32 | 64 |
| Sr25519 | 32 | 64 |
| Ecdsa | 33 | 65 |
| no encryption | 0 | 0 |
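The table above reduces to a lookup on the encryption byte of the prelude. A hypothetical helper (the code assignment 0x00 Ed25519, 0x01 Sr25519, 0x02 Ecdsa, 0xff unsigned follows the prelude codes used elsewhere in this document):

```rust
// Key and signature lengths per encryption code, as in the table above.
// Hypothetical helper, not the Vault's code.
fn key_and_signature_lengths(encryption_code: u8) -> Option<(usize, usize)> {
    match encryption_code {
        0x00 | 0x01 => Some((32, 64)), // Ed25519, Sr25519
        0x02 => Some((33, 65)),        // Ecdsa
        0xff => Some((0, 0)),          // no encryption (unsigned update)
        _ => None,
    }
}

fn main() {
    assert_eq!(key_and_signature_lengths(0x02), Some((33, 65)));
    assert_eq!(key_and_signature_lengths(0xff), Some((0, 0)));
    assert_eq!(key_and_signature_lengths(0x42), None);
}
```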
The `reserved tail` is currently not used and is expected to be empty. It could be used later if multisignatures are introduced for the updates. Expecting a `reserved tail` in update processing is done to keep code continuity in case multisignatures are ever introduced.

Because of the `reserved tail`, the `update payload` length always has to be declared exactly, so that the `update payload` part can be cut correctly from the update.
A detailed description of the update payloads, and of the form in which they are used in the update itself and for generating the update signature, can be found in the Rust module `definitions::qr_transfers`.
`add_specs` update payload, payload code `c1`
Introduces a new network to the Vault, i.e. adds network specs to the Vault database.

The update payload is `ContentAddSpecs` in `to_transfer()` form, i.e. double SCALE-encoded `NetworkSpecsToSend` (the second SCALE is to have the exact payload length).

The payload signature is generated for the SCALE-encoded `NetworkSpecsToSend`.
Network specs are stored in the dedicated `SPECSTREE` tree of the Vault database. The network specs identifier is `NetworkSpecsKey`, a key built from the encryption used by the network and the network genesis hash. There could be networks with multiple supported encryption algorithms, thus the encryption is part of the key.
Some elements of the network specs could be slightly different for networks with the same genesis hash and different encryptions. These are:

Invariant specs, identical between all different encryptions:

The reason is that the network name and the base58 prefix can be a part of the network metadata, and the network metadata is not encryption-specific.

Specs static for a given encryption, that should not change over time once set:

To replace these, the user would need to remove the network and add it again, i.e. it is not possible to do so by accident.

Flexible display-related and convenience specs, that can change and could be changed by simply loading new ones over the old ones:

- default derivation path for the network (`//<network_name>`)
`load_metadata` update payload, payload code `80`
Loads metadata for a network already known to the Vault, i.e. for a network with network specs in the Vault database.

The update payload is `ContentLoadMeta` in `to_transfer()` form, and consists of concatenated SCALE-encoded metadata `Vec<u8>` and network genesis hash (`H256`, always 32 bytes).

The same blob is used to generate the signature.
Network metadata is stored in the dedicated `METATREE` tree of the Vault database. The network metadata identifier is `MetaKey`, a key built from the network name and the network metadata version.
Network metadata can get into the Vault and be used by the Vault only if it complies with the following requirements:

- metadata vector starts with the `b"meta"` prelude
- the part of the metadata vector after the `b"meta"` prelude is decodable as `RuntimeMetadata`
- the `RuntimeMetadata` version of the metadata is `V12`, `V13` or `V14`
- metadata has a `System` pallet
- there is a `Version` constant in the `System` pallet
- `Version` is decodable as `RuntimeVersion`
- if the base58 prefix is encoded in the metadata, it is decodable as `u16` or `u8`
Additionally, if `V14` metadata is received, its associated extensions will be scanned, and the user will be warned if the extensions are incompatible with transaction signing.
Also, in the case of `V14` metadata, the type of the encoded data stored in the `Version` constant is itself stored in the metadata types registry and in principle could differ from the `RuntimeVersion` above. At the moment, the type of `Version` is hardcoded, and any other type would not be processed and would be rejected with an error.
`load_types` update payload, payload code `81`
Loads types information.

Types information is needed to decode transactions made in networks with `RuntimeMetadata` version `V12` or `V13`.

Most networks already use `RuntimeMetadata` version `V14`, which has the types information incorporated in the metadata itself.

The `load_types` update is expected to become obsolete soon.
The update payload is `ContentLoadTypes` in `to_transfer()` form, i.e. double SCALE-encoded `Vec<TypeEntry>` (the second SCALE is to have the exact payload length).

The payload signature is generated for the SCALE-encoded `Vec<TypeEntry>`.
Types information is stored in the `SETTREE` tree of the Vault database, under the key `TYPES`.
The Vault can accept both verified and unverified updates; however, information once verified can not be replaced or updated by a weaker verifier without a full Vault reset.

A verifier could be `Some(_)` with a corresponding public key inside, or `None`. All verifiers for the data follow the trust-on-first-use principle.
The Vault uses:

- the general verifier
- a network verifier for each network
The general verifier information is stored in the `SETTREE` tree of the Vault database, under the key `GENERALVERIFIER`. The general verifier is always set to a value, be it `Some(_)` or `None`. Removing the general verifier means setting it to `None`. If no general verifier entry is found in the database, the database is considered corrupted and the Vault must be reset.
The network verifier information is stored in the dedicated `VERIFIERS` tree of the Vault database. The network verifier identifier is `VerifierKey`, a key built from the network genesis hash. The same network verifier is used for network specs with any encryption algorithm and for network metadata. A network verifier could be valid or invalid. A valid network verifier could be general or custom. Verifiers installed as a result of an update are always valid. An invalid network verifier blocks the use of the network unless the Vault is reset; it appears if the user marks a custom verifier as no longer trusted.
Updating a verifier could cause some data verified by the old verifier to be removed, to avoid confusion regarding which verifier signed the data currently stored in the database. The removed data is called a "hold", and the user receives a warning if accepting a new update would cause hold data to be removed.

The general verifier is the strongest and the most reliable verifier known to the Vault. The general verifier could sign all kinds of updates. By default the Vault uses a Parity-associated key as the general verifier, but users can remove it and set their own. There can be only one general verifier at any time.
The general verifier can be removed only by a complete wipe of the Vault, through the `Remove general certificate` button in the Settings. This resets the Vault database to the default content and sets the general verifier to `None`, which will be updated to the first verifier encountered by the Vault.

The expected usage is that the user removes the old general verifier and immediately afterwards loads an update from the preferred source, thus setting the general verifier to the user-preferred value.
The general verifier can be updated from `None` to `Some(_)` by accepting a verified update. This results in removing the "general hold", i.e.:
The general verifier can not be changed from `Some(_)` to another, different `Some(_)` by simply accepting updates.
Note that if the general verifier is `None`, none of the custom verifiers could be `Some(_)`. Similarly, if a verifier is recorded as custom in the database, its value can not be the same as the value of the general verifier. If found, these situations indicate database corruption.
Custom verifiers could be used for network information that was verified, but not by the general verifier. There can be as many custom verifiers as needed at any time. A custom verifier is considered weaker than the general verifier.

A custom verifier set to `None` could be updated to:

- `Some(_)`

A custom verifier set to `Some(_)` could be updated to the general verifier.

These verifier updates are done by accepting an update signed by the new verifier.

Any of the custom network verifier updates results in removing the "hold", i.e. all network specs entries (for all encryption algorithms on file) and all network metadata entries.
Cut the QR data and get:

- the payload type (third `u8` of the prelude)
- only if the update is signed, i.e. the encryption is not `0xff`: the update verifier public key, its length matching the encryption (32 or 33 `u8` immediately after the prelude)

If the data length is insufficient, the Vault produces an error and suggests loading a non-damaged update.

Using the payload type from the prelude, determine the update payload length and cut the payload from the concatenated verifier signature and reserved tail.

If the data length is insufficient, the Vault produces an error and suggests loading a non-damaged update.

Only if the update is signed, i.e. the encryption is not `0xff`: cut the verifier signature, its length matching the encryption (64 or 65 `u8` immediately after the update payload). The remaining data is the reserved tail; currently it is not used.

If the data length is insufficient, the Vault produces an error and suggests loading a non-damaged update.

Verify the signature for the payload. If this fails, the Vault produces an error indicating that the update has an invalid signature.
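The cutting sequence above can be sketched as one function. This is a hypothetical helper (not the Vault's code); `payload_len` stands in for the length determined from the payload type, and signature verification is out of scope here:

```rust
// Generic update cutting, per the sequence above. Hypothetical helper.
struct Update<'a> {
    verifier_key: Option<&'a [u8]>,
    payload: &'a [u8],
    signature: Option<&'a [u8]>,
    reserved_tail: &'a [u8],
}

fn cut_update(data: &[u8], payload_len: usize) -> Result<Update<'_>, &'static str> {
    const ERR: &str = "insufficient data length, load a non-damaged update";
    if data.len() < 3 {
        return Err(ERR);
    }
    // Key/signature lengths follow the encryption byte of the prelude.
    let (key_len, sig_len) = match data[1] {
        0x00 | 0x01 => (32, 64), // Ed25519, Sr25519
        0x02 => (33, 65),        // Ecdsa
        0xff => (0, 0),          // unsigned update
        _ => return Err("unknown encryption code"),
    };
    let mut rest = &data[3..];
    let verifier_key = if key_len > 0 {
        let key = rest.get(..key_len).ok_or(ERR)?;
        rest = &rest[key_len..];
        Some(key)
    } else {
        None
    };
    let payload = rest.get(..payload_len).ok_or(ERR)?;
    rest = &rest[payload_len..];
    let signature = if sig_len > 0 {
        let sig = rest.get(..sig_len).ok_or(ERR)?;
        rest = &rest[sig_len..];
        Some(sig)
    } else {
        None
    };
    Ok(Update { verifier_key, payload, signature, reserved_tail: rest })
}

fn main() {
    // Unsigned update: prelude + 5-byte payload, nothing else.
    let data = [0x53, 0xff, 0xc1, 1, 2, 3, 4, 5];
    let update = cut_update(&data, 5).unwrap();
    assert!(update.verifier_key.is_none() && update.signature.is_none());
    assert_eq!(update.payload, [1, 2, 3, 4, 5]);
    assert!(update.reserved_tail.is_empty());
}
```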
`add_specs` processing sequence

The update payload is transformed into `ContentAddSpecs` and the incoming `NetworkSpecsToSend` are retrieved, or the Vault produces an error indicating that the `add_specs` payload is damaged.
The Vault checks that no change in invariant specs occurs. If there are entries in the `SPECSTREE` of the Vault database with the same genesis hash as in the newly received specs (the encryption does not necessarily match), the Vault checks that the name and base58 prefix in the received specs are the same as in the specs already in the Vault database.
The Vault checks the verifier entry for the received genesis hash.

If there are no entries, i.e. the network is altogether new to the Vault, the specs could be added to the database. During the same database transaction the network verifier is set up:
| `add_specs` update verification | General verifier in Vault database | Action |
| :- | :- | :- |
| unverified, `0xff` update encryption code | `None` or `Some(_)` | (1) set network verifier to custom, `None` (regardless of the general verifier); (2) add specs |
| verified by `a` | `None` | (1) set network verifier to general; (2) set general verifier to `Some(a)`, process the general hold; (3) add specs |
| verified by `a` | `Some(b)` | (1) set network verifier to custom, `Some(a)`; (2) add specs |
| verified by `a` | `Some(a)` | (1) set network verifier to general; (2) add specs |
If there are entries, i.e. the network was known to the Vault at some point after the last Vault reset, the network verifier in the database and the verifier of the update are compared. The specs could be added to the database according to the table below.

Note that if exactly the same specs as already in the database are received with an updated verifier and the user accepts the update, the verifier gets updated and the specs stay in the database.
| `add_specs` update verification | Network verifier in Vault database | General verifier in Vault database | Action |
| :- | :- | :- | :- |
| unverified, `0xff` update encryption code | custom, `None` | `None` | accept specs if good |
| unverified, `0xff` update encryption code | custom, `None` | `Some(a)` | accept specs if good |
| unverified, `0xff` update encryption code | general | `None` | accept specs if good |
| unverified, `0xff` update encryption code | general | `Some(a)` | error: update should have been signed by `a` |
| verified by `a` | custom, `None` | `None` | (1) change network verifier to general, process the network hold; (2) set general verifier to `Some(a)`, process the general hold; (3) accept specs if good |
| verified by `a` | custom, `None` | `Some(a)` | (1) change network verifier to general, process the network hold; (2) accept specs if good |
| verified by `a` | custom, `None` | `Some(b)` | (1) change network verifier to custom, `Some(a)`, process the network hold; (2) accept specs if good |
| verified by `a` | custom, `Some(a)` | `Some(b)` | accept specs if good |
| verified by `a` | custom, `Some(b)` | `Some(a)` | (1) change network verifier to general, process the network hold; (2) accept specs if good |
| verified by `a` | custom, `Some(b)` | `Some(c)` | error: update should have been signed by `b` or `c` |
Before the `NetworkSpecsToSend` are added to the `SPECSTREE`, they get transformed into `NetworkSpecs`, with the `order` field (display order in Vault network lists) added. Each new network specs entry is added at the end of the list.
`load_metadata` processing sequence

The update payload is transformed into `ContentLoadMeta`, from which the metadata and the genesis hash are retrieved, or the Vault produces an error indicating that the `load_metadata` payload is damaged.
The Vault checks that the received metadata fulfills all Vault metadata requirements outlined above. Otherwise an error is produced indicating that the received metadata is invalid.
Incoming `MetaValues` are produced, containing the network name, the network metadata version and an optional base58 prefix (if it is recorded in the metadata).
The network genesis hash is used to generate a `VerifierKey` and check whether the network has an established network verifier in the Vault database. If there is no network verifier associated with the genesis hash, an error is produced, indicating that network metadata can be loaded only for networks introduced to the Vault.
The `SPECSTREE` tree of the Vault database is scanned for entries with a genesis hash matching the one received in the payload. The Vault accepts `load_metadata` updates only for networks that have at least one network specs entry in the database.

Note that even if the verifier in step (3) above is found, it does not necessarily mean that the specs are found (for example, if a network verified by the general verifier was removed by the user).
If the specs are found, the Vault checks that the network name and, if present, the base58 prefix from the received metadata match the ones in the network specs from the database. If the values do not match, the Vault produces an error.
The Vault compares the verifier of the received update and the verifier for the network from the database. The update verifier must be exactly the same as the verifier already in the database. If there is a mismatch, the Vault produces an error indicating that the `load_metadata` update for the network must be signed by the specified verifier (general or custom) or be unsigned.
If the update has passed all checks above, the Vault searches for the metadata entry in the `METATREE` of the Vault database, using the network name and version from the update to produce a `MetaKey`.
If the key is not found in the database, the metadata could be added.

If the key is found in the database and the metadata is exactly the same, the Vault produces an error indicating that the metadata is already in the database. This is expected to be quite a common outcome.

If the key is found in the database and the metadata is different, the Vault produces an error: the metadata is not acceptable. This situation can occur if there was a silent metadata update or if the metadata is corrupted.

`load_types` processing sequence

The update payload is transformed into `ContentLoadTypes`, from which the types description vector `Vec<TypeEntry>` is retrieved, or the Vault produces an error indicating that the `load_types` payload is damaged.
`load_types` updates must be signed by the general verifier.
| `load_types` update verification | General verifier in Vault database | Action |
| :- | :- | :- |
| unverified, `0xff` update encryption code | `None` | load types if the types are not yet in the database |
| verified by `a` | `None` | (1) set general verifier to `Some(a)`, process the general hold; (2) load types, warn if the types are the same as before |
| verified by `a` | `Some(b)` | reject types, error indicates that `load_types` requires the general verifier signature |
| verified by `a` | `Some(a)` | load types if the types are not yet in the database |
If the `load_types` verifier is the same as the general verifier in the database and the types are the same as the types in the database, the Vault produces an error indicating that the types are already known.
Each time types are loaded, the Vault produces a warning: `load_types` is a rare and quite unexpected operation.
Derivations import, payload code `de`

Derivations import has the following structure:

| prelude | derivations import payload |
|---|---|
The derivations import payload is a SCALE-encoded `ExportAddrs` structure. It does not contain any private keys or seed phrases.

The `ExportAddrs` structure holds the following information about each key:
- `ss58` address of the derived key (`h160` for ethereum-based chains)
for ethereum based chains)When processing derivations import, all data after prelude is transformed into
+ExportAddrs
. Network genesis hash, encryption and derivations set are
+derived from it, or the Vault produces a warning indicating that the derivation
+import payload is corrupted.
The Vault checks that the network for which the derivations are imported has network specs in the Vault database. If not, a warning is produced.

The Vault checks that the derivation set contains only valid derivations. If any derivation is unsuitable, a warning is produced indicating this.
If the user accepts the derivations import, the Vault generates a key for each valid derivation.

If one of the derived keys already exists, it is ignored, i.e. no error is produced.

If there are two derivations with an identical path within the payload, only one derived key is created.
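The two deduplication rules above can be sketched with a set. This is a hypothetical helper, not the Vault's code; `existing` stands in for the keys already in the database:

```rust
use std::collections::HashSet;

// Keys that already exist are skipped silently; duplicate paths inside
// one payload yield a single key. Hypothetical helper.
fn keys_to_create(existing: &HashSet<String>, imported: &[&str]) -> Vec<String> {
    let mut seen = existing.clone();
    let mut out = Vec::new();
    for path in imported {
        // insert() returns false if the path was already present.
        if seen.insert((*path).to_string()) {
            out.push((*path).to_string());
        }
    }
    out
}

fn main() {
    let existing: HashSet<String> = ["//alice".to_string()].into_iter().collect();
    let created = keys_to_create(&existing, &["//alice", "//bob", "//bob"]);
    assert_eq!(created, vec!["//bob".to_string()]); // no error, one key
}
```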
On the top level, Vault consists of the following parts:
There are 3 actual endpoints in the `rust` folder: `signer`, which is the source of the library used for Vault itself; `generate_message`, which is used to update the Vault repo with new built-in network information and to generate over-the-airgap updates; and `qr_reader_pc`, a minimalistic app to parse QR codes, which we had to write since there was no reasonably working alternative.
Sub-folders of the `rust` folder:

- `constants` — constant values defined for the whole workspace.
- `db_handling` — all database-related operations for Vault and the `generate_message` tool. Most of the business logic is contained here.
- `defaults` — built-in and test data for the database.
- `definitions` — objects used across the workspace are defined here.
- `files` — contains test files and is used for build and update generation processes. Most contents are gitignored.
- `generate_message` — tool to generate over-the-airgap updates and maintain the network info database on the hot side.
- `navigator` — navigation for the Vault app; it is realized in Rust to unify app behavior across the platforms.
- `parser` — parses signable transactions. This is internal logic for `transaction_parsing` that is used when a signable transaction is identified, but it could be used as a standalone lib for the same purpose.
- `printing_balance` — small lib to render tokens with proper units.
- `qr_reader_pc` — small standalone PC app to parse QR codes in the Vault ecosystem. Also capable of parsing multiframe payloads (theoretically; in practice it is not feasible due to low PC webcam performance).
- `qr_reader_phone` — logic to parse QR payloads in Vault.
- `qrcode_rtx` — multiframe erasure-encoded payload generator for signer update QR animations.
- `qrcode_static` — generation of static QR codes used all over the workspace.
- `signer` — FFI interface crate to generate bindings that bridge native code and the Rust backend.
- `transaction_parsing` — high-level parser for all QR payloads sent into Vault.
- `transaction_signing` — all operations that could be performed when the user accepts a payload parsed with `transaction_parsing`.
For interfacing the Rust code and the native interfaces we use the uniffi framework. It is a framework intended to aid building cross-platform software in Rust, especially for re-using components written in Rust in smartphone application development. Other than Vault itself, one of the most notable users of the `uniffi` framework is the Mozilla Application Services.

The `uniffi` framework provides a way for the developer to define a clear and typesafe FFI interface between components written in Rust and languages such as Kotlin and Swift. This approach leads to a much more robust architecture than implementing a homegrown FFI with, say, passing JSON-serialized data back and forth between Kotlin and Rust code. Here is why.
Suppose the application needs to pass the following structure through the FFI from Kotlin to Rust or back:

```rust
#[derive(Serialize, Deserialize)]
struct Address {
    street: String,
    city: String,
}
```
This would mean that on the Kotlin side of the FFI there would have to be some way of turning this type from JSON into a Kotlin type. It may be some sort of schema, or even manual JSON value-by-key data extraction.
Now suppose this struct is changed by adding and removing some fields:

```rust
#[derive(Serialize, Deserialize)]
struct Address {
    country: String,
    city: String,
    index: usize,
}
```
After this change on the Rust side, the developer would have to remember to reflect the changes on the Kotlin and Swift sides, and if that is not done there is a chance it will not be caught at build time by CI. It is quite hard to remember everything, and having a guarantee that such things are caught at compile time is much better than not having it. One of the things `uniffi` solves is exactly this: it provides compile-time guarantees of typesafety.
The other concern with the JSON serialization approach is performance. As long as small objects are transferred back and forth, it is no trouble encoding them into strings. But suppose the application requires transferring bigger blobs of binary data, such as `png` images or even metadata files. Using JSON would force the developer to encode such blobs as `String`s before passing them into the FFI, and to decode them back into binary blobs on the other side of the FFI. `uniffi` helps to avoid this as well.
Native frontends are made separately for each supported platform. To keep things uniform, interfaces are made as simple as possible and as much code as possible is written in the unified Rust component. Still, platform-specific functions, including runtime management and threading, are also accessed through the native framework. The structure of the native frontend follows the modern (2022) reactive design pattern of the View-Action-Model triad. Thus, all backend is located in the data model section, along with a few native business logic components.
It is important to note that native navigation is not used, due to subtle differences in its seemingly uniform design across platforms. Navigation is instead implemented on the Rust side and, as an additional advantage, is tested there at lower computational cost for CI pipelines.
For storage of all data except secrets, a sled database is used. The choice was based on its light weight, reliability and portability.
+ +Vault has the following systems:
These systems are located in different parts of the app, and some of them rely on hot-side infrastructure. The general design goal was to isolate as much as possible in easily maintainable Rust code and keep only the necessary functions on the native side. Currently, those include:
Keypairs used in Vault are generated from a secret seed phrase, a derivation path and an optional secret password, in accordance with the specifications described in the subkey manual, using code imported directly from the Substrate codebase for best conformance.
The secret seed phrase is stored as a string in the device's original KMS. It is symmetrically encrypted with a strong key that is either stored in a hardware-protected keyring or uses biometric data (in the case of legacy Android devices without the strongbox system). Secrets access is managed by the operating system's built-in authorization interface. Authorization is required for creation of seeds, access to seeds and removal of seeds. One particular special case is the addition of the first seed on the iOS platform, which does not trigger the authorization mechanism, as the storage is empty at that moment; this is in agreement with the iOS key management system design and potentially leads to a threat of an attacker replacing a single key by adding it to an empty device; this attack is countered by authorization on seed removal.
Thus, the source of truth for secret seeds is the KMS. To synchronize the rest of the app, the list of seed identifiers is sent to the backend on app startup and on all events that change this list, by calling `update_seed_names(Vec<String>)`.
The random seed generator and seed recovery tools are implemented in Rust. These are the only 2 cases where a seed originates outside the KMS.
The most complex part of key management is the storage of derivation strings and public keys. Improper handling here may lead to the user's loss of control over their assets.
Key records are stored as strings in the database, associated with secret seed identifiers, a crypto algorithm, and a list of allowed networks. The public key and its cryptographic algorithm are used to deterministically generate the database record key; thus, by design, distinct key entries directly correspond to addresses on chain.
Creation of new records requires generation of public keys through the derivation process, for which the secret seed has to be queried; so adding items to this database requires authentication.
Substrate keys could natively be used across all networks supporting their crypto algorithm. This may lead to accidental re-use of keys; it is therefore not forbidden by the app, but networks are isolated unless the user explicitly expresses the desire to enable a key in a given network. On the user's side this is abstracted into the creation of independent addresses; however, the real implementation stores addresses with public keys as storage keys and thus does not distinguish between networks. To isolate networks, each key stores a field with a list of allowed networks, and when the user "creates" an address with the same pubkey as an already existing one, it is just another network added to that list.
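The storage model just described can be sketched with a map from key identity to an allowed-network set. The types here are illustrative (the real records live in the Vault database with richer metadata):

```rust
use std::collections::{HashMap, HashSet};

// One record per (public key, crypto algorithm); network isolation is a
// set of allowed network genesis hashes. Hypothetical types.
type RecordKey = (Vec<u8>, &'static str); // (public key, crypto algorithm)

#[derive(Default)]
struct AddressBook {
    records: HashMap<RecordKey, HashSet<[u8; 32]>>,
}

impl AddressBook {
    // "Creating" an address for an existing pubkey only extends its
    // allowed-network list; no new record appears.
    fn create_address(&mut self, key: RecordKey, genesis_hash: [u8; 32]) {
        self.records.entry(key).or_default().insert(genesis_hash);
    }
}

fn main() {
    let mut book = AddressBook::default();
    let key: RecordKey = (vec![0xaa; 32], "sr25519");
    book.create_address(key.clone(), [0x01; 32]); // "address" on network 1
    book.create_address(key.clone(), [0x02; 32]); // same key, network 2
    assert_eq!(book.records.len(), 1); // still a single record
    assert_eq!(book.records[&key].len(), 2); // two allowed networks
}
```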
Keys can be imported through a QR code created by the `generate_message` tool (instructions). A plaintext newline-separated list of derivations should be supplied to the tool along with a network identifier; the import is thus bound to a certain network. However, it is not bound to any particular seed: the user can select any of the created seeds and, after authorization, create keys with the given paths. Bulk import of password-protected keys is forbidden at the moment.
The optional password (the part of the derivation path after `///`) is never stored; addresses that have a password in their derivation path are only marked as such. Thus, the password is queried every time it is needed, with a tool separate from the OS authentication interface, but together with the authentication screen, as the password is always used together with a secret seed phrase.
All memory handled by the native frameworks relies on the native frameworks' memory protection mechanisms (JVM virtualization, and Swift isolation and garbage collection). However, when secrets are processed in Rust, no such inherent memory safety features are available. To prevent secrets from remaining in memory after their use, the `zeroize` library is used. (TODO: describe the string destruction protocol, or fix it.)
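What `zeroize` provides, in spirit, is overwriting a secret buffer before it is released, with the stores guaranteed not to be optimized away. A simplified std-only sketch (the real code uses the `zeroize` crate, which adds further guarantees):

```rust
use std::sync::atomic::{compiler_fence, Ordering};

// Overwrite a secret buffer so its contents do not linger in memory.
// Simplified sketch of what the `zeroize` crate does.
fn wipe(secret: &mut [u8]) {
    for byte in secret.iter_mut() {
        // write_volatile keeps the optimizer from eliding the stores.
        unsafe { std::ptr::write_volatile(byte, 0) };
    }
    compiler_fence(Ordering::SeqCst);
}

fn main() {
    let mut seed = *b"correct horse battery staple";
    wipe(&mut seed);
    assert!(seed.iter().all(|&b| b == 0));
}
```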
Every payload to be signed is first extracted from the transfer payload in agreement with the UOS specification and the polkadot-js implementation. Only payloads that can be parsed and somehow visualized can be signed, to avoid blind signing; thus, on a parser error no signable payload is produced and the signing procedure is not initiated.
When a signable payload is ready, it is stored in the TRANSACTION tree while the user decides whether to sign it. While it is in storage, the database checksum is monitored for changes.
Signing uses a private key generated from the KMS-protected secret seed phrase, the derivation string and the optional password. The signing operation itself is imported directly from the Substrate codebase as a dependency.
A signing event or its failure is logged, and the signature wrapped in the UOS format is presented as a static QR image on the phone.
The transaction parsing process is described in the UOS format documentation.
A signable transaction is decomposed into hierarchical cards for clarity. All possible SCALE-decodable types are assigned to generalized visualization patterns ("transaction cards"), with some types having special visualizations (`balance` formatted with proper decimals and units, identicons added to identities, etc.). Each card is assigned `order` and `indent` values that allow the cards to be shown in a lazy view environment. Thus, any network that meets the minimal metadata requirements should be decodable and visualizable.
Some cards also include documentation entries fetched from the metadata. Those can be expanded in the UI on touch.

Thus, the user has the opportunity to read the whole transaction before signing.
Transactions are encoded in QR codes in accordance with the UOS standard. QR codes can be sent into Vault (through static frames or dynamic multiframe animations) and back (only as static frames). QR codes are read through the native image recognition system and decoded by the Rust backend; output QR codes are generated in png format by the backend. There are 2 formats of multiframe QR codes: legacy multiframe and `raptorq` multiframe. The legacy multiframe format requires all frames in the animation to be collected and is thus impractical for larger payloads. The RaptorQ multiframe format allows any sufficient subset of frames to be collected, and thus allows large payloads to be transferred effortlessly.
Fast multiframe transfer works efficiently at 30 fps. Typical large payloads currently contain up to 200 frames. In theory this can be transferred in under 10 seconds; in practice it works in under 1 minute.
Vault can receive new networks and metadata updates through QR data. To prevent malicious updates from compromising security, a system of certificates is implemented.
Updates can be generated by any user; they can also be distributed in signed form, delegating the validity check to trusted parties. These trusted parties sign the metadata with their asymmetric key (certificate) and become verifiers once their update is uploaded to Vault. There are 2 tiers of certificates, "general" and "custom", with the first allowing more comfortable use of Vault at the cost of only one general verifier being allowed.
The rules about verifier certificates are designed around the simplicity of the security protocol: one trusted party becomes the main source of trust, and updates generated by it are simply accepted. If that party does not have all required updates available, another party can be added as a custom verifier. That verifier is not allowed to change specs at will, and suspicious activity by a custom verifier would interfere with network usage, thus stopping the user from doing potentially harmful things. This allows a less strenuous security policy on the user's side.
It is important to note that certificates cannot be effectively revoked given the airgapped nature of the app; it is therefore recommended to keep their keys on airgapped Vault devices if updates signed by these certificates are distributed publicly.
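A heavily condensed model of the two certificate tiers might look as follows. The types, field names, and the exact acceptance rules here are illustrative assumptions; Vault's real verifier logic is richer (per-network bindings, spec-change restrictions, and so on):

```rust
/// Simplified trust state: at most one general verifier, and one
/// custom verifier standing in for a single network's custom slot.
#[derive(Default)]
struct TrustState {
    general_verifier: Option<String>,
    custom_verifier: Option<String>,
}

/// A verifier key is adopted on first use for its tier; afterwards any
/// update signed by a different key for that tier is rejected, so a
/// verifier cannot be swapped silently.
fn accept_update(trust: &mut TrustState, key: &str, general: bool) -> bool {
    let slot = if general {
        &mut trust.general_verifier
    } else {
        &mut trust.custom_verifier
    };
    match slot {
        None => {
            *slot = Some(key.to_owned());
            true
        }
        Some(existing) => existing.as_str() == key,
    }
}

fn main() {
    let mut trust = TrustState::default();
    assert!(accept_update(&mut trust, "parity-general", true)); // adopted
    assert!(accept_update(&mut trust, "parity-general", true)); // same key ok
    assert!(!accept_update(&mut trust, "impostor", true));      // rejected
    assert!(accept_update(&mut trust, "custom-net-team", false)); // custom tier
    println!("general verifier: {:?}", trust.general_verifier);
}
```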
An additional security feature is the network detector. When the app is on, it runs in the background (on a low-priority thread) and attempts to monitor network availability. This detector is implemented differently on different platforms and has different features and limitations; it does not and could not provide full connectivity monitoring, so properly maintaining the airgap remains the user's responsibility. A Vault device should always be kept in airplane mode, and all other connectivity should be disabled.
The basic idea of network detection alertness is that when network connectivity is detected, 3 things happen:
When network connectivity is lost, only the visual indication changes. To restore the clean state of Vault, the user should acknowledge the safety alert by pressing the shield icon, then reading and accepting the warning. The acknowledgement is logged in history, the visual indication changes to green, and all normal Vault functions are restored.
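This acknowledgement flow, together with the 4-state detection variable described later in the data model, can be sketched as a small state machine. The state and function names below are illustrative, not Vault's actual identifiers:

```rust
/// The four detector states from the text: safe, network currently
/// detected, network detected earlier, and detector error.
#[derive(Debug, PartialEq, Clone, Copy)]
enum NetworkAlert {
    Safe,
    Active,
    Past,
    Error,
}

fn on_network_detected(s: NetworkAlert) -> NetworkAlert {
    match s {
        NetworkAlert::Error => NetworkAlert::Error,
        _ => NetworkAlert::Active,
    }
}

/// Losing connectivity only changes the indication, not the clean state.
fn on_network_lost(s: NetworkAlert) -> NetworkAlert {
    match s {
        NetworkAlert::Active => NetworkAlert::Past,
        other => other,
    }
}

/// Pressing the shield icon and accepting the warning clears a past
/// alert; it cannot clear an active connection.
fn on_acknowledge(s: NetworkAlert) -> NetworkAlert {
    match s {
        NetworkAlert::Past => NetworkAlert::Safe,
        other => other,
    }
}

fn main() {
    let mut s = NetworkAlert::Safe;
    s = on_network_detected(s); // indicator goes red, event logged
    s = on_network_lost(s);     // only visual indication changes
    s = on_acknowledge(s);      // logged; indicator back to green
    assert_eq!(s, NetworkAlert::Safe);
    println!("{:?}", s);
}
```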
Airplane mode detection on iOS is forbidden and may lead to expulsion of the app from the App Store. The detector therefore relies on probing network interfaces: if any network interface is up, the network alert is triggered.
On Android, the network detector is triggered directly by the airplane mode change event.
Other possible network connectivity methods are not monitored. Even though it is possible to add detectors for them, accessing their status would require the app to request corresponding permissions from the OS, reducing the app's isolation and decreasing overall security - first, by increasing the chance of a leak in a breach event, and second, by making a corrupt fake app that leaks information through the network appear more normal. Furthermore, some devices reportedly allow network connection through cable while in airplane mode, and there has been no research on what debugging through cable is capable of in airplane mode. Thus, the network detector is a convenience tool and should not be relied on as the sole source of security; the user is responsible for device isolation.
All events that happen in Vault are logged by the backend in the history tree of the database. In the user interface, all events are presented in chronological order on the log screen. On the same screen, the history checksum can be seen and custom text entries can be added to the database. The checksum uses the time each record was added to history in its computation and is therefore impractical to forge.
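The role of timestamps in the checksum can be shown with a minimal sketch. This is not Vault's actual checksum algorithm (shown here with the standard library's `DefaultHasher` purely for illustration); the point from the text is that each record's timestamp enters the computation, so a forged log would need matching timestamps too:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Fold every (timestamp, entry) pair into one running hash; shifting
/// any single timestamp changes the final checksum.
fn history_checksum(records: &[(u64, &str)]) -> u64 {
    let mut h = DefaultHasher::new();
    for (timestamp, entry) in records {
        timestamp.hash(&mut h);
        entry.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let log = [
        (1_700_000_000, "seed created"),
        (1_700_000_060, "transaction signed"),
    ];
    let shifted = [
        (1_700_000_001, "seed created"), // timestamp off by one second
        (1_700_000_060, "transaction signed"),
    ];
    assert_ne!(history_checksum(&log), history_checksum(&shifted));
    println!("checksum: {:x}", history_checksum(&log));
}
```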
Events presented on the log screen are colored to distinguish "normal" and "dangerous" events. The shown records give minimal important information about the event. On tap, a detailed info screen is shown, where all events that happened at the same time are presented in detail (including transactions, which are decoded for review if metadata is still available).
The log can also be erased for privacy; the erasure event is logged and becomes the first event in the recorded history.
Vault can sign network and metadata updates that can be used by other Vaults. The user can select any update component present in Vault, together with any key available for any network, and generate a QR code which, upon decoding, can be used by `generate_message` or a similar tool to produce an over-the-airgap update. See the detailed documentation.
This feature was designed for elegance, but it is quite useful for maintaining an update signing key in large update distribution centers, as it allows secure storage of a secret certificate key that could not practically be revoked if compromised.
The user interface is organized through a View-Action-DataModel abstraction.
Vault's visual representation is abstracted into 3 visual layers placed on top of each other: `screen`, `modal` and `alert`. This structure is mostly an adaptation of iOS design guidelines, as the Android native UI is much more flexible and it is easier to adapt it to iOS design patterns than vice versa. At most one of each component can be presented simultaneously. The screen component is always present in the app, but sometimes it is fully or partially blocked by other components.
Modals and alerts are dismissed on the `goBack` action; screens have complex navigation rules. Modals require the user to take action and interrupt the flow. Alerts are used for short informational interruptions, like error messages or confirmations.
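The layering and dismissal order can be sketched as follows. The struct and its fields are illustrative assumptions, not Vault's actual types; screen-level navigation is deliberately left out:

```rust
/// Simplified model of the three visual layers from the text: a screen
/// is always present, with at most one modal and one alert above it.
#[derive(Debug, PartialEq)]
struct ViewState {
    screen: &'static str,
    modal: Option<&'static str>,
    alert: Option<&'static str>,
}

impl ViewState {
    /// goBack dismisses the topmost transient layer first: alert, then
    /// modal. Screens have their own navigation rules (not modeled).
    fn go_back(&mut self) {
        if self.alert.take().is_some() {
            return;
        }
        if self.modal.take().is_some() {
            return;
        }
        // else: screen-level navigation rules would apply here
    }
}

fn main() {
    let mut v = ViewState {
        screen: "Keys",
        modal: Some("ExportKey"),
        alert: Some("Error"),
    };
    v.go_back(); // dismisses the alert, modal stays
    assert_eq!((v.alert, v.modal), (None, Some("ExportKey")));
    v.go_back(); // dismisses the modal, screen stays
    assert_eq!((v.modal, v.screen), (None, "Keys"));
    println!("{:?}", v);
}
```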
In addition to these, the header bar is always present on screen and the footer bar is presented in some cases. The footer bar always has the same structure and only allows navigation to one of the navigation roots. The top bar might contain a back button, the screen name, and an extra menu button; the status indicator is always shown on the top bar.
Almost all actions available to the user are in fact handled by a single operation - the `action()` backend function, which is called through the `pushButton` native interface. On the native side, this operation is debounced by time. On the Rust side, actions are performed on a static mutex storing the app state; when the mutex is blocked, actions are ignored, as are impossible actions not allowed in the current navigation state. Thus, the state of the app is protected against undefined concurrency effects by the hardware-button-like behavior of `action()`.
Most actions lead to a change of the shown combination of screen, modal and alert; but some actions - for example, those involving keyboard input - alter the contents of a UI component. In most cases, all parameters of UI components are passed as states (a more or less similar concept on all platforms) and the frontend framework detects updates and seamlessly performs proper rendering.
`action()` accepts 3 parameters: the action type (enum), action data (`&str`), and secret data (`&str`). Secret data is used to transfer secret information, and care is taken to always properly zeroize its contents; in contrast, action data may contain large strings and is handled normally.
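The zeroization idea can be illustrated with a minimal wrapper. This is a sketch only: Vault uses dedicated zeroization tooling, and a naive overwrite can be optimized away by the compiler, which is why even this sketch uses `write_volatile`. The `SecretData` type is a hypothetical name:

```rust
/// Wrapper that wipes its contents in place before the memory is freed.
struct SecretData(String);

impl SecretData {
    fn zeroize(&mut self) {
        // SAFETY: writing 0x00 bytes keeps the buffer valid UTF-8.
        unsafe {
            for b in self.0.as_bytes_mut() {
                std::ptr::write_volatile(b, 0);
            }
        }
    }
}

impl Drop for SecretData {
    // Wipe the secret whenever it goes out of scope, even on panic unwind.
    fn drop(&mut self) {
        self.zeroize();
    }
}

fn main() {
    let mut secret = SecretData(String::from("correct horse battery staple"));
    // ... the secret would be used here, e.g. to derive a signing key ...
    secret.zeroize();
    assert!(secret.0.as_bytes().iter().all(|&b| b == 0));
    println!("secret wiped in place");
}
```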
The data model as seen by the native UI consists of 3 parts: secret seed content, network detection state, and screen contents. Secret seed content consists of a list of seed names that are used as handles to fetch secret material from secure storage. Network detection state is a 4-state variable describing the current detection status (safe state, network is currently detected, network was detected before, error state). The rest of the data model is a black box in Rust.
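The three-part shape of that model might be sketched like this; all type and field names are assumptions for illustration, and the screen contents are kept as an opaque string to mirror the "black box in Rust" point:

```rust
/// The four network detection states named in the text.
#[derive(Debug, PartialEq)]
enum NetworkState {
    Safe,
    Detected,
    DetectedEarlier,
    Error,
}

/// Sketch of the data model visible to the native UI.
struct DataModel {
    seed_names: Vec<String>,  // handles to fetch secrets from secure storage
    network: NetworkState,    // 4-state detection variable
    screen_json: String,      // opaque screen contents owned by Rust
}

fn main() {
    let model = DataModel {
        seed_names: vec!["Polkadot stash".to_string()],
        network: NetworkState::Safe,
        screen_json: String::from("{\"screen\":\"Log\"}"),
    };
    assert_eq!(model.network, NetworkState::Safe);
    println!(
        "{} seeds, {} bytes of opaque screen data",
        model.seed_names.len(),
        model.screen_json.len()
    );
}
```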
On the Rust side, the model is generated by the `navigation` crate. The state of the app is stored in a lazy static `State` object, and sufficient information for View rendering is generated into an `ActionResult` object that is sent to the native layer on each action update.
Polkadot Vault is a mobile application that allows any smartphone to act as an air-gapped crypto wallet. This is also known as "cold storage".
You can create accounts in Substrate-based networks, sign messages/transactions, and transfer funds to and from these accounts without any sort of connectivity enabled on the device.
You must turn off or even physically remove the smartphone's Wi-Fi, mobile network, and Bluetooth to ensure that the mobile phone containing these accounts will not be exposed to any online threat. Switching to airplane mode suffices in many cases.
Disabling the mobile phone's networking abilities is a requirement for the app to be used as intended; check our wiki for more details.
Have a look at the tutorial on our wiki to learn how to use Polkadot Vault together with the Polkadot-js app.
Any data transfer from or to the app happens via QR codes. This way, the most sensitive piece of information - the private keys - never leaves the phone. The Polkadot Vault mobile app can be used to store any Substrate account; this includes the Polkadot (DOT) and Kusama (KSM) networks.
Currently, Vault is available only for iOS; the Android version is coming soon.
These tutorials and docs are heavily outdated at the moment; please use them as references, and consider helping to improve them.
If you are upgrading from an older version of Vault, please see the changelog and Upgrading Vault.
Please note that the Vault app is an advanced tool designed for maximum security and complex features. For many use cases, more user-friendly tools will suffice.
Older versions of this app may be useful for development; however, they are not safe for production use. They are available on the following branches:
Polkadot-Vault is GPL 3.0 licensed.