The saas service name, like 'aliyun.email'/'aws.ses'/'...'
+ If your system uses multiple email services at the same time,
+ you can specify which service to use with this field.
The saas service name, like 'aliyun.email'/'aws.ses'/'...'
+ If your system uses multiple email services at the same time,
+ you can specify which service to use with this field.
id identifies the event. Producers MUST ensure that source + id
+is unique for each distinct event. If a duplicate event is re-sent
+(e.g. due to a network error) it MAY have the same id.
source identifies the context in which an event happened.
+Often this will include information such as the type of the
+event source, the organization publishing the event or the process
+that produced the event. The exact syntax and semantics behind
+the data encoded in the URI are defined by the event producer.
The content of the configuration item.
+It is empty if the configuration is not set, including the case that the configuration was changed from set to unset.
Subscribes to update events for the given keys.
+If true, when any configuration item in this request is updated, the app will receive an event via OnConfigurationEvent() of the app callback.
The metadata passed to output binding components.
+Common metadata property:
+- ttlInSeconds : the time to live in seconds for the message.
+If set in the binding definition, it causes all messages to
+have a default time to live. A ttl set on the message overrides any value
+in the binding definition.
(optional) The entity tag which represents the specific version of data.
+The exact ETag format is defined by the corresponding data store. Layotto runtime only treats ETags as opaque strings.
Required. The type of operation to be executed.
+Legal values include:
+"upsert" represents an update or create operation
+"delete" represents a delete operation
Required. lock_owner indicates the identifier of the lock owner.
+You can generate a uuid as lock_owner. For example, in golang:
+req.LockOwner = uuid.New().String()
+This field is per request, not per process, so it is different for each request,
+which aims to prevent multiple threads in the same process from trying the same lock concurrently.
+The reasons why we don't make it automatically generated are:
+1. If it is automatically generated, there must be a 'my_lock_owner_id' field in the response.
+This name is so weird that we think it is inappropriate to put it into the api spec.
+2. If we change the field 'my_lock_owner_id' in the response to 'lock_owner', meaning the current lock owner of this lock,
+we find that in some lock services users can't get the current lock owner. Actually users don't need it at all.
+3. When a reentrant lock is needed, the existing lock_owner is required to identify the client and check "whether this client can reenter this lock".
+So this field in the request shouldn't be removed.
| Name | Number | Description |
| ---- | ------ | ----------- |
| WEAK | 0 | (default) WEAK means a "best effort" incrementing service, but there is no strict guarantee of global monotonic increase. The next id is "probably" greater than the current id. |
| STRONG | 1 | STRONG means a strict guarantee of global monotonic increase. The next id "must" be greater than the current id. |
StateOptions.StateConcurrency

Enum describing the supported concurrency for state.
The API server uses Optimistic Concurrency Control (OCC) with ETags.
When an ETag is associated with a save or delete request, the store shall allow the update only if the attached ETag matches the latest ETag in the database.
But when the ETag is missing in a write request, the state store shall handle the request using the specified strategy (e.g. in a last-write-wins fashion).

| Name | Number | Description |
| ---- | ------ | ----------- |
| CONCURRENCY_UNSPECIFIED | 0 | Concurrency state is unspecified |
| CONCURRENCY_FIRST_WRITE | 1 | First write wins |
| CONCURRENCY_LAST_WRITE | 2 | Last write wins |
StateOptions.StateConsistency

Enum describing the supported consistency for state.

| Name | Number | Description |
| ---- | ------ | ----------- |
| CONSISTENCY_UNSPECIFIED | 0 | Consistency state is unspecified |
| CONSISTENCY_EVENTUAL | 1 | The API server assumes data stores are eventually consistent by default. For read requests, the state store can return data from any of the replicas. For write requests, the state store should asynchronously replicate updates to the configured quorum after acknowledging the update request. |
| CONSISTENCY_STRONG | 2 | When a strong consistency hint is attached, the state store should return the most up-to-date data consistently across replicas for read requests, and synchronously replicate updated data to the configured quorum before completing write/delete requests. |
The HEAD action retrieves metadata from an object without returning the object itself.
+Refer https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html
Object Tagging API
+Sets the supplied tag-set to an object that already exists in a bucket.
+Refer https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html
Object Multipart Operation API
+Initiates a multipart upload and returns an upload ID.
+Refer https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/API/API_CreateMultipartUpload.html
A presigned URL gives you access to the object identified in the URL, provided that the creator of the presigned URL has permissions to access that object.
+Refer https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html
Specifies that CSV field values may contain quoted record delimiters and such
+records should be allowed. Default value is FALSE. Setting this value to TRUE
+may lower performance.
A single character used to indicate that a row should be ignored when the
+character is present at the start of that row. You can specify any character to
+indicate a comment line.
Describes the first line of input. Valid values are:
+
+* NONE: First line is not a header.
+
+* IGNORE: First line is a header, but you can't use the header values
+to indicate the column in an expression. You can use column position (such as
+_1, _2, …) to indicate the column (SELECT s._1 FROM OBJECT s).
+
+* USE: First line is a header, and you can use the header value to identify
+a column in an expression (SELECT "name" FROM OBJECT).
A single character used for escaping when the field delimiter is part of the
+value. For example, if the value is a, b, Amazon S3 wraps this field value in
+quotation marks, as follows: " a , b ". Type: String. Default: ". Ancestors: CSV.
A single character used for escaping the quotation mark character inside an
+already escaped value. For example, the value """ a , b """ is parsed as " a , b
+".
A single character used for escaping when the field delimiter is part of the
+value. For example, if the value is a, b, Amazon S3 wraps this field value in
+quotation marks, as follows: " a , b ".
Indicates whether to use quotation marks around output fields.
+
+* ALWAYS: Always
+use quotation marks for output fields.
+
+* ASNEEDED: Use quotation marks for
+output fields when needed.
If the object expiration is configured, this will contain the expiration date
+(expiry-date) and rule ID (rule-id). The value of rule-id is URL-encoded.
If present, specifies the ID of the Amazon Web Services Key Management Service
+(Amazon Web Services KMS) symmetric customer managed key that was used for the
+object.
The tag-set for the destination object. This value must be used in
+conjunction with the TaggingDirective. The tag-set must be encoded as URL Query
+parameters.
Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption
+with server-side encryption using AWS KMS (SSE-KMS). Setting this header to true
+causes Amazon S3 to use an S3 Bucket Key for object encryption with SSE-KMS.
+Specifying this header with a PUT action doesn’t affect bucket-level settings
+for S3 Bucket Key.
Specifies what content encodings have been applied to the object and thus what
+decoding mechanisms must be applied to obtain the media-type referenced by the
+Content-Type header field.
The account ID of the expected bucket owner. If the bucket is owned by a
+different account, the request fails with the HTTP status code 403 Forbidden
+(access denied).
If the bucket has a lifecycle rule configured with an action to abort incomplete
+multipart uploads and the prefix in the lifecycle rule matches the object name
+in the request, the response includes this header
If server-side encryption with a customer-provided encryption key was requested,
+the response will include this header confirming the encryption algorithm used.
If server-side encryption with a customer-provided encryption key was requested,
+the response will include this header to provide round-trip message integrity
+verification of the customer-provided encryption key.
If present, specifies the Amazon Web Services KMS Encryption Context to use for
+object encryption. The value of this header is a base64-encoded UTF-8 string
+holding JSON with the encryption context key-value pairs.
If present, specifies the ID of the Amazon Web Services Key Management Service
+(Amazon Web Services KMS) symmetric customer managed key that was used for the
+object.
The account ID of the expected bucket owner. If the bucket is owned by a
+different account, the request fails with the HTTP status code 403 Forbidden
+(access denied).
Specifies whether the versioned object that was permanently deleted was (true)
+or was not (false) a delete marker. In a simple DELETE, this header indicates
+whether (true) or not (false) a delete marker was created.
The version ID of the delete marker created as a result of the DELETE operation.
+If you delete a specific object version, the value returned by this header is
+the version ID of the object version deleted.
Part number of the object being read. This is a positive integer between 1 and
+10,000. Effectively performs a 'ranged' GET request for the part specified.
+Useful for downloading just a part of an object.
Specifies the customer-provided encryption key for Amazon S3 used to encrypt the
+data. This value is used to decrypt the object when recovering it and must match
+the one used when storing the data. The key must be appropriate for use with the
+algorithm specified in the x-amz-server-side-encryption-customer-algorithm header
Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321.
+Amazon S3 uses this header for a message integrity check to ensure that the
+encryption key was transmitted without error.
Specifies what content encodings have been applied to the object and thus what
+decoding mechanisms must be applied to obtain the media-type referenced by the
+Content-Type header field.
If the object expiration is configured (see PUT Bucket lifecycle), the response
+includes this header. It includes the expiry-date and rule-id key-value pairs
+providing object expiration information. The value of the rule-id is
+URL-encoded.
The count of parts this object has. This value is only returned if you specify
+partNumber in your request and the object was uploaded as a multipart upload.
The account ID of the expected bucket owner. If the bucket is owned by a
+different account, the request fails with the HTTP status code 403 Forbidden
+(access denied).
Part number of the object being read. This is a positive integer between 1 and
+10,000. Effectively performs a 'ranged' HEAD request for the part specified.
+Useful for querying the size of the part and the number of parts in this
+object.
Character you use to group keys. All keys that contain the same string between
+the prefix, if specified, and the first occurrence of the delimiter after the
+prefix are grouped under a single result element, CommonPrefixes. If you don't
+specify the prefix parameter, then the substring starts at the beginning of the
+key. The keys that are grouped under CommonPrefixes result element are not
+returned elsewhere in the response.
Requests Amazon S3 to encode the object keys in the response and specifies the
+encoding method to use. An object key may contain any Unicode character;
+however, the XML 1.0 parser cannot parse some characters, such as characters
+with an ASCII value from 0 to 10. For characters that are not supported in
+XML 1.0, you can add this parameter to request that Amazon S3 encode the keys
+in the response.
Together with upload-id-marker, this parameter specifies the multipart upload
+after which listing should begin. If upload-id-marker is not specified, only the
+keys lexicographically greater than the specified key-marker will be included in
+the list. If upload-id-marker is specified, any multipart uploads for a key
+equal to the key-marker might also be included, provided those multipart uploads
+have upload IDs lexicographically greater than the specified upload-id-marker.
Sets the maximum number of multipart uploads, from 1 to 1,000, to return in the
+response body. 1,000 is the maximum number of uploads that can be returned in a
+response.
Lists in-progress uploads only for those keys that begin with the specified
+prefix. You can use prefixes to separate a bucket into different grouping of
+keys. (You can think of using prefix to make groups in the same way you'd use a
+folder in a file system.)
Together with key-marker, specifies the multipart upload after which listing
+should begin. If key-marker is not specified, the upload-id-marker parameter is
+ignored. Otherwise, any multipart uploads for a key equal to the key-marker
+might be included in the list only if they have an upload ID lexicographically
+greater than the specified upload-id-marker.
Indicates whether the returned list of multipart uploads is truncated. A value
+of true indicates that the list was truncated. The list can be truncated if the
+number of multipart uploads exceeds the limit allowed or specified by
+max-uploads.
A delimiter is a character that you specify to group keys. All keys that contain
+the same string between the prefix and the first occurrence of the delimiter are
+grouped under a single result element in CommonPrefixes. These groups are
+counted as one result against the max-keys limitation. These keys are not
+returned elsewhere in the response.
Requests Amazon S3 to encode the object keys in the response and specifies the
+encoding method to use. An object key may contain any Unicode character;
+however, the XML 1.0 parser cannot parse some characters, such as characters
+with an ASCII value from 0 to 10. For characters that are not supported in
+XML 1.0, you can add this parameter to request that Amazon S3 encode the keys
+in the response.
Sets the maximum number of keys returned in the response. By default the action
+returns up to 1,000 key names. The response might contain fewer keys but will
+never contain more. If additional keys satisfy the search criteria, but were not
+returned because max-keys was exceeded, the response contains IsTruncated set to
+true. To return the additional keys, see key-marker and version-id-marker.
Use this parameter to select only those keys that begin with the specified
+prefix. You can use prefixes to separate a bucket into different groupings of
+keys. (You can think of using prefix to make groups in the same way you'd use a
+folder in a file system.) You can use prefix with delimiter to roll up numerous
+objects into a single result under CommonPrefixes.
When the number of responses exceeds the value of MaxKeys, NextVersionIdMarker
+specifies the first object version not returned that satisfies the search
+criteria.
Requests Amazon S3 to encode the object keys in the response and specifies the
+encoding method to use. An object key may contain any Unicode character;
+however, the XML 1.0 parser cannot parse some characters, such as characters with an
+ASCII value from 0 to 10. For characters that are not supported in XML 1.0, you
+can add this parameter to request that Amazon S3 encode the keys in the
+response.
The account ID of the expected bucket owner. If the bucket is owned by a
+different account, the request fails with the HTTP status code 403 Forbidden
+(access denied).
Sets the maximum number of keys returned in the response. By default the action
+returns up to 1,000 key names. The response might contain fewer keys but will
+never contain more.
Causes keys that contain the same string between the prefix and the first
+occurrence of the delimiter to be rolled up into a single result element in the
+CommonPrefixes collection. These rolled-up keys are not returned elsewhere in
+the response. Each rolled-up result counts as only one return against the
+MaxKeys value.
When response is truncated (the IsTruncated element value in the response is
+true), you can use the key name in this field as marker in the subsequent
+request to get next set of objects.
When a list is truncated, this element specifies the last part in the list, as
+well as the value to use for the part-number-marker request parameter in a
+subsequent request.
Indicates whether the returned list of parts is truncated. A true value
+indicates that the list was truncated. A list can be truncated if the number of
+parts exceeds the limit returned in the MaxParts element.
Specifies presentational information for the object. For more information, see
+http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1
+(http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1).
Specifies what content encodings have been applied to the object and thus what
+decoding mechanisms must be applied to obtain the media-type referenced by the
+Content-Type header field. For more information, see
+http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11
+(http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11).
The date and time at which the object is no longer cacheable. For more
+information, see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21
+(http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21).
If server-side encryption with a customer-provided encryption key was requested,
+the response will include this header confirming the encryption algorithm used.
If server-side encryption with a customer-provided encryption key was requested,
+the response will include this header to provide round-trip message integrity
+verification of the customer-provided encryption key.
If present, specifies the ID of the Amazon Web Services Key Management Service
+(Amazon Web Services KMS) symmetric customer managed key that was used for the
+object.
The send status metadata returned from the SMS service.
+Includes `PhoneNumber`.
+`PhoneNumber` is the phone number the SMS was sent to. Supported by tencentcloud.
+
-# Layotto (L8): To be the next layer of OSI layer 7
-
-
-[![Layotto Env Pipeline 🌊](https://github.com/mosn/layotto/actions/workflows/proto-checker.yml/badge.svg)](https://github.com/mosn/layotto/actions/workflows/proto-checker.yml)
+[![Layotto Env Pipeline 🌊](https://github.com/mosn/layotto/actions/workflows/quickstart-checker.yml/badge.svg)](https://github.com/mosn/layotto/actions/workflows/quickstart-checker.yml)
[![Layotto Dev Pipeline 🌊](https://github.com/mosn/layotto/actions/workflows/layotto-ci.yml/badge.svg)](https://github.com/mosn/layotto/actions/workflows/layotto-ci.yml)
[![GoDoc](https://godoc.org/mosn.io/layotto?status.svg)](https://godoc.org/mosn.io/layotto)
@@ -10,6 +10,7 @@
[![codecov](https://codecov.io/gh/mosn/layotto/branch/main/graph/badge.svg?token=10RxwSV6Sz)](https://codecov.io/gh/mosn/layotto)
[![Average time to resolve an issue](http://isitmaintained.com/badge/resolution/mosn/layotto.svg)](http://isitmaintained.com/project/mosn/layotto "Average time to resolve an issue")
+
Layotto(/leɪˈɒtəʊ/) is an application runtime developed using Golang, which provides various distributed capabilities for applications, such as state management, configuration management, and event pub/sub capabilities to simplify application development.
@@ -19,7 +20,7 @@ Layotto is built on the open source data plane [MOSN](https://github.com/mosn/mo
Layotto aims to combine [Multi-Runtime](https://www.infoq.com/articles/multi-runtime-microservice-architecture/) with Service Mesh into one sidecar. No matter which product you are using as the Service Mesh data plane (e.g. MOSN,Envoy or any other product), you can always attach Layotto to it and add Multi-Runtime capabilities without adding new sidecars.
-For example, by adding Runtime capabilities to MOSN, a Layotto process can both [serve as the data plane of istio](/docs/start/istio/) and provide various Runtime APIs (such as Configuration API, Pub/Sub API, etc.)
+For example, by adding Runtime capabilities to MOSN, a Layotto process can both [serve as the data plane of istio](en/start/istio/) and provide various Runtime APIs (such as Configuration API, Pub/Sub API, etc.)
In addition, we were surprised to find that a sidecar can do much more than that. We are trying to make Layotto even the runtime container of FaaS (Function as a service) with the magic power of [WebAssembly](https://en.wikipedia.org/wiki/WebAssembly) .
@@ -27,7 +28,7 @@ In addition, we were surprised to find that a sidecar can do much more than that
- Service Communication
- Service Governance.Such as traffic hijacking and observation, service rate limiting, etc
-- [As the data plane of istio](/docs/tart/istio/)
+- [As the data plane of istio](en/start/istio/)
- Configuration management
- State management
- Event publish and subscribe
@@ -46,7 +47,7 @@ Layotto provides sdk in various languages. The sdk interacts with Layotto throug
### Get started with Layotto
-You can try the quickstart demos below to get started with Layotto. In addition, you can experience the [online laboratory](/docs/start/lab)
+You can try the quickstart demos below to get started with Layotto. In addition, you can experience the [online laboratory](en/start/lab)
### API
@@ -66,7 +67,7 @@ You can try the quickstart demos below to get started with Layotto. In addition,
| feature | status | quick start | desc |
| ------- | :----: | :----------------------------------------------------: | -------------------------- |
-| istio | ✅ | [demo](/docs/start/istio/) | As the data plane of istio |
+| istio | ✅ | [demo](en/start/istio/) | As the data plane of istio |
### Extendability
@@ -121,9 +122,9 @@ Layotto enriches the CNCF CLOUD N
### Contact Us
-| Platform | Link |
-| :----------------------------------------------------- |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| 💬 [DingTalk](https://www.dingtalk.com/en) (preferred) | Search the group number: 31912621 or scan the QR code below |
+| Platform | Link |
+| :----------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 💬 [DingTalk](https://www.dingtalk.com/en) (preferred) | Search the group number: 31912621 or scan the QR code below |
[comment]: <> (| 💬 [Wechat](https://www.wechat.com/en/) | Scan the QR code below and she will invite you into the wechat group )
@@ -134,15 +135,15 @@ Layotto enriches the CNCF CLOUD N
As a programming enthusiast , have you ever felt that you want to participate in the development of an open source project, but don't know where to start?
In order to help everyone better participate in open source projects, our community will regularly publish community tasks to help everyone learn by doing!
-[Document Contribution Guide](/docs/development/contributing-doc)
+[Document Contribution Guide](en/development/contributing-doc.md)
-[Component Development Guide](/docs/development/developing-component)
+[Component Development Guide](en/development/developing-component.md)
-[Layotto Github Workflows](/docs/development/github-workflows)
+[Layotto Github Workflows](en/development/github-workflows.md)
-[Layotto Commands Guide](/docs/development/commands)
+[Layotto Commands Guide](en/development/commands.md)
-[Layotto contributor guide](/docs/development/CONTRIBUTING)
+[Layotto contributor guide](en/development/CONTRIBUTING.md)
## Contributors
@@ -154,17 +155,17 @@ Thank y'all!
## Design Documents
-[Actuator Design Doc](/docs/design/actuator/actuator-design-doc)
+[Actuator Design Doc](en/design/actuator/actuator-design-doc.md)
-[Configuration API with Apollo](/docs/design/configuration/configuration-api-with-apollo)
+[Configuration API with Apollo](en/design/configuration/configuration-api-with-apollo.md)
-[Pubsub API and Compability with Dapr Component](/docs/design/pubsub/pubsub-api-and-compability-with-dapr-component)
+[Pubsub API and Compability with Dapr Component](en/design/pubsub/pubsub-api-and-compability-with-dapr-component.md)
-[RPC Design Doc](/docs/design/rpc/rpc_design_document)
+[RPC Design Doc](en/design/rpc/rpc-design-doc.md)
-[Distributed Lock API Design](/docs/design/lock/lock-api-design)
+[Distributed Lock API Design](en/design/lock/lock-api-design.md)
-[FaaS Design](/docs/design/faas/faas-poc-design)
+[FaaS Design](en/design/faas/faas-poc-design.md)
## FAQ
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/README.md b/docs/en/api_reference/README.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/README.md
rename to docs/en/api_reference/README.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/appcallback_v1.md b/docs/en/api_reference/appcallback_v1.md
similarity index 99%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/appcallback_v1.md
rename to docs/en/api_reference/appcallback_v1.md
index f072783564..54894d7c90 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/appcallback_v1.md
+++ b/docs/en/api_reference/appcallback_v1.md
@@ -1,4 +1,7 @@
+
+
+
# appcallback.proto
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/comment_spec_of_proto.md b/docs/en/api_reference/comment_spec_of_proto.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/comment_spec_of_proto.md
rename to docs/en/api_reference/comment_spec_of_proto.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/how_to_generate_api_doc.md b/docs/en/api_reference/how_to_generate_api_doc.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/how_to_generate_api_doc.md
rename to docs/en/api_reference/how_to_generate_api_doc.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/oss_v1.md b/docs/en/api_reference/oss_v1.md
similarity index 99%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/oss_v1.md
rename to docs/en/api_reference/oss_v1.md
index 5e4ae4206c..98da1ebce9 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/oss_v1.md
+++ b/docs/en/api_reference/oss_v1.md
@@ -1,4 +1,7 @@
+
+
+
# oss.proto
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/runtime_v1.md b/docs/en/api_reference/runtime_v1.md
similarity index 99%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/runtime_v1.md
rename to docs/en/api_reference/runtime_v1.md
index e481cac24b..787e1605e3 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/api_reference/runtime_v1.md
+++ b/docs/en/api_reference/runtime_v1.md
@@ -1,4 +1,7 @@
+
+
+
# runtime.proto
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/actuator/actuator.md b/docs/en/building_blocks/actuator/actuator.md
similarity index 89%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/actuator/actuator.md
rename to docs/en/building_blocks/actuator/actuator.md
index 719e8fab81..3ebb8f2918 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/actuator/actuator.md
+++ b/docs/en/building_blocks/actuator/actuator.md
@@ -49,7 +49,7 @@ var (
)
```
-Note: By default, the API will only return the health status of Layotto. If you want the API to also return the health status of the App, you need to develop a plugin that calls back the App. You can refer to [Actuator's design document](design/actuator/actuator-design-doc.md), or contact us directly to provide you with a detailed explanation.
+Note: By default, the API will only return the health status of Layotto. If you want the API to also return the health status of the App, you need to develop a plugin that calls back the App. You can refer to [Actuator's design document](en/design/actuator/actuator-design-doc.md), or contact us directly to provide you with a detailed explanation.
### /actuator/health/readiness
Used to check the health status of Layotto and app. The health status can be used to determine "Do we need to temporarily cut off the traffic and make sure no user visit this machine"
@@ -75,7 +75,7 @@ GET,no parameters.
}
```
-Note: By default, the API will only return the health status of Layotto. If you want the API to also return the health status of the App, you need to develop a plugin that calls back the App. You can refer to [Actuator's design document](design/actuator/actuator-design-doc.md), or contact us directly to provide you with a detailed explanation.
+Note: By default, the API will only return the health status of Layotto. If you want the API to also return the health status of the App, you need to develop a plugin that calls back the App. You can refer to [Actuator's design document](en/design/actuator/actuator-design-doc.md), or contact us directly to provide you with a detailed explanation.
## 2. Query runtime metadata API
@@ -107,7 +107,7 @@ We can add more information in the future:
Actuator adopts a plug-in architecture, you can also add your own plug-ins as needed, and let the API return the runtime metadata you care about.
-Note: By default, the API will only return Layotto's runtime metadata. If you want the API to also return the App's runtime metadata, you need to develop a plugin that calls back the App. You can refer to [Actuator's design document](design/actuator/actuator-design-doc.md), or contact us directly to provide you with a detailed explanation.
+Note: By default, the API will only return Layotto's runtime metadata. If you want the API to also return the App's runtime metadata, you need to develop a plugin that calls back the App. You can refer to [Actuator's design document](en/design/actuator/actuator-design-doc.md), or contact us directly to provide you with a detailed explanation.
## 3. Explanation for API path
@@ -139,4 +139,4 @@ The paths registered by default are:
```
## 4. API usage example
-See [Quick start document](start/actuator/start.md)
\ No newline at end of file
+See [Quick start document](en/start/actuator/start.md)
\ No newline at end of file
diff --git a/docs/docs/building_blocks/configuration/reference.md b/docs/en/building_blocks/configuration/reference.md
similarity index 100%
rename from docs/docs/building_blocks/configuration/reference.md
rename to docs/en/building_blocks/configuration/reference.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/file/file.md b/docs/en/building_blocks/file/file.md
similarity index 98%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/file/file.md
rename to docs/en/building_blocks/file/file.md
index ec14296c73..6bc4f676dd 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/file/file.md
+++ b/docs/en/building_blocks/file/file.md
@@ -7,7 +7,7 @@ File api is used to implement file operations. Applications can perform CRUD ope
## How to use File API
You can call the File API through grpc. The API is defined in [runtime.proto](https://github.com/mosn/layotto/blob/main/spec/proto/runtime/v1/runtime.proto).
-The component needs to be configured before use. Different components should have own configuration.For OSS detail configuration items, see [OSS Component Document](component_specs/file/oss.md)
+The component needs to be configured before use. Different components should have own configuration.For OSS detail configuration items, see [OSS Component Document](en/component_specs/file/oss.md)
### Example
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/lock/reference.md b/docs/en/building_blocks/lock/reference.md
similarity index 96%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/lock/reference.md
rename to docs/en/building_blocks/lock/reference.md
index eb33755df8..39b14a6301 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/lock/reference.md
+++ b/docs/en/building_blocks/lock/reference.md
@@ -5,10 +5,10 @@ The distributed lock API is based on a certain storage system (such as Etcd, Zoo
## How to use distributed lock API
You can call the distributed lock API through grpc. The API is defined in [runtime.proto](https://github.com/mosn/layotto/blob/main/spec/proto/runtime/v1/runtime.proto).
-The component needs to be configured before use. For detailed configuration instructions, see [Distributed Lock Component Document](component_specs/lock/common.md)
+The component needs to be configured before use. For detailed configuration instructions, see [Distributed Lock Component Document](en/component_specs/lock/common.md)
### Example
-Layotto client sdk encapsulates the logic of grpc calling. For an example of using sdk to call distributed lock API, please refer to [Quick Start: Using Distributed Lock API](start/lock/start.md)
+Layotto client sdk encapsulates the logic of grpc calling. For an example of using sdk to call distributed lock API, please refer to [Quick Start: Using Distributed Lock API](en/start/lock/start.md)
### TryLock
@@ -76,4 +76,4 @@ req.LockOwner = uuid.New().String()
To avoid inconsistencies between the documentation and the code, please refer to [proto file](https://github.com/mosn/layotto/blob/main/spec/proto/runtime/v1/runtime.proto) for detailed input parameters and return values
## Why is the distributed lock API designed like this
-If you are interested in the implementation principle and design logic, you can refer to [Distributed Lock API Design Document](design/lock/lock-api-design)
+If you are interested in the implementation principle and design logic, you can refer to [Distributed Lock API Design Document](en/design/lock/lock-api-design)
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/pubsub/reference.md b/docs/en/building_blocks/pubsub/reference.md
similarity index 97%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/pubsub/reference.md
rename to docs/en/building_blocks/pubsub/reference.md
index 3845bc2802..f37b18ccd5 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/pubsub/reference.md
+++ b/docs/en/building_blocks/pubsub/reference.md
@@ -19,10 +19,10 @@ Using Pub/Sub API can help you avoid the trouble of maintaining multilingual SDK
## How to use Pub/Sub API
You can call Pub/Sub API through grpc. The API is defined in [runtime.proto](https://github.com/mosn/layotto/blob/main/spec/proto/runtime/v1/runtime.proto).
-The component needs to be configured before use. For detailed configuration instructions, see [publish/subscribe component documentation](component_specs/pubsub/common.md)
+The component needs to be configured before use. For detailed configuration instructions, see [publish/subscribe component documentation](en/component_specs/pubsub/common.md)
### Example
-Layotto client sdk encapsulates the logic of grpc call. For examples of using sdk to call Pub/Sub API, please refer to [Quick Start: Use Pub/Sub API](start/pubsub/start.md)
+Layotto client sdk encapsulates the logic of grpc call. For examples of using sdk to call Pub/Sub API, please refer to [Quick Start: Use Pub/Sub API](en/start/pubsub/start.md)
### PublishEvent
Used to publish events to the specified topic
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/rpc/reference.md b/docs/en/building_blocks/rpc/reference.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/rpc/reference.md
rename to docs/en/building_blocks/rpc/reference.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/sequencer/reference.md b/docs/en/building_blocks/sequencer/reference.md
similarity index 96%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/sequencer/reference.md
rename to docs/en/building_blocks/sequencer/reference.md
index ed61f35857..1a1d342022 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/sequencer/reference.md
+++ b/docs/en/building_blocks/sequencer/reference.md
@@ -38,9 +38,9 @@ When you want the generated id incremental without any regression, it is recomme
## How to use Sequencer API
You can call the Sequencer API through grpc. The API is defined in [runtime.proto](https://github.com/mosn/layotto/blob/main/spec/proto/runtime/v1/runtime.proto).
-Layotto client sdk encapsulates the logic of grpc calling. For an example of using sdk to call Sequencer API, please refer to [Quick Start: Use Sequencer API](start/sequencer/start.md)
+Layotto client sdk encapsulates the logic of grpc calling. For an example of using sdk to call Sequencer API, please refer to [Quick Start: Use Sequencer API](en/start/sequencer/start.md)
-The components need to be configured before use. For detailed configuration options, see [Sequencer component document](component_specs/sequencer/common.md)
+The components need to be configured before use. For detailed configuration options, see [Sequencer component document](en/component_specs/sequencer/common.md)
### Get next unique id
```protobuf
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/state/reference.md b/docs/en/building_blocks/state/reference.md
similarity index 98%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/state/reference.md
rename to docs/en/building_blocks/state/reference.md
index ce12fc8275..e92c9ac80c 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/building_blocks/state/reference.md
+++ b/docs/en/building_blocks/state/reference.md
@@ -20,10 +20,10 @@ Using State API can help you avoid the trouble of maintaining multilingual SDKs.
## How to use State API
You can call the State API through grpc. The API is defined in [runtime.proto](https://github.com/mosn/layotto/blob/main/spec/proto/runtime/v1/runtime.proto).
-The component needs to be configured before use. For detailed configuration items, see [State Component Document](component_specs/state/common.md)
+The component needs to be configured before use. For detailed configuration items, see [State Component Document](en/component_specs/state/common.md)
### Example
-Layotto client sdk encapsulates the logic of grpc call. For examples of using sdk to call State API, please refer to [Quick Start: Use State API](start/state/start.md)
+Layotto client sdk encapsulates the logic of grpc call. For examples of using sdk to call State API, please refer to [Quick Start: Use State API](en/start/state/start.md)
### Save state
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/configuration/apollo.md b/docs/en/component_specs/configuration/apollo.md
similarity index 96%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/configuration/apollo.md
rename to docs/en/component_specs/configuration/apollo.md
index bbe0556dc5..2e8b223476 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/configuration/apollo.md
+++ b/docs/en/component_specs/configuration/apollo.md
@@ -2,7 +2,7 @@
## Configuration item description
Example: configs/config_apollo.json
-![img.png](/img/configuration/apollo/img.png)
+![img.png](../../../img/configuration/apollo/img.png)
| Field | Required | Description |
| --- | --- | --- |
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/configuration/etcd.md b/docs/en/component_specs/configuration/etcd.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/configuration/etcd.md
rename to docs/en/component_specs/configuration/etcd.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/configuration/nacos.md b/docs/en/component_specs/configuration/nacos.md
similarity index 90%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/configuration/nacos.md
rename to docs/en/component_specs/configuration/nacos.md
index d280e210e3..9bf87427c4 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/configuration/nacos.md
+++ b/docs/en/component_specs/configuration/nacos.md
@@ -6,7 +6,7 @@ Example: configs/config_nacos.json
| Field | Required | Description |
|-----------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------|
-| address | Y | Nacos server address in `` format. By default, `/nacos` is used as the URL context, and `http` is used as the connection protocol. If the ACM connection field is not specified in the configuration file, this field is **required**. Multiple address addresses can be provided for connection. |
+| address | Y | Nacos server address. By default, `/nacos` is used as the URL context, and `http` is used as the connection protocol. If the ACM connection field is not specified in the configuration file, this field is **required**. Multiple addresses can be provided for connection. |
| timeout | N | Timeout for connecting to the Nacos server, in seconds. Default is 10s. |
| metadata.namespace_id | N | Nacos namespace for isolating configuration files at the namespace level. Default is empty, which represents using the "default" namespace in Nacos. |
| metadata.username | N | Username for Nacos service authentication verification. |
@@ -27,7 +27,7 @@ The `app_id` field needs to be configured, and it is a required field. It repres
However, it is not added in the configuration of the Nacos ConfigStore component but in additional configuration items, making it convenient for other components that need to reuse `app_id`.
-![img.png](/img/configuration/nacos/img.png)
+![img.png](../../../img/configuration/nacos/img.png)
## How to Start Nacos
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/file/oss.md b/docs/en/component_specs/file/oss.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/file/oss.md
rename to docs/en/component_specs/file/oss.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/file/qiniu_oss.md b/docs/en/component_specs/file/qiniu_oss.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/file/qiniu_oss.md
rename to docs/en/component_specs/file/qiniu_oss.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/file/tencentcloud_oss.md b/docs/en/component_specs/file/tencentcloud_oss.md
similarity index 95%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/file/tencentcloud_oss.md
rename to docs/en/component_specs/file/tencentcloud_oss.md
index 11f07b0d88..ed1988b9bc 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/file/tencentcloud_oss.md
+++ b/docs/en/component_specs/file/tencentcloud_oss.md
@@ -19,7 +19,7 @@ Example:configs/config_file_tencentcloud_oss.json
visit https://console.cloud.tencent.com/cos/bucket to create bucket
-![](/img/file/create_tencent_oss_bucket.png)
+![](../../../img/file/create_tencent_oss_bucket.png)
3.Create AK and SK
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/common.md b/docs/en/component_specs/lock/common.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/common.md
rename to docs/en/component_specs/lock/common.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/consul.md b/docs/en/component_specs/lock/consul.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/consul.md
rename to docs/en/component_specs/lock/consul.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/etcd.md b/docs/en/component_specs/lock/etcd.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/etcd.md
rename to docs/en/component_specs/lock/etcd.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/mongo.md b/docs/en/component_specs/lock/mongo.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/mongo.md
rename to docs/en/component_specs/lock/mongo.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/redis.md b/docs/en/component_specs/lock/redis.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/redis.md
rename to docs/en/component_specs/lock/redis.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/zookeeper.md b/docs/en/component_specs/lock/zookeeper.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/lock/zookeeper.md
rename to docs/en/component_specs/lock/zookeeper.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/overview.md b/docs/en/component_specs/overview.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/overview.md
rename to docs/en/component_specs/overview.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/pubsub/common.md b/docs/en/component_specs/pubsub/common.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/pubsub/common.md
rename to docs/en/component_specs/pubsub/common.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/pubsub/others.md b/docs/en/component_specs/pubsub/others.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/pubsub/others.md
rename to docs/en/component_specs/pubsub/others.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/pubsub/redis.md b/docs/en/component_specs/pubsub/redis.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/pubsub/redis.md
rename to docs/en/component_specs/pubsub/redis.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/secret/common.md b/docs/en/component_specs/secret/common.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/secret/common.md
rename to docs/en/component_specs/secret/common.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/common.md b/docs/en/component_specs/sequencer/common.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/common.md
rename to docs/en/component_specs/sequencer/common.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/etcd.md b/docs/en/component_specs/sequencer/etcd.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/etcd.md
rename to docs/en/component_specs/sequencer/etcd.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/in-memory.md b/docs/en/component_specs/sequencer/in-memory.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/in-memory.md
rename to docs/en/component_specs/sequencer/in-memory.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/mongo.md b/docs/en/component_specs/sequencer/mongo.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/mongo.md
rename to docs/en/component_specs/sequencer/mongo.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/mysql.md b/docs/en/component_specs/sequencer/mysql.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/mysql.md
rename to docs/en/component_specs/sequencer/mysql.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/redis.md b/docs/en/component_specs/sequencer/redis.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/redis.md
rename to docs/en/component_specs/sequencer/redis.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/snowflake.md b/docs/en/component_specs/sequencer/snowflake.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/snowflake.md
rename to docs/en/component_specs/sequencer/snowflake.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/zookeeper.md b/docs/en/component_specs/sequencer/zookeeper.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/sequencer/zookeeper.md
rename to docs/en/component_specs/sequencer/zookeeper.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/state/common.md b/docs/en/component_specs/state/common.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/state/common.md
rename to docs/en/component_specs/state/common.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/state/others.md b/docs/en/component_specs/state/others.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/state/others.md
rename to docs/en/component_specs/state/others.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/state/redis.md b/docs/en/component_specs/state/redis.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/component_specs/state/redis.md
rename to docs/en/component_specs/state/redis.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/configuration/overview.md b/docs/en/configuration/overview.md
similarity index 93%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/configuration/overview.md
rename to docs/en/configuration/overview.md
index 6d2a1ae8ef..a6211878ed 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/configuration/overview.md
+++ b/docs/en/configuration/overview.md
@@ -3,7 +3,7 @@ Example: configs/config_apollo.json
Currently, Layotto uses a MOSN layer 4 filter to integrate with MOSN and run on MOSN, so the configuration file used by Layotto is actually a MOSN configuration file.
-![img.png](/img/configuration/layotto/img.png)
+![img.png](../../img/configuration/layotto/img.png)
As shown in the example above, most of the configurations are MOSN configuration items, please refer to [MOSN configuration instructions](https://mosn.io/docs/configuration/);
@@ -36,4 +36,4 @@ The configuration item in `grpc_config` is Layotto's component configuration, th
}
```
-As for what to fill in each ``, what is each ``, and which `"": ""` configuration items can be configured with the components, you can refer to [Component specs](component_specs/overview) .
+As for what to fill in each ``, what is each ``, and which `"": ""` configuration items can be configured with the components, you can refer to [Component specs](en/component_specs/overview) .
diff --git a/docs/en/design/actuator/actuator-design-doc.md b/docs/en/design/actuator/actuator-design-doc.md
new file mode 100644
index 0000000000..819a2e3f80
--- /dev/null
+++ b/docs/en/design/actuator/actuator-design-doc.md
@@ -0,0 +1,298 @@
+# Actuator's Design Document
+# 1. Product Design
+## 1.1. Requirements
+
+- Health check
+
+Users can check the health status of both Layotto internal components and the applications behind Layotto through the Actuator API.
+
+- Query runtime metadata
+
+Through the Actuator API, users can obtain Layotto's own metadata (such as version and git information) as well as metadata of business applications (such as the list of subscribed topics, or the application version).
+
+- Support integration into open source infrastructure, including:
+ - Can be integrated into k8s health check
+ - Can be integrated into a monitoring system, such as Prometheus+Grafana
+ - If necessary, the service registry can remove nodes based on the health check results
+ - We can build dashboard projects or GUI tools based on this API in the future to troubleshoot problems.
+
+- Similar to the functions of Spring Boot Actuator, there will be more room for imagination in the future: Monitoring, Metrics, Auditing, and more.
+
+## 1.2. Explanation
+
+**Q: What is the value? People use the health check API for what?**
+
+1. For troubleshooting. Developers can access the Actuator API directly to query runtime metadata or reasons for unavailability, or use a dashboard page/GUI tool built on the API.
+
+2. For monitoring by the monitoring system;
+
+3. For automated operations provided by the infrastructure. For example, the deployment system can use the health check results to determine deployment progress and whether it should continue; the service registry can remove abnormal nodes based on health checks; Kubernetes can kill and recreate containers based on health checks.
+
+
+**Q: It seems that there is no need to return detailed information, since a status code is enough? Who will use the detailed runtime information?**
+
+1. We can build a dashboard page or GUI tool based on the detailed information in the future to troubleshoot problems;
+
+This is similar to how the Spring Boot community built the 'Spring Boot Admin' web UI on top of the Spring Boot Actuator.
+
+2. With this information, we can integrate Layotto into monitoring systems like Prometheus+Grafana.
+
+Similar to Spring Boot Actuator being integrated with Prometheus+Grafana
+
+**Q: Should we add admin APIs to control these capabilities, such as "turn off a specific component inside Layotto"?**
+
+A: Don't do it. Switching some components off will leave the app in a partial failure state, which can lead to uncertainty.
+However, we can consider adding debug capabilities in the future, such as mocking, packet capture and packet modification.
+
+
+**Q: Does the health check API do permission control?**
+
+A: Don't do it for now; add a hook if we receive feedback in the future.
+
+
+# 2. High-level design
+
+## 2.1. Overview
+
+Develop the HTTP API first, because the health check functionality of open source infrastructure generally supports HTTP (e.g. Kubernetes, Prometheus) but not gRPC.
+
+In order to reuse filter capabilities such as the authentication filter of MOSN, Actuator runs on MOSN using a layer-7 filter as a glue layer.
+
+Specifically, add a new listener and a new stream_filter to MOSN. This filter is responsible for http request processing and calling Actuator.
+
+Endpoint is an abstraction inside Actuator. After a new request arrives at the server, Actuator delegates it to the corresponding Endpoint. Endpoints support on-demand extension and injection into Actuator:
+
+![img.png](../../../img/actuator/abstract.png)
+
+## 2.2. Http API design
+
+### 2.2.1. Explanation for API path
+
+The path adopts restful style. After different Endpoints are registered in Actuator, the path is:
+
+```
+/actuator/{endpoint_name}/{params}
+```
+
+For example:
+
+```
+/actuator/health/liveness
+```
+
+The 'health' element in the path above indicates that the Endpoint name is 'health', and 'liveness' is the parameter passed to the health Endpoint.
+
+Multiple parameters can be passed, such as /actuator/xxxendpoint/a/b/c/d, and the semantics of the parameters are determined by each Endpoint.
+
+
+The paths registered by default are:
+
+```
+/actuator/health/liveness
+
+/actuator/health/readiness
+
+/actuator/info
+```
+
+### 2.2.2. Health Endpoint
+#### /actuator/health/liveness
+GET
+
+```json
+// http://localhost:8080/actuator/health/liveness
+// HTTP/1.1 200 OK
+
+{
+ "status": "UP",
+ "components": {
+ "livenessProbe": {
+ "status": "UP",
+ "details":{
+
+ }
+ }
+ }
+}
+```
+
+Return field description:
+
+HTTP status code 200 means success; any other code (400 or above) means failure.
+
+There are three types of status fields:
+
+```go
+var (
+ // INIT means it is starting
+ INIT = Status("INIT")
+ // UP means it is healthy
+ UP = Status("UP")
+ // DOWN means it is unhealthy
+ DOWN = Status("DOWN")
+)
+```
+
+#### /actuator/health/readiness
+GET
+
+```json
+// http://localhost:8080/actuator/health/readiness
+// HTTP/1.1 503 SERVICE UNAVAILABLE
+
+{
+ "status": "DOWN",
+ "components": {
+ "readinessProbe": {
+ "status": "DOWN"
+ }
+ }
+}
+```
+
+### 2.2.3. Info Endpoint
+
+#### /actuator/info
+
+GET
+
+```json
+// http://localhost:8080/actuator/info
+// HTTP/1.1 200 OK
+
+{
+ "app" : {
+ "version" : "1.0.0",
+ "name" : "Layotto"
+ }
+}
+```
+
+**Q: What are the runtime metadata requirements?**
+
+Phase 1:
+
+- version number
+
+We can add more information in the future:
+
+- Callback app
+- Runtime configuration parameters
+
+
+**Q: Is it mandatory for components to implement the health check API?**
+
+Temporarily not mandatory
+
+## 2.3. Schema of config data
+
+![img.png](../../../img/actuator/actuator_config.png)
+
+A new listener is added for handling actuator requests. A new stream_filter called actuator_filter is added for http request processing (see below).
+
+## 2.4. Internal structure and request processing flow
+
+![img.png](../../../img/actuator/actuator_process.png)
+
+Explanation:
+
+### 2.4.1. When requests arrive
+
+The request arrives at MOSN, enters Layotto through the stream filter, and calls the Actuator.
+
+The http protocol implementation class (struct) of the stream filter layer is DispatchFilter, which is responsible for dispatching requests and calling Actuator according to the http path:
+
+```go
+
+type DispatchFilter struct {
+ handler api.StreamReceiverFilterHandler
+}
+
+func (dis *DispatchFilter) SetReceiveFilterHandler(handler api.StreamReceiverFilterHandler) {
+ dis.handler = handler
+}
+
+func (dis *DispatchFilter) OnDestroy() {}
+
+func (dis *DispatchFilter) OnReceive(ctx context.Context, headers api.HeaderMap, buf buffer.IoBuffer, trailers api.HeaderMap) api.StreamFilterStatus {
+	// parse the endpoint name from the http path and delegate the request to Actuator (body elided)
+}
+```
+
+The protocol layer is decoupled from Actuator. If the API of other protocols is needed in the future, the stream filter of this protocol can be implemented.
+
+### 2.4.2. Requests will be assigned to Endpoints inside Actuator
+
+Drawing on the design of Spring Boot, Actuator abstracts the concept of Endpoint, which supports on-demand extension and injection of Endpoints.
+
+Built-in Endpoints are health Endpoint and info Endpoint.
+
+```go
+type Actuator struct {
+ endpointRegistry map[string]Endpoint
+}
+
+func (act *Actuator) GetEndpoint(name string) (endpoint Endpoint, ok bool) {
+ e, ok := act.endpointRegistry[name]
+ return e, ok
+}
+
+func (act *Actuator) AddEndpoint(name string, ep Endpoint) {
+ act.endpointRegistry[name] = ep
+}
+
+```
+
+When a new request arrives at Actuator, it will be assigned to the corresponding Endpoint according to the path.
+
+For example, /actuator/health/readiness will be assigned to health.Endpoint
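+
+A minimal sketch of this dispatch logic (the method name `Dispatch` and the `Endpoint.Handle` signature are illustrative, not the actual Layotto code):
+
+```go
+// Dispatch parses "/actuator/{endpoint_name}/{params}" and delegates
+// the request to the matching Endpoint registered in Actuator.
+func (act *Actuator) Dispatch(path string) (interface{}, error) {
+	segments := strings.Split(strings.TrimPrefix(path, "/actuator/"), "/")
+	name := segments[0]    // e.g. "health"
+	params := segments[1:] // e.g. ["readiness"]
+	endpoint, ok := act.GetEndpoint(name)
+	if !ok {
+		return nil, fmt.Errorf("no such endpoint: %s", name)
+	}
+	return endpoint.Handle(params)
+}
+```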
+
+### 2.4.3. health.Endpoint collects information from all implementations of health.Indicator
+
+The components that need to report health check information should implement the Indicator interface and inject it into health.Endpoint:
+
+```go
+type Indicator interface {
+ Report() Health
+}
+```
+
+When a new request arrives, health.Endpoint will collect information from all implementations of health.Indicator.
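+
+For example, a component could report its health like this (a sketch; the `Health` struct with a `Status` field is assumed from the status values shown earlier):
+
+```go
+// startupIndicator reports whether the component has started successfully.
+type startupIndicator struct {
+	started bool
+}
+
+func (s *startupIndicator) Report() Health {
+	if s.started {
+		return Health{Status: UP}
+	}
+	return Health{Status: DOWN}
+}
+
+// health.Endpoint aggregates all indicators: the overall status is UP
+// only if every registered Indicator reports UP.
+func aggregate(indicators map[string]Indicator) Status {
+	overall := UP
+	for _, ind := range indicators {
+		if ind.Report().Status != UP {
+			overall = DOWN
+		}
+	}
+	return overall
+}
+```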
+
+### 2.4.4. info.Endpoint collects information from all implementations of info.Contributor
+
+Components that need to report runtime information should implement the Contributor interface and inject it into info.Endpoint:
+
+```go
+type Contributor interface {
+ GetInfo() (info interface{}, err error)
+}
+```
+
+When the path '/actuator/info' is visited, Actuator dispatches the request to info.Endpoint.
+
+info.Endpoint will collect information from all implementations of info.Contributor.
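+
+For example, a Contributor reporting the app version might be sketched as follows (names are illustrative):
+
+```go
+// versionContributor exposes the runtime version via /actuator/info.
+type versionContributor struct {
+	version string
+}
+
+func (v *versionContributor) GetInfo() (info interface{}, err error) {
+	return map[string]string{"version": v.version}, nil
+}
+```
+
+Injecting such a Contributor into info.Endpoint makes its data appear in the /actuator/info response.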
+
+# 3. Detailed design
+
+## 3.1. When will the health state change
+### 3.1.1. runtime_startup Indicator
+
+- SetStarted
+
+![img.png](../../../img/actuator/set_started.png)
+
+- SetUnhealthy
+
+When the startup fails:
+
+![img.png](../../../img/actuator/img.png)
+
+When server stops:
+
+![img_1.png](../../../img/actuator/img_1.png)
+
+### 3.1.2. apollo's Indicator
+
+When initialization of the component fails:
+
+![img_2.png](../../../img/actuator/img_2.png)
\ No newline at end of file
diff --git a/docs/en/design/configuration/configuration-api-with-apollo.md b/docs/en/design/configuration/configuration-api-with-apollo.md
new file mode 100644
index 0000000000..046880c1da
--- /dev/null
+++ b/docs/en/design/configuration/configuration-api-with-apollo.md
@@ -0,0 +1,121 @@
+# Implement Configuration API with ctripcorp/apollo
+## Goals
+Implement [Configuration API(level-2)](https://github.com/dapr/dapr/issues/2988) with ctripcorp/apollo.
+
+## Schema
+
+### From Configuration API to apollo schema
+
+The mapping rule is:
+
+| Params in Configuration API | apollo schema |
+| --------------------------- | -------------------------------------------------------------------------- |
+| app_id | // ignore it when querying or subscribing; use the app_id in the config file instead |
+| group | namespace |
+| label | //append to key. So the actual key stored in apollo will be 'key@$label' |
+| tag | // put into another configuration item with json format |
+
+The actual key stored in apollo will be `key@$label` and the value will be the raw value.
+
+Tags will be stored in a special namespace `sidecar_config_tags`,
+
+with key=`group@$key@$label` and value=
+
+```json
+{
+ "tag1": "tag1value",
+ "tag2": "tag2value"
+}
+```
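+
+The key mapping above can be sketched as two helpers (illustrative only; this assumes `@$` is a literal separator, as written in the table):
+
+```go
+// apolloKey builds the key actually stored in apollo: "key@$label".
+// A blank label (legacy systems) keeps the raw key unchanged.
+func apolloKey(key, label string) string {
+	if label == "" {
+		return key
+	}
+	return key + "@$" + label
+}
+
+// tagsKey builds the key used in the special namespace
+// "sidecar_config_tags": "group@$key@$label".
+func tagsKey(group, key, label string) string {
+	return group + "@$" + apolloKey(key, label)
+}
+```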
+
+
+**Q: Why not store value and tags in a single configuration item to reduce the number of queries, like:**
+
+```json
+{
+ "value":"raw value",
+ "tags":{
+ "tag1":"tag1value",
+ "tag2":"tag2value"
+ }
+}
+```
+
+A: Legacy systems using apollo couldn't migrate to our sidecar if we designed it this way.
+
+### How to migrate legacy systems
+
+1. Get and subscribe APIs are compatible. Users can easily move legacy systems onto our sidecar if they don't use the save/delete APIs. Just keep the `label` field blank in config.json, and the sidecar will use the raw key instead of `key@$label` to interact with apollo.
+
+2. Save/delete APIs might be incompatible. The sidecar uses a fixed `cluster` field configured in config.json and a fixed `env` field in code, which means users can't pass `cluster` and `env` as parameters to the save/delete APIs when they want to change configuration items under another app_id.
+
+### config.json for sidecar
+
+```json
+{
+ "config_store": {
+ "type": "apollo",
+ "address": [
+ "http://106.54.227.205:8080"
+ ],
+ "metadata": {
+ "app_id": "testApplication_yang",
+ "cluster": "dev",
+ "namespace_name": "dubbo,product.joe",
+ "is_backup_config": true,
+ "secret": "6ce3ff7e96a24335a9634fe9abca6d51"
+ }
+ }
+}
+```
+
+
+## API
+
+### Which Go SDK for apollo should I Use?
+
+We are using the officially maintained [sdk](https://github.com/apolloconfig/agollo); other sdks are listed in this [repo](https://www.apolloconfig.com/#/zh/usage/third-party-sdks-user-guide).
+
+Some problems with the sdk:
+1. Users must declare all namespaces in AppConfig before connecting to the server and constructing a client, like:
+
+```go
+ c := &config.AppConfig{
+ AppID: "testApplication_yang",
+ Cluster: "dev",
+ IP: "http://106.54.227.205:8080",
+ NamespaceName: "dubbo", // declare before connecting to the server
+ IsBackupConfig: true,
+ Secret: "6ce3ff7e96a24335a9634fe9abca6d51",
+ }
+ client,err:=agollo.StartWithConfig(func() (*config.AppConfig, error) {
+ return c, nil
+ })
+```
+
+2. There is nowhere to configure the `env` field.
+
+3. No save/delete API.
+
+4. No bulk query API.
+
+5. Not sure about concurrency safety.
+
+6. There is nowhere to configure or use the [Apollo Meta Server](https://www.apolloconfig.com/#/zh/usage/java-sdk-user-guide?id=_122-apollo-meta-server)
+
+7. Not sure about the consistency property between local cache and backend database.
+
+8. The two operations (set kv + set tags) are not transactional, which may cause inconsistency.
+
+### Which apollo sdk API should I use?
+
+| Configuration API | apollo sdk API |
+| ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| GetConfiguration | cache := client.GetConfigCache(c.NamespaceName) value,_ := client.Get("key") |
+| SaveConfiguration | use open API via http. [update](https://www.apolloconfig.com/#/zh/usage/apollo-open-api-platform?id=_3211-%e4%bf%ae%e6%94%b9%e9%85%8d%e7%bd%ae%e6%8e%a5%e5%8f%a3) + [commit](https://www.apolloconfig.com/#/zh/usage/apollo-open-api-platform?id=_3213-%e5%8f%91%e5%b8%83%e9%85%8d%e7%bd%ae%e6%8e%a5%e5%8f%a3) |
+| DeleteConfiguration | use open API via http. [delete](https://www.apolloconfig.com/#/zh/usage/apollo-open-api-platform?id=_3212-%e5%88%a0%e9%99%a4%e9%85%8d%e7%bd%ae%e6%8e%a5%e5%8f%a3) + [commit](https://www.apolloconfig.com/#/zh/usage/apollo-open-api-platform?id=_3213-%e5%8f%91%e5%b8%83%e9%85%8d%e7%bd%ae%e6%8e%a5%e5%8f%a3) |
+| SubscribeConfiguration | https://github.com/apolloconfig/agollo/wiki/%E7%9B%91%E5%90%AC%E5%8F%98%E6%9B%B4%E4%BA%8B%E4%BB%B6 |
+
+### How to subscribe to all config changes of the specified app
+
+Subscribe to all namespaces declared in config.json.
\ No newline at end of file
diff --git a/docs/en/design/faas/faas-poc-design.md b/docs/en/design/faas/faas-poc-design.md
new file mode 100644
index 0000000000..b5bcd395b7
--- /dev/null
+++ b/docs/en/design/faas/faas-poc-design.md
@@ -0,0 +1,45 @@
+## FaaS design document
+
+### 1. Architecture
+
+![img.png](../../../img/faas/faas-design.jpg)
+
+This FaaS solution mainly solves the following two problems:
+1. What is the relationship between the compiled wasm file and the docker image?
+ 1. The target wasm file is built into an ordinary image and pushed to Dockerhub. Pulling the image works the same as for ordinary images, but at runtime the target wasm file is extracted from the image and loaded separately.
+2. How to make k8s manage and deploy wasm files?
+ 1. To integrate into k8s life cycle management and scheduling, the Containerd-shim-layotto-v2 plugin implements the v2 interface definition of Containerd and switches the container runtime to Layotto Runtime. For example, the implementation of k8s creating a container is modified to load and run functions in the form of wasm.
+ 2. Thanks to the excellent sandbox isolation environment of WebAssembly, Layotto as a function base can load and run multiple wasm functions. Although they all run in the same process, they do not affect each other. Compared with docker, this idea of nanoprocess can make fuller use of resources.
+
+### 2. Core components
+
+#### A、Function
+
+The wasm1 and wasm2 in the above figure respectively represent two functions. After the function is developed, it will be compiled into the form of `*.wasm` and loaded and run. It makes full use of the sandbox isolation environment provided by [WebAssembly(wasm)](https://webassembly.org/) to avoid mutual influence between multiple functions.
+
+#### B、[Layotto](https://github.com/mosn/layotto)
+
+The goal is to provide services, resources, and safety for the function. As the base of function runtime, it provides functions including WebAssembly runtime, access to infrastructure, maximum resource limit for functions, and system call permission verification for functions.
+
+#### C、[Containerd](https://containerd.io/)
+
+Officially supported container runtime, docker is currently the most widely used implementation. In addition, secure containers such as kata and gvisor also use this technology. Layotto also refers to their implementation ideas and integrates the process of loading and running functions into the container runtime.
+
+#### D、[Containerd-shim-layotto-v2](https://github.com/layotto/containerd-wasm)
+
+Based on the V2 interface definition of Containerd, the runtime logic of the container is customized. For example, the implementation of creating a container is modified to let Layotto load and run the wasm function.
+
+#### E、[Kubernetes](https://kubernetes.io/)
+
+The current standard for container scheduling, with excellent life cycle management and scheduling strategies. Layotto chose containerd in order to integrate function scheduling seamlessly with the k8s ecosystem.
+
+### 3. Runtime ABI
+
+#### A. [proxy-wasm-go-sdk](https://github.com/layotto/proxy-wasm-go-sdk)
+
+Based on [proxy-wasm/spec](https://github.com/proxy-wasm/spec), and referring to the definition of the [Runtime API](https://github.com/mosn/layotto/blob/main/spec/proto/runtime/v1/runtime.proto), it adds APIs for functions to access infrastructure.
+
+#### B. [proxy-wasm-go-host](https://github.com/layotto/proxy-wasm-go-host)
+
+It is used to implement the logic of Runtime ABI in Layotto.
+
diff --git a/docs/en/design/lock/lock-api-design.md b/docs/en/design/lock/lock-api-design.md
new file mode 100644
index 0000000000..4b08c2d0fa
--- /dev/null
+++ b/docs/en/design/lock/lock-api-design.md
@@ -0,0 +1,258 @@
+# 0. tl;dr
+This proposal tries to add TryLock and Unlock APIs.
+
+The Lock Renewal API might be controversial and will not be added in the first version.
+
+# 1. Why it is needed
+Application developers need distributed locks to keep their data safe from race conditions, but implementing a distributed lock correctly is challenging, since you have to be careful to prevent deadlocks and other incorrect corner cases.
+
+An easy-to-use distributed lock API provided by runtime sidecar can be helpful.
+
+
+# 2. Evaluation of products on the market
+| **System** | **tryLock(non-blocking lock)** | **Blocking lock(based on watch)** | **Availability** | **Write operations are linearizable** | **sequencer([chubby's feature](https://static.googleusercontent.com/media/research.google.com/zh-TW//archive/chubby-osdi06.pdf))** | **Lock renewal** |
+| --- | --- | --- | --- | --- | --- | --- |
+| Stand-alone redis | yes | x | unavailable on single-node failure | yes | yes(need poc) | yes |
+| redis cluster | yes | x | yes | no. Locks will be unsafe when fail-over happens | yes(need poc) | yes |
+| redis Redlock | yes | | | | | |
+| consul | yes | | | | | |
+| zookeeper | yes | yes | yes. [the election completes within 200 ms](https://pdos.csail.mit.edu/6.824/papers/zookeeper.pdf) | yes | yes use zxid as sequencer | yes |
+| etcd | yes | yes | yes | yes | yes use revision | yes lease.KeepAlive |
+
+There are some differences in feature support.
+
+# 2. High-level design
+## 2.1. API
+### 2.1.0. Design principles
+We are faced with many temptations. In fact, there are many lock-related features that could be supported (blocking locks, reentrant locks, read-write locks, sequencer, etc.)
+
+But after all, our goal is to design a general API specification, so we should be as conservative as possible in the API definition. Start simple, abstract the simplest and most commonly used functions into the API specification, and wait for user feedback before considering adding more abstractions.
+
+### 2.1.1. TryLock/Unlock API
+The most basic locking and unlocking API.
+
+TryLock is non-blocking; it returns immediately if the lock is not obtained.
+
+proto:
+
+```protobuf
+ // Distributed Lock API
+ // A non-blocking method trying to get a lock with ttl.
+ rpc TryLock(TryLockRequest)returns (TryLockResponse) {}
+
+ rpc Unlock(UnlockRequest)returns (UnlockResponse) {}
+
+
+
+message TryLockRequest {
+ // Required. The lock store name,e.g. `redis`.
+ string store_name = 1;
+
+ // Required. resource_id is the lock key. e.g. `order_id_111`
+ // It stands for "which resource I want to protect"
+ string resource_id = 2;
+
+ // Required. lock_owner indicates the identifier of the lock owner.
+ // You can generate a uuid as lock_owner. For example, in golang:
+ //
+ // req.LockOwner = uuid.New().String()
+ //
+ // This field is per request, not per process, so it is different for each request,
+ // which aims to prevent multiple threads in the same process from trying the same lock concurrently.
+ //
+ // The reason why we don't make it automatically generated is:
+ // 1. If it is automatically generated, there must be a 'my_lock_owner_id' field in the response.
+ // This name is so weird that we think it is inappropriate to put it into the api spec.
+ // 2. If we change the field 'my_lock_owner_id' in the response to 'lock_owner', meaning the current owner of this lock,
+ // we find that in some lock services users can't get the current lock owner. Actually users don't need it at all.
+ // 3. When a reentrant lock is needed, the existing lock_owner is required to identify the client and check whether this client can reenter this lock.
+ // So this field in the request shouldn't be removed.
+ string lock_owner = 3;
+
+ // Required. expire is the expiration time of the lock. The time unit is seconds.
+ int32 expire = 4;
+}
+
+
+message TryLockResponse {
+
+ bool success = 1;
+}
+
+message UnlockRequest {
+ string store_name = 1;
+ // resource_id is the lock key.
+ string resource_id = 2;
+
+ string lock_owner = 3;
+}
+
+message UnlockResponse {
+ enum Status {
+ SUCCESS = 0;
+ LOCK_UNEXIST = 1;
+ LOCK_BELONG_TO_OTHERS = 2;
+ INTERNAL_ERROR = 3;
+ }
+
+ Status status = 1;
+}
+
+```
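
To make the semantics above concrete, here is a toy in-memory model of TryLock/Unlock. It only illustrates the intended behavior; a real Layotto component is backed by a store such as redis, zookeeper or etcd:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// memoryLock is a toy in-memory model of the TryLock/Unlock semantics above.
type memoryLock struct {
	mu    sync.Mutex
	owner map[string]string    // resource_id -> lock_owner
	until map[string]time.Time // resource_id -> expiration time
}

func newMemoryLock() *memoryLock {
	return &memoryLock{owner: map[string]string{}, until: map[string]time.Time{}}
}

// TryLock is non-blocking: it returns false immediately if the lock is held and not expired.
func (m *memoryLock) TryLock(resourceID, lockOwner string, expireSeconds int32) bool {
	m.mu.Lock()
	defer m.mu.Unlock()
	if _, held := m.owner[resourceID]; held && time.Now().Before(m.until[resourceID]) {
		return false
	}
	m.owner[resourceID] = lockOwner
	m.until[resourceID] = time.Now().Add(time.Duration(expireSeconds) * time.Second)
	return true
}

// Unlock mirrors the UnlockResponse.Status enum.
func (m *memoryLock) Unlock(resourceID, lockOwner string) string {
	m.mu.Lock()
	defer m.mu.Unlock()
	o, ok := m.owner[resourceID]
	switch {
	case !ok:
		return "LOCK_UNEXIST"
	case o != lockOwner:
		return "LOCK_BELONG_TO_OTHERS"
	default:
		delete(m.owner, resourceID)
		delete(m.until, resourceID)
		return "SUCCESS"
	}
}

func main() {
	l := newMemoryLock()
	fmt.Println(l.TryLock("order_id_111", "owner-a", 10)) // true
	fmt.Println(l.TryLock("order_id_111", "owner-b", 10)) // false: held by owner-a
	fmt.Println(l.Unlock("order_id_111", "owner-b"))      // LOCK_BELONG_TO_OTHERS
	fmt.Println(l.Unlock("order_id_111", "owner-a"))      // SUCCESS
}
```

When TryLock returns false, the caller can retry later or give up; blocking behavior is intentionally out of scope here.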
+
+**Q: What is the time unit of the expire field?**
+
+A: Seconds.
+
+**Q: Can we force the user to set the number of seconds to be large enough (instead of too small)?**
+
+A: There is no way to enforce this at compile time or at startup, so we gave up on it.
+
+**Q: What would happen if different applications pass the same lock_owner?**
+
+Case 1. If two apps with different app-ids pass the same lock_owner, they won't conflict, because lock_owner is grouped by app-id, while app-id is configured in the sidecar's static config (configured in config.json or passed as a parameter at startup).
+
+Case 2. If two apps with the same app-id pass the same lock_owner, they will conflict, and the second app will obtain the same lock already held by the first app. Then the correctness property will be broken.
+
+So users have to ensure the uniqueness of lock_owner.
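
For example, a stdlib-only sketch of generating a fresh lock_owner per request (the proto comment suggests `uuid.New().String()`; random hex serves the same purpose here):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newLockOwner returns a fresh random owner id for each request.
func newLockOwner() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return hex.EncodeToString(b)
}

func main() {
	a, b := newLockOwner(), newLockOwner()
	fmt.Println(a != b) // true: each request gets its own owner
}
```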
+
+**Q: Why not add a metadata field?**
+
+A: Try to be conservative at the beginning; wait until users report a need, or until we find one while implementing the components.
+
+**Q: How to add features such as sequencer and reentrant locks in the future?**
+
+A: Add feature options in the API parameters, and the component must also implement a Support() function.
+
+### 2.1.2. Lock Renewal API
+Renewal API aims to refresh the existing lock and postpone the expiration time.
+
+Lock Renewal API won't be in the first version of this API. Here are some considerations for discussion.
+
+#### Solution A: add an API "LockKeepAlive"
+ ![img_3.png](../../../img/lock/img_3.png)
+
+```protobuf
+rpc LockKeepAlive(stream LockKeepAliveRequest) returns (stream LockKeepAliveResponse){}
+
+message LockKeepAliveRequest {
+ // resource_id is the lock key.
+ string resource_id = 1;
+
+ string lock_owner = 2;
+ // expire is the expiration time. The time unit is seconds.
+ int64 expire = 3;
+}
+
+message LockKeepAliveResponse {
+ enum Status {
+ SUCCESS = 0;
+ LOCK_UNEXIST = 1;
+ LOCK_BELONG_TO_OTHERS = 2;
+ }
+ // resource_id is the lock key.
+ string resource_id = 1;
+
+ Status status = 2;
+}
+```
+
+Users have to start a thread or coroutine to periodically renew the lock.
+
+The input parameters and return results of this API are all streams. App and sidecar only need to maintain one connection. Each time the lock needs to be renewed, the connection is reused to transfer the renewal request.
+
+**Q: Why not put the lock renewal as a stream parameter into tryLock?**
+
+- Lock renewal is not a high-frequency demand, so we want trylock to be as simple as possible;
+
+- Single responsibility principle. When we want to add a blocking lock, the renewal API can be reused;
+
+**Q: The renewal logic is too complicated, can we make it transparent to users?**
+
+A: The SDK can do this logic for users. The SDK will start a thread, coroutine, or Node.js timer to renew the lease automatically.
+
+
+#### Solution B: Make users not aware of the renewal logic
+Our sidecar can automatically renew the lease with heartbeat. App and sidecar maintain a heartbeat for failure detection.
+
+![img_1.png](../../../img/lock/img_1.png)
+
+Disadvantages/difficulties:
+
+1. If we reuse a public heartbeat, it is difficult to customize the heartbeat interval
+
+An option is to ensure that the heartbeat interval is low enough, such as once per second.
+
+2. How to ensure reliable failure detection?
+
+For example, the following java code `unlock()` method may fail:
+
+```java
+try {
+    // business logic protected by the lock
+} finally {
+    lock.unlock();
+}
+```
+
+If it is a lock in JVM, unlock can guarantee success (unless the entire JVM fails), but unlock may fail if it is called via the network. How to ensure that the heartbeat is interrupted after the call fails?
+
+Here shows the corner case:
+![](https://user-images.githubusercontent.com/26001097/124790369-ae1d6480-df7d-11eb-8f87-401635c49b69.png)
+
+![](https://user-images.githubusercontent.com/26001097/124790972-38fe5f00-df7e-11eb-8d0b-685750dcc3ba.png)
+
+![img_2.png](../../../img/lock/img_2.png)
+
+Solving this case requires the app to report some fine-grained status with the heartbeat.
+
+We can define an http callback SPI, which the sidecar polls for detection; the data structure returned by the callback is as follows:
+
+```json
+{
+ "status": "UP",
+ "details": {
+ "lock": [
+ {
+ "resource_id": "res1",
+ "lock_owner": "dasfdasfasdfa",
+ "type": "unlock_fail"
+ }
+ ],
+ "xxx": []
+ }
+}
+```
+
+The application has to handle status collection, reporting, cleanup after a successful report, and limiting the map capacity (for example, what if the map gets too large when reporting fails too many times?). This requires the app to implement some complex logic, and it must be put in the SDK.
+
+3. This implementation is actually the same as Solution A. It opens a separate connection for status management and failure detection, and the user reports status through this shared connection when necessary.
+
+4. Solution B actually makes the API spec rely on heartbeat logic. It relies on the heartbeat interval and the data structure returned by the heartbeat. It is equivalent to making the API spec depend on the sidecar implementation, unless we can also standardize the heartbeat API (including the interval, the returned data structure, etc.)
+
+#### Conclusion
+At present, the Lock Renewal API might be controversial and will not be added in the first version.
+
+Personally I prefer solution A: let the SDK do the renewal logic. Although users have to deal with the lease renewal logic directly when using grpc, it is not hard for developers to understand.
+
+I put it here to see your opinion.
+
+# 3. Future work
+
+- Reentrant Lock
+
+There will be some counting logic. We need to consider whether all locks support reentrancy by default, or whether to add a feature option in the parameters to indicate that the user needs the lock to be reentrant.
+
+- Heartbeat API
+
+If we want to implement more coordinator APIs such as blocking locks and leader election, we need a heartbeat API for failure detection, like `LeaseKeepAlive` in Etcd.
+
+- Blocking lock
+
+- Sequencer
+
+# 4. Reference
+
+[How to do distributed locking](https://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html)
+
+[The Chubby lock service for loosely-coupled distributed systems](https://static.googleusercontent.com/media/research.google.com/zh-TW//archive/chubby-osdi06.pdf)
\ No newline at end of file
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/design/notify/phone_call.md b/docs/en/design/notify/phone_call.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/design/notify/phone_call.md
rename to docs/en/design/notify/phone_call.md
diff --git a/docs/en/design/pluggable/design.md b/docs/en/design/pluggable/design.md
new file mode 100644
index 0000000000..e17384f56e
--- /dev/null
+++ b/docs/en/design/pluggable/design.md
@@ -0,0 +1,41 @@
+# Pluggable Component Design Document
+
+## Background
+
+Currently, Layotto's components are implemented inside the Layotto project, which requires users to develop new components in Golang and implement them in the Layotto project before compiling them together.
+This is very unfriendly for users of other languages, so Layotto needs to provide pluggable components, allowing users to implement their own components in any language. Layotto communicates with external components through the gRPC protocol.
+
+## Solution
+
+- Implement local cross-language component service discovery based on UDS (Unix Domain Sockets) to reduce communication overhead.
+- Implement cross-language component capabilities based on protobuf.
+
+## Data Flow Architecture
+
+![](../../../img/pluggable/layotto_datatflow.png)
+
+This is the data flow starting from a user's call to the SDK.
+The dashed portion is the data flow that the pluggable component primarily participates in.
+
+### Component Discovery
+
+![](../../../img/pluggable/layotto.png)
+
+As shown in the above figure, the user-defined component starts the socket service and places the socket file in the specified directory.
+When Layotto starts, it will read all socket files in the directory (skipping folders) and establish a socket connection.
+
+At present, Layotto aligns with Dapr and is not responsible for the lifecycle of user components.
+If a user component goes offline while serving, Layotto will not reconnect, and the component service becomes unavailable.
+
+Based on the usage of the community, it will be decided whether Layotto needs to support a process management module or use a separate service for management.
+
+Since Windows support for UDS is incomplete and Layotto itself has dropped Windows compatibility,
+the new feature adopts a UDS discovery mode that does not support Windows.
+
+## Component Register
+
+- As shown in the data flow diagram above, components registered by users need to implement the gRPC service defined by the pluggable proto.
+Layotto generates Golang gRPC stubs based on the protobuf IDL. This corresponds to the wrap component in the data flow diagram.
+- For the Layotto runtime there is no difference between a wrap component and a built-in component, and users perceive no difference either.
+- Layotto uses the gRPC reflection library to find out which components the user's service implements, and registers them in the global component registry for users to use.
+
diff --git a/docs/en/design/pluggable/usage.md b/docs/en/design/pluggable/usage.md
new file mode 100644
index 0000000000..7c6ac73e31
--- /dev/null
+++ b/docs/en/design/pluggable/usage.md
@@ -0,0 +1,57 @@
+# Pluggable Component Usage Guide
+
+This example demonstrates how users can implement and register their own components through the pluggable component capability provided by Layotto,
+and how to verify the correctness of the component through Layotto SDK calls.
+
+## step1. Write and run a pluggable component
+
+Next, run the pre-written demo code:
+
+```shell
+cd demo/pluggable/hello
+go run .
+```
+
+The following output indicates that the service started successfully:
+
+```shell
+start grpc server
+```
+
+>1. Taking the Go implementation of the `hello` component as an example, find the corresponding component's proto file in `layotto/spec/proto/pluggable` and generate the gRPC stubs for your implementation language.
+The Go pb files have already been generated and placed under `spec/proto/pluggable/v1`, and users can reference them directly.
+>2. In addition to implementing the interfaces defined in the protobuf file, the component needs to serve over a socket and place the socket file in the default path `/tmp/runtime/component-sockets`.
+You can also use the environment variable `LAYOTTO_COMPONENTS_SOCKETS_FOLDER` to modify the socket storage path.
+>3. In addition, users need to register the reflection service in the gRPC server, which Layotto's service discovery uses to obtain the concrete interfaces the gRPC service implements. For the specific code, please refer to `demo/pluggable/hello/main.go`.
+
+## step2. Launch Layotto
+
+```shell
+cd cmd/layotto
+go build -o layotto .
+./layotto start -c ../../configs/config_hello_component.json
+```
+
+> The type of the component filled in the configuration file is `hello-grpc-demo`, which is determined by the prefix name of the socket file.
+> The configuration items are consistent with registering regular hello components.
+> A metadata item is provided for users to set custom configuration.
+
+## step3. Component verification
+
+Based on existing component testing code, test the correctness of the user-implemented pluggable component.
+
+```shell
+cd demo/hello/common
+go run . -s helloworld
+```
+
+The following output indicates that the pluggable component was successfully registered and is running:
+
+```shell
+runtime client initializing for: 127.0.0.1:34904
+hello
+```
+
+## Understand the implementation principle of Layotto pluggable components
+
+If you are interested in the implementation principles or want to extend some functions, you can read the [Design Document for Pluggable Components](en/design/pluggable/design.md)
\ No newline at end of file
diff --git a/docs/en/design/pubsub/pubsub-api-and-compability-with-dapr-component.md b/docs/en/design/pubsub/pubsub-api-and-compability-with-dapr-component.md
new file mode 100644
index 0000000000..67e71b5d4a
--- /dev/null
+++ b/docs/en/design/pubsub/pubsub-api-and-compability-with-dapr-component.md
@@ -0,0 +1,149 @@
+# Pub/Sub API and compatibility with Dapr's packages
+# 1. Requirements
+1. Support Pub/Sub API
+2. The architecture can reuse Dapr's packages as much as possible
+
+# 2. High-level design
+## 2.1. Architecture: whether to reuse Dapr's sdk and proto
+In order to develop a set of universally accepted API specs with the Dapr and Envoy communities in the future, we try to be consistent with the Dapr API at present.
+
+Dapr's component library can be reused directly; the following discusses whether the sdk and proto should be reused, and how.
+
+### The problems we are facing
+
+1. Dapr's SDK hard-codes the package name of the API it calls, and the word 'dapr' appears in the name
+ ![img.png](../../../img/mq/design/img.png)
+2. We will have differentiated requirements in the future, such as new fields and new APIs. If we use dapr.proto directly, it will be inflexible
+### Solution
+
+#### Do not reuse sdk and proto; move the proto file to a neutral path
+![img_1.png](../../../img/mq/design/img_1.png)
+
+We first define an api-spec.proto. This proto is a superset of the Dapr API, and the path name is neutral, without the word 'layotto' or 'dapr'. Based on this proto, we can develop a neutral Runtime API sdk.
+
+Later, try to promote the proto into an api-spec accepted by the runtime community, or rebuild a path-neutral api-spec.proto with other communities.
+
+It does not matter if the proto changes during the promotion process. Layotto internally extracts an API layer under the proto to insulate itself from proto changes;
+
+If it is hard to push forward, we can write a dapr adapter in the neutral SDK in the short term, and use our SDK to call both dapr and layotto:
+![img_2.png](../../../img/mq/design/img_2.png)
+
+Advantages:
+
+1. Neat and tidy. If we reuse Dapr's sdk and proto, there is an inevitable problem: whenever our API differs from Dapr's, we need to wrap a layer of our own logic, which adds complexity and hackiness, feels like a knock-off, and makes the code harder to read.
+1. Extensible when our APIs differ from Dapr's
+
+Disadvantages:
+
+1. We may not notice subsequent changes to the Dapr client or proto, resulting in inconsistencies
+
+
+## 2.2. API Design
+### 2.2.1. Design principle: How to add fields to Dapr's API
+We want to reuse Dapr API, but in the long run, there will definitely be customization requirements. When our API and dapr's are different (for example, we just want to add a new field to a certain API of Dapr), should we create a new method name or just add a field to the original method?
+
+If we add a field to the original method, it may cause field conflicts.
+
+After several discussions, we finally decided to add fields directly in that situation. Conflicts of API are inevitable (of course, we will try to raise pull requests to the Dapr community to avoid conflicts)
+
+In the future, when everyone really sits together to reach a consensus and build api-spec, a new proto with a new path will be created. Anyway, there will be a new proto at that time, so don't worry about the current conflict.
+
+### 2.2.2. Between APP and Layotto
+Use the same grpc API as Dapr
+
+```protobuf
+service AppCallback {
+ // Lists all topics subscribed by this app.
+ rpc ListTopicSubscriptions(google.protobuf.Empty) returns (ListTopicSubscriptionsResponse) {}
+
+ // Subscribes events from Pubsub
+ rpc OnTopicEvent(TopicEventRequest) returns (TopicEventResponse) {}
+
+}
+```
+
+```protobuf
+service Dapr {
+ // Publishes events to the specific topic.
+ rpc PublishEvent(PublishEventRequest) returns (google.protobuf.Empty) {}
+}
+
+```
+
+### 2.2.3. Between Layotto and Component
+Use the same interface as Dapr;
+PublishRequest.Data and NewMessage.Data carry JSON data conforming to the CloudEvents 1.0 specification (it can be deserialized into a map[string]interface{})
+
+```go
+// PubSub is the interface for message buses
+type PubSub interface {
+ Init(metadata Metadata) error
+ Features() []Feature
+ Publish(req *PublishRequest) error
+ Subscribe(req SubscribeRequest, handler func(msg *NewMessage) error) error
+ Close() error
+}
+
+// PublishRequest is the request to publish a message
+type PublishRequest struct {
+ Data []byte `json:"data"`
+ PubsubName string `json:"pubsubname"`
+ Topic string `json:"topic"`
+ Metadata map[string]string `json:"metadata"`
+}
+
+
+// NewMessage is an event arriving from a message bus instance
+type NewMessage struct {
+ Data []byte `json:"data"`
+ Topic string `json:"topic"`
+ Metadata map[string]string `json:"metadata"`
+}
+
+```
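
As a toy illustration of this contract, the sketch below implements a synchronous in-memory bus. The types are trimmed local copies (the real ones live in dapr's components-contrib) and the Subscribe signature is simplified:

```go
package main

import "fmt"

// Trimmed local copies of the request/message types above.
type PublishRequest struct {
	Data  []byte
	Topic string
}

type NewMessage struct {
	Data  []byte
	Topic string
}

// inMemoryBus is a toy PubSub: Publish delivers synchronously to every handler
// subscribed to the topic. A real component would talk to a broker instead.
type inMemoryBus struct {
	handlers map[string][]func(*NewMessage) error
}

func newBus() *inMemoryBus {
	return &inMemoryBus{handlers: map[string][]func(*NewMessage) error{}}
}

func (b *inMemoryBus) Subscribe(topic string, handler func(*NewMessage) error) {
	b.handlers[topic] = append(b.handlers[topic], handler)
}

func (b *inMemoryBus) Publish(req *PublishRequest) error {
	for _, h := range b.handlers[req.Topic] {
		// In the real flow, Data holds CloudEvents 1.0 JSON.
		if err := h(&NewMessage{Data: req.Data, Topic: req.Topic}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	bus := newBus()
	bus.Subscribe("orders", func(m *NewMessage) error {
		fmt.Printf("got %s on %s\n", m.Data, m.Topic)
		return nil
	})
	bus.Publish(&PublishRequest{Topic: "orders", Data: []byte(`{"specversion":"1.0"}`)})
}
```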
+
+### 2.2.4. How does the sidecar know which port to call back
+
+Configure the callback port at startup. The price is that the sidecar can only serve one process.
+
+Temporarily choose this plan in this issue
+
+### 2.2.5. How to keep the subscription list real-time
+
+The app is called when the sidecar starts, and the subscription relationship is obtained at that time. Therefore, there are requirements for the startup sequence. Start the app first.
+
+It can be optimized into a timed polling mechanism in the future
+
+### 2.2.6. Does the subscription relationship support declarative configuration?
+
+In the first phase, only the way of opening an API for callback is supported, and the subsequent optimization will be declarative configuration or dynamic configuration.
+
+## 2.3. Config Design
+![img.png](../../../img/mq/design/config.png)
+
+# 3. Future Work
+## A Bigger Control Plane
+
+The Control Plane in the Service Mesh era only serves RPC, but in the Runtime API era, component configuration also needs to be distributed by the cluster; components also need service discovery and routing, so components also need their own Control Plane.
+
+It is convenient to have a Bigger Control Plane that integrates RPC and all middleware configuration data
+
+Maybe we have to extend the xDS protocol, like 'runtime Discovery Service'.
+
+## Subscription relationship support configuration
+
+The subscription relationship is currently obtained through the callback mechanism. We want it to be obtainable through configuration.
+
+## appcallback support TLS
+
+
+## Separate component configuration and app-specific configuration (callback port, app-id)
+The component configuration and app-specific configuration (callback port, app-id) are currently put together, which causes some problems:
+
+1. It's not easy to distribute the configuration to the whole cluster
+1. Can't do component access control (for example, Dapr can restrict app-id1 to only access topic_id1)
+![img_4.png](../../../img/mq/design/img_4.png)
+
+Need to refactor the original component logic.
+
+## Tracing
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/design/rpc/rpc-design-doc.md b/docs/en/design/rpc/rpc-design-doc.md
similarity index 84%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/design/rpc/rpc-design-doc.md
rename to docs/en/design/rpc/rpc-design-doc.md
index fbd98b37a9..4ca4ea4608 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/design/rpc/rpc-design-doc.md
+++ b/docs/en/design/rpc/rpc-design-doc.md
@@ -1,6 +1,6 @@
-# RPC DESIGN
+RPC DESIGN
-## API
+### API
runtime rpc API is same with Dapr.
### Core Abstraction
@@ -9,9 +9,9 @@ in order to decoupling with pb definition,add independent RPC abstrations.
- invoker: provide complete rpc ability,currently only Mosn invoker
- callback:before/after filter,extend with custom logic(eg: protocol convertion)
- channel:send request and receive response, talk to diffrent transport protocol(http、bolt...)
-
+
due to Mosn do all the dirty work, a lightweight framework is enough for layotto currently.
-
+
![img.png](../../../img/rpc/rpc-layer.png)
@@ -48,9 +48,9 @@ In layotto, we design a convenient way to support xprotocols. The only task need
"channel": [{
"size": 16, // analogy to connection nums
"protocol": "http", // communicate with mosn via this protocol
- "listener": "egress_runtime_http" // mosn's protocol listener name
- }]
- }
- }
- }
+ "listener": "egress_runtime_http" // mosn's protocol listener name
+ }]
+ }
+ }
+}
```
\ No newline at end of file
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/CONTRIBUTING.md b/docs/en/development/CONTRIBUTING.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/CONTRIBUTING.md
rename to docs/en/development/CONTRIBUTING.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/commands.md b/docs/en/development/commands.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/commands.md
rename to docs/en/development/commands.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/component_ref/component_ref.md b/docs/en/development/component_ref/component_ref.md
similarity index 97%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/component_ref/component_ref.md
rename to docs/en/development/component_ref/component_ref.md
index 4fb806b747..5a455a576a 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/component_ref/component_ref.md
+++ b/docs/en/development/component_ref/component_ref.md
@@ -1,13 +1,13 @@
-# layotto component reference
+## layotto component reference
-## Background
+### Background
When a component starts, it may need to use another component's skill. For example, when the `sequencer` component `A` starts, it needs to read its settings from the `config` component `B`.
To make this happen, layotto offers the "component reference" feature. This feature lets component A use the features of component B.
### Related Designs
-Currently, other components can only reference two types of components: ConfigStore and SecretStore. These are used to get configuration and secret keys.
+Currently, other components can only reference two types of components: ConfigStore and SecretStore. These are used to get configuration and secret keys.
The "referenced" components must implement the interface :
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/contributing-doc.md b/docs/en/development/contributing-doc.md
similarity index 91%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/contributing-doc.md
rename to docs/en/development/contributing-doc.md
index bc537ac24a..41ff32d759 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/contributing-doc.md
+++ b/docs/en/development/contributing-doc.md
@@ -8,7 +8,7 @@ This document describes how to modify/add documents. Documentation for this repo
Documents are stored in the 'docs/' directory, where 'docs/en' stores English documents and 'docs/zh' stores Chinese documents.
-![img_2.png](/img/development/doc/img_2.png)
+![img_2.png](../../img/development/doc/img_2.png)
## 2. Documentation Site Description
Files under docs/ directory will be automatically deployed to github pages and rendered through [docsify](https://docsify.js.org/#/).
@@ -19,7 +19,7 @@ Generally speaking, after the .md file is merged into the main branch, you can s
### step 1. Write a new markdown file
To add a document, create a folder and a .md file based on the directory structure. For example, if you want to write a design document for the distributed lock API, just create a new directory:
-![img_1.png](/img/development/doc/img_1.png)
+![img_1.png](../../img/development/doc/img_1.png)
### step 2. Update the sidebar
Remember to update the sidebar after adding new documents or revising existing documents.
@@ -42,9 +42,9 @@ The hyperlink mentioned here is the kind of links that will jump to other docume
### Incorrect Syntax
If you try to create a hyperlink with a relative path, then a 404 page will appear once you clicked it:
-![img_6.png](/img/development/doc/img_6.png)
+![img_6.png](../../img/development/doc/img_6.png)
-![img_7.png](/img/development/doc/img_7.png)
+![img_7.png](../../img/development/doc/img_7.png)
### Correct Syntax
@@ -52,7 +52,7 @@ There are two suggested ways to write hyperlinks:
a. Use a path relative to the 'docs/' directory. Such as:
-![img_5.png](/img/development/doc/img_5.png)
+![img_5.png](../../img/development/doc/img_5.png)
b. Use the full Url. Such as:
@@ -63,7 +63,7 @@ see [runtime_config.json](https://github.com/mosn/layotto/blob/main/configs/runt
## 5. Tips on image links
Images are stored under docs/img/ directory for the purpose that the Docsify site can access it
-![img.png](/img/development/doc/img.png)
+![img.png](../../img/development/doc/img.png)
It is recommended to use the full path when referencing images in documents, to avoid a bunch of messy path problems.
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/developing-api.md b/docs/en/development/developing-api.md
similarity index 99%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/developing-api.md
rename to docs/en/development/developing-api.md
index 0a4bce2db8..b965e9e7b4 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/developing-api.md
+++ b/docs/en/development/developing-api.md
@@ -78,7 +78,7 @@ need to have:
Correct example:[Dapr pub-sub quickstart](https://github.com/dapr/quickstarts/tree/v1.0.0/pub-sub)
Before the operation steps, explain what effect this demo wants to achieve, with illustrations
-![img.png](/img/development/api/img.png)
+![img.png](../../img/development/api/img.png)
Counter-example: the document only lists operation steps 1, 2, 3, 4, and users do not understand what the steps are meant to achieve.
@@ -102,7 +102,7 @@ Need to have :
##### How to use this API
- List of interfaces.For example:
-![img_4.png](/img/development/api/img_4.png)
+![img_4.png](../../img/development/api/img_4.png)
List the available interfaces. On the one hand, this saves users from digging through the proto files without knowing which APIs are relevant; on the other hand, it avoids frustrating users with missing interface documentation
- About the interface's input and output parameters: use proto comments as the interface documentation
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/developing-component.md b/docs/en/development/developing-component.md
similarity index 87%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/developing-component.md
rename to docs/en/development/developing-component.md
index 4be0c27418..e484bfe9cc 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/developing-component.md
+++ b/docs/en/development/developing-component.md
@@ -16,7 +16,7 @@ When developing new components, you can refer to the existing components. For ex
The folder name can use the component name, referring to the redis component below
-![img.png](/img/development/component/img.png)
+![img.png](../../img/development/component/img.png)
Tools you may use during development (for reference only, in the hope of simplifying development):
@@ -43,23 +43,23 @@ components/configstores/ apollo/configstore_test.go:
First, in configstore.go, encapsulate all calls to the SDK, network and Apollo into a single interface.
-![mock.png](/img/development/component/mock.png)
-![img_8.png](/img/development/component/img_8.png)
+![mock.png](../../img/development/component/mock.png)
+![img_8.png](../../img/development/component/img_8.png)
Then, encapsulate your code that calls the SDK and network into a struct which achieves that interface:
-![img_9.png](/img/development/component/img_9.png)
+![img_9.png](../../img/development/component/img_9.png)
Once you've done this refactoring, your code is testable (this is part of the idea of test-driven development: refactoring code into an injectable form to improve its testability)
Next, we can mock that interface when we write unit tests:
-![img_10.png](/img/development/component/img_10.png)
+![img_10.png](../../img/development/component/img_10.png)
Just mock it into the struct you want to test, and test it.
-![img_11.png](/img/development/component/img_11.png)
+![img_11.png](../../img/development/component/img_11.png)
Note: Generally, during an "integration test", network calls are made to a real ZooKeeper or Redis. In contrast, a unit test focuses on the local logic and does not call the real environment
@@ -72,10 +72,10 @@ So how should let Layotto load the components at startup?
Need to integrate new components in cmd/layotto/main.go, including:
### 3.1. Import your components in main.go
-![img_1.png](/img/development/component/img_1.png)
+![img_1.png](../../img/development/component/img_1.png)
### 3.2. Register your component in the NewRuntimeGrpcServer function of main.go
-![img_4.png](/img/development/component/img_4.png)
+![img_4.png](../../img/development/component/img_4.png)
After that, Layotto initializes the ZooKeeper component if the user has configured "I want to use ZooKeeper" in the Layotto configuration file
@@ -93,7 +93,7 @@ So how to configure when users want to use Zookeeper? We need to provide a sampl
We can copy a json configuration file from another component. For example, copy configs/config_redis.json and paste it into configs/config_zookeeper.json when developing a plug-in component
Then edit and modify the configuration shown below:
-![img_3.png](/img/development/component/img_3.png)
+![img_3.png](../../img/development/component/img_3.png)
@@ -103,12 +103,12 @@ We need a client demo, such as the distributed lock client demo that has two cor
#### a. If the component has a generic client, it doesn't need to be developed
If there is a common folder under the demo directory, it means the demo is a general-purpose demo that can be used by different components. You can pass the storeName parameter on the command line, so there is no need to develop a new demo
-![img_6.png](/img/development/component/img_6.png)
+![img_6.png](../../img/development/component/img_6.png)
#### b. If the component does not have a generic client or requires custom metadata arguments, copy and paste them
For example, when implementing distributed locks using ZooKeeper, you need some custom configurations. Then you can write your demo based on the Redis demo
-![img_7.png](/img/development/component/img_7.png)
+![img_7.png](../../img/development/component/img_7.png)
Note: If the demo code hits an error that shouldn't happen, you can panic directly. Later, we will use the demo to run the integration test; if a panic occurs, the integration test fails. For example, demo/lock/redis/client.go:
@@ -122,19 +122,19 @@ Note: If there are errors in the demo code that shouldn't be there , you can pan
```
### 4.3. Refer to the QuickStart documentation to start Layotto and Demo and see if any errors are reported
-For example, refer to the [QuickStart documentation of the Distributed Lock API](start/lock/start.md) , start your dependent environment (such as ZooKeeper), and start Layotto (remember to use the configuration file you just added!). And check for errors.
+For example, refer to the [QuickStart documentation of the Distributed Lock API](zh/start/lock/start.md) , start your dependent environment (such as ZooKeeper), and start Layotto (remember to use the configuration file you just added!). And check for errors.
Note: The following Error is ok, just ignore it
-![img_2.png](/img/development/component/img_2.png)
+![img_2.png](../../img/development/component/img_2.png)
Start demo and call Layotto to see if any errors are reported. If it is a universal client, you can pass storeName with -s storeName in the command line
-![img_5.png](/img/development/component/img_5.png)
+![img_5.png](../../img/development/component/img_5.png)
If there is no error when running, it means the test passed!
## 5、New component description documents
When the above code work is completed , it is better to add the configuration documentation of the component, explaining what configuration items the component supports and how to start the environment that the component depends on (for example, how to start ZooKeeper with Docker).
-You can refer to the [Redis component description of the Lock API (Chinese)](component_specs/lock/redis.md) and [the Redis component description of the Lock API (English)](component_specs/lock/redis.md), also can copy and paste change.
\ No newline at end of file
+You can refer to the [Redis component description of the Lock API (Chinese)](zh/component_specs/lock/redis.md) and [the Redis component description of the Lock API (English)](en/component_specs/lock/redis.md); you can also copy, paste and modify them.
\ No newline at end of file
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/github-workflows.md b/docs/en/development/github-workflows.md
similarity index 93%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/github-workflows.md
rename to docs/en/development/github-workflows.md
index b9947b2359..1da577af77 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/github-workflows.md
+++ b/docs/en/development/github-workflows.md
@@ -85,7 +85,7 @@ Layotto Env Pipeline Task Trigger Events:
## Layotto Dev Pipeline 🌊 (Before Merged)
-![release.png](/img/development/workflow/workflow-dev.png)
+![release.png](../../img/development/workflow/workflow-dev.png)
### Job Task Content
@@ -121,7 +121,7 @@ The layotto dev pipeline (before merged) is mainly responsible for verifying th
## Layotto Dev Pipeline 🌊 (After Merged)
-![release.png](/img/development/workflow/workflow-merge.png)
+![release.png](../../img/development/workflow/workflow-merge.png)
### Job Task Content
@@ -160,7 +160,7 @@ The layotto dev pipeline (after merged) is mainly responsible for the verificat
## Layotto Release Pipeline 🌊
-![release.png](/img/development/workflow/workflow-release.png)
+![release.png](../../img/development/workflow/workflow-release.png)
### Job Task Content
@@ -177,9 +177,9 @@ The layotto release pipeline is mainly responsible for the release and verifica
+ Linux AMD64 Artifact : Build linux amd64 binary verification for code
+ Linux ARM64 Artifact : Build linux arm64 binary verification for code
+ Linux AMD64 WASM Artifact : Build linux AMD64 binary verification for layotto wasm
-+ Linux AMD64 WASM Image : Release the latest version of layotto wasm image. The image specification is `layotto/faas-amd64:{latest_tagname}`
-+ Linux AMD64 Image : Release the latest version of layotto wasm image. The image specification is `layotto/layotto:{latest_tagname}`
-+ Linux ARMD64 Image : Release the latest version of layotto wasm image. The image specification is `layotto/layotto.arm64:{latest_tagname}`
++ Linux AMD64 WASM Image : Release the latest version of the layotto wasm image. The image specification is layotto/faas-amd64:{latest_tagname}
++ Linux AMD64 Image : Release the latest version of the layotto image. The image specification is layotto/layotto:{latest_tagname}
++ Linux ARM64 Image : Release the latest version of the layotto arm64 image. The image specification is layotto/layotto.arm64:{latest_tagname}
### Job Trigger Event
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/test-quickstart.md b/docs/en/development/test-quickstart.md
similarity index 99%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/test-quickstart.md
rename to docs/en/development/test-quickstart.md
index d8f8785b3d..f489ea64ae 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/development/test-quickstart.md
+++ b/docs/en/development/test-quickstart.md
@@ -6,7 +6,7 @@ So we need to test Quickstart regularly to make sure it works.
But the process of manually testing Quickstart periodically and fixing exceptions in the documentation is too time-consuming.
-
+
It's a pain in the ass!
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/operation/README.md b/docs/en/operation/README.md
similarity index 95%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/operation/README.md
rename to docs/en/operation/README.md
index 0be93f1eb3..70bf84a217 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/operation/README.md
+++ b/docs/en/operation/README.md
@@ -10,7 +10,7 @@ There are some ways to deploy Layotto that you can find below.
### Deploy Layotto using released binaries
-You can start Layotto directly via executing the binary file. Refer to the [Quick start](start) guide.
+You can start Layotto directly via executing the binary file. Refer to the [Quick start](en/start) guide.
### Deploy using Docker
@@ -25,7 +25,7 @@ It does not contain a `config.json` configuration file in the image, you can mou
docker run -v "$(pwd)/configs/config.json:/runtime/configs/config.json" -d -p 34904:34904 --name layotto layotto/layotto start
```
-Of course, you can also run Layotto and other systems (such as Redis) at the same time via docker-compose. Refer to the [Quick start](start/state/start?id=step-1-deploy-redis-and-layotto)
+Of course, you can also run Layotto and other systems (such as Redis) at the same time via docker-compose. Refer to the [Quick start](en/start/state/start?id=step-1-deploy-redis-and-layotto)
### Deploy on Kubernetes
@@ -51,7 +51,7 @@ The principle of this script is to replace the binary file in the MOSN proxyv2 i
You can prepare your own image and K8s configuration file, then deploy Layotto via Kubernetes.
-We are working on the official Layotto image and the solution for deploying to Kubernetes using Helm, so feel free to join us to build it. More details in ``
+We are working on the official Layotto image and the solution for deploying to Kubernetes using Helm, so feel free to join us to build it. More details in
## 2.Toggle existing MOSN to Layotto for MOSN users
diff --git a/docs/docs/operation/sidecar_injector.md b/docs/en/operation/sidecar_injector.md
similarity index 100%
rename from docs/docs/operation/sidecar_injector.md
rename to docs/en/operation/sidecar_injector.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/sdk_reference/go/start.md b/docs/en/sdk_reference/go/start.md
similarity index 97%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/sdk_reference/go/start.md
rename to docs/en/sdk_reference/go/start.md
index 0ce299e4a6..ddc2551d9c 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/sdk_reference/go/start.md
+++ b/docs/en/sdk_reference/go/start.md
@@ -66,7 +66,7 @@ func main() {
Start Layotto and test the HelloWorld program above by using the simplest configuration file, the content is as follows:
-> For a more detailed introduction to configuration files, please refer to [configuration](configuration/overview.md).
+> For a more detailed introduction to configuration files, please refer to [configuration](en/configuration/overview.md).
```json
{
@@ -141,7 +141,7 @@ NewClientWithConnection(conn *grpc.ClientConn) Client
## Experience other interface functions
-Taking the distributed id `sequencer` as an example, use the 'redis' type. Please refer to this [document](component_specs/sequencer/redis. md) for relevant parameters usage.
+Taking the distributed id `sequencer` as an example, use the 'redis' type. Please refer to this [document](en/component_specs/sequencer/redis.md) for usage of the relevant parameters.
Please refer to `demo/sequencer/common/client.go` for specific code configuration, as follows:
@@ -258,7 +258,7 @@ Currently, go-sdk only encapsulates GRPC with a thin layer, so for interfaces th
Here, take the `local` component type in the `file` interface as an example.
-You can view the proto files in [`spec/proto/runtime`](https://github.com/mosn/layotto/tree/main/spec/proto/runtime/v1) or the [GRPC API docs](api_reference/README.md)
+You can view the proto files in [`spec/proto/runtime`](https://github.com/mosn/layotto/tree/main/spec/proto/runtime/v1) or the [GRPC API docs](en/api_reference/README.md)
The complete code reference is `demo/file/local/client.go`, and the content is as follows:
@@ -344,6 +344,6 @@ func main() {
## More Examples
-For other SDK interfaces, please refer to the code examples in the [demo directory](https://github.com/mosn/layotto/tree/main/demo) and [quick start startup document](start/state/state.md)
+For other SDK interfaces, please refer to the code examples in the [demo directory](https://github.com/mosn/layotto/tree/main/demo) and [quick start startup document](en/start/state/state.md)
-Refer to the [configs example](https://github.com/mosn/layotto/tree/main/configs) for writing relevant configuration files and [Component Configuration Document](configuration/overview.md)
\ No newline at end of file
+Refer to the [configs example](https://github.com/mosn/layotto/tree/main/configs) for writing relevant configuration files and [Component Configuration Document](en/configuration/overview.md)
\ No newline at end of file
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/actuator/start.md b/docs/en/start/actuator/start.md
similarity index 98%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/actuator/start.md
rename to docs/en/start/actuator/start.md
index 70198c3997..33153eb6c7 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/actuator/start.md
+++ b/docs/en/start/actuator/start.md
@@ -165,4 +165,4 @@ If you are implementing your own Layotto component, you can add health check cap
### How it works
-If you are interested in the implementation principle, or want to extend some functions in Actuator, you can read [Actuator's design document](design/actuator/actuator-design-doc.md)
\ No newline at end of file
+If you are interested in the implementation principle, or want to extend some functions in Actuator, you can read [Actuator's design document](en/design/actuator/actuator-design-doc.md)
\ No newline at end of file
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/api_plugin/generate.md b/docs/en/start/api_plugin/generate.md
similarity index 92%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/api_plugin/generate.md
rename to docs/en/start/api_plugin/generate.md
index 3587aa62aa..9da10dc2d6 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/api_plugin/generate.md
+++ b/docs/en/start/api_plugin/generate.md
@@ -9,7 +9,7 @@ Writing the API plugin yourself is boring. You can use Layotto's code generator
>
> The code of in-tree plugins is located in the layotto repo, while the code of out-of-tree plugins can be placed in your own repo outside the layotto repo.
>
-> **This guide will show you how to generate out-of-tree plugins**. If you want to generate in-tree plugins, see [the other doc](api_reference/how_to_generate_api_doc) for help.
+> **This guide will show you how to generate out-of-tree plugins**. If you want to generate in-tree plugins, see [the other doc](en/api_reference/how_to_generate_api_doc) for help.
Let's say you want to add a `PublishTransactionalMessage` method to the existing pubsub API. You write a new proto file `cmd/layotto_multiple_api/advanced_queue/advanced_queue.proto`:
@@ -55,6 +55,6 @@ Fix the path error and then you can register this API plugin in your `main`.
## Reference
-[How to generate code and documentation from the .proto files](api_reference/how_to_generate_api_doc)
+[How to generate code and documentation from the .proto files](en/api_reference/how_to_generate_api_doc)
[protoc-gen-p6](https://github.com/layotto/protoc-gen-p6)
\ No newline at end of file
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/api_plugin/helloworld.md b/docs/en/start/api_plugin/helloworld.md
similarity index 94%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/api_plugin/helloworld.md
rename to docs/en/start/api_plugin/helloworld.md
index bf36e1389e..bf4c1c5e22 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/api_plugin/helloworld.md
+++ b/docs/en/start/api_plugin/helloworld.md
@@ -58,4 +58,4 @@ This message is the response of the helloworld API you just registered in step 1
## Next
You can refer to the demo code to implement your own API. Have a try !
-For more details,you can refer to the [design doc](design/api_plugin/design.md)
\ No newline at end of file
+For more details, you can refer to the [design doc](zh/design/api_plugin/design.md)
\ No newline at end of file
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/configuration/overview.md b/docs/en/start/configuration/overview.md
similarity index 90%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/configuration/overview.md
rename to docs/en/start/configuration/overview.md
index c92d7f6242..e5d130934f 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/configuration/overview.md
+++ b/docs/en/start/configuration/overview.md
@@ -20,6 +20,6 @@ A: Configuration has some special capabilities, such as sidecar caching, such as
This is like the difference between the configuration center and the database, both are storage, but the former is domain-specific and has special functions
## Quick start
-- [Use Apollo as Configuration Center](start/configuration/start-apollo.md)
-- [Use Etcd as Configuration Center](start/configuration/start.md)
-- [Use Nacos as Configuration Center](start/configuration/start-nacos.md)
\ No newline at end of file
+- [Use Apollo as Configuration Center](en/start/configuration/start-apollo.md)
+- [Use Etcd as Configuration Center](en/start/configuration/start.md)
+- [Use Nacos as Configuration Center](en/start/configuration/start-nacos.md)
\ No newline at end of file
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/configuration/start-apollo.md b/docs/en/start/configuration/start-apollo.md
similarity index 97%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/configuration/start-apollo.md
rename to docs/en/start/configuration/start-apollo.md
index 781a3576f1..01d4b60ec9 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/configuration/start-apollo.md
+++ b/docs/en/start/configuration/start-apollo.md
@@ -3,7 +3,7 @@ This example shows how to add, delete, modify, and watch the [apollo configurati
The architecture of this example is shown in the figure below. The processes started are: client APP, Layotto, Apollo server
-![img.png](/img/configuration/apollo/arch.png)
+![img.png](../../../img/configuration/apollo/arch.png)
## Step 1.Deploy Apollo (optional)
@@ -39,7 +39,7 @@ After success, a new layotto file will be generated in the directory. Let's run
>
>A: With the default configuration, Layotto will connect to apollo's demo server, but the configuration in that demo server may be modified by others. So the error may be because some configuration has been modified.
>
-> In this case, you can try other demos, such as [the etcd demo](start/configuration/start.md)
+> In this case, you can try other demos, such as [the etcd demo](en/start/configuration/start.md)
## Step 3. Run the client demo
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/configuration/start-nacos.md b/docs/en/start/configuration/start-nacos.md
similarity index 97%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/configuration/start-nacos.md
rename to docs/en/start/configuration/start-nacos.md
index 3bce31a88b..131df84455 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/configuration/start-nacos.md
+++ b/docs/en/start/configuration/start-nacos.md
@@ -4,7 +4,7 @@ This example shows how to add, delete, modify, and watch the [nacos configurati
The architecture of this example is shown in the figure below. The processes started are: client APP, Layotto, Nacos server
-![](/img/configuration/nacos/layotto-nacos-configstore-component.png)
+![](../../../img/configuration/nacos/layotto-nacos-configstore-component.png)
[The config file](https://github.com/mosn/layotto/blob/main/configs/config_nacos.json) declares `nacos` in the `config_store` section, but users can change it to another configuration center they want (currently only nacos is supported).
## Deploy Nacos and Layotto
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/configuration/start.md b/docs/en/start/configuration/start.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/configuration/start.md
rename to docs/en/start/configuration/start.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/cryption/start.md b/docs/en/start/cryption/start.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/cryption/start.md
rename to docs/en/start/cryption/start.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/delay_queue/start.md b/docs/en/start/delay_queue/start.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/delay_queue/start.md
rename to docs/en/start/delay_queue/start.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/email/start.md b/docs/en/start/email/start.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/email/start.md
rename to docs/en/start/email/start.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/faas/start.md b/docs/en/start/faas/start.md
similarity index 94%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/faas/start.md
rename to docs/en/start/faas/start.md
index b2045aafa3..8ade1f9d0a 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/faas/start.md
+++ b/docs/en/start/faas/start.md
@@ -1,10 +1,10 @@
-# FaaS QuickStart
+## FaaS QuickStart
-## 1. Features
+### 1. Features
Layotto supports loading and running functions in the form of wasm, and supports calling each other between functions and accessing infrastructure, such as Redis.
-Detailed design documents can refer to:[FaaS design](design/faas/faas-poc-design.md)
+Detailed design documents can refer to:[FaaS design](en/design/faas/faas-poc-design.md)
### 2. Dependent software
@@ -19,7 +19,7 @@ The following software needs to be installed to run this demo:
Follow the instructions on the official website.
3. [virtualbox](https://www.oracle.com/virtualization/technologies/vm/virtualbox.html)
-
+
Download the installation package from the official website and install it. You can also use [homebrew](https://brew.sh/) to install it on mac. If the startup fails after installation, please refer to [The host-only adapter we just created is not visible](https://github.com/kubernetes/minikube/issues/3614).
@@ -145,7 +145,7 @@ There are 100 inventories for book1.
### 5. Process introduction
-![img.png](/img/faas/faas-request-process.jpg)
+![img.png](../../../img/faas/faas-request-process.jpg)
1. send http request to func1
2. func1 calls func2 through Runtime ABI
@@ -156,7 +156,7 @@ There are 100 inventories for book1.
1. Virtualbox failed to start, "The host-only adapter we just created is not visible":
- refer [The host-only adapter we just created is not visible](https://github.com/kubernetes/minikube/issues/3614)
+ refer [The host-only adapter we just created is not visible](https://github.com/kubernetes/minikube/issues/3614)
2. When Layotto is started, the redis connection fails, and "occurs an error: redis store: error connecting to redis at" is printed:
diff --git a/docs/en/start/istio/README.md b/docs/en/start/istio/README.md
new file mode 100644
index 0000000000..68be20b05b
--- /dev/null
+++ b/docs/en/start/istio/README.md
@@ -0,0 +1,7 @@
+# Demo of Istio 1.10.6 integration
+
+The latest version of Layotto integrates Istio 1.10.6. You can manage the traffic of Layotto (data plane) using Istio (control plane).
+
+You can experience integrating Istio 1.10.6 in [Online Lab](en/start/lab.md)
+
+For more details, see ["Deploying Layotto using Istio"](en/operation/?id=option-1-deploy-using-istio)
\ No newline at end of file
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/istio/start.md b/docs/en/start/istio/start.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/istio/start.md
rename to docs/en/start/istio/start.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/lab.md b/docs/en/start/lab.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/lab.md
rename to docs/en/start/lab.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/lifecycle/start.md b/docs/en/start/lifecycle/start.md
similarity index 98%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/lifecycle/start.md
rename to docs/en/start/lifecycle/start.md
index c8f5fd0dc9..dbedb4a97f 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/lifecycle/start.md
+++ b/docs/en/start/lifecycle/start.md
@@ -128,7 +128,7 @@ Explore other Quickstarts through the navigation bar on the left.
[API Reference](https://mosn.io/layotto/api/v1/runtime.html)
-[Design doc](design/lifecycle/apply_configuration)
+[Design doc](en/design/lifecycle/apply_configuration)
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/lock/start.md b/docs/en/start/lock/start.md
similarity index 97%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/lock/start.md
rename to docs/en/start/lock/start.md
index 360adfe04e..f0de2c3e7c 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/lock/start.md
+++ b/docs/en/start/lock/start.md
@@ -4,7 +4,7 @@ This example shows how to call redis through Layotto to trylock/unlock.
The architecture of this example is shown in the figure below, and the started processes are: redis, Layotto, a client program with two goroutines trying the same lock concurrently.
-![img.png](/img/lock/img.png)
+![img.png](../../../img/lock/img.png)
## step 1. Deploy Redis and Layotto
@@ -41,7 +41,7 @@ Use the following command to check if redis is installed:
docker images
```
-![img.png](/img/mq/start/img.png)
+![img.png](../../../img/mq/start/img.png)
3. Run the container
@@ -159,4 +159,4 @@ Explore other Quickstarts through the navigation bar on the left.
### Understand the design principle of Distributed Lock API
-If you are interested in the design principle, or want to extend some functions, you can read [Distributed Lock API design document](design/lock/lock-api-design.md)
\ No newline at end of file
+If you are interested in the design principle, or want to extend some functions, you can read [Distributed Lock API design document](en/design/lock/lock-api-design.md)
\ No newline at end of file
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/network_filter/tcpcopy.md b/docs/en/start/network_filter/tcpcopy.md
similarity index 84%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/network_filter/tcpcopy.md
rename to docs/en/start/network_filter/tcpcopy.md
index 5ca17897d1..4e51266c08 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/network_filter/tcpcopy.md
+++ b/docs/en/start/network_filter/tcpcopy.md
@@ -2,10 +2,9 @@
## Introduction
-When you run the demo according to the quick-start document [Configuration demo with apollo](start/configuration/start-apollo.md), you may notice that there is such a configuration in the configuration file config_apollo.json:
+When you run the demo according to the quick-start document [Configuration demo with apollo](en/start/configuration/start-apollo.md), you may notice that there is such a configuration in the configuration file config_apollo.json:
```json
-
{
"type": "tcpcopy",
"config": {
@@ -17,16 +16,15 @@ When you run the demo according to the quick-start document [Configuration demo
"mem_max_rate": 80
}
}
-
```
The meaning of this configuration is to load the tcpcopy plug-in at startup to dump the tcp traffic.
After enabling this configuration, when Layotto receives a request and the conditions for traffic dump are met, it will write the binary request data to the local file system.
-The "dumped" binary traffic data will be stored in the `${user's home directory}/logs/mosn` directory, or under the /home/admin/logs/mosn directory:
+The "dumped" binary traffic data will be stored in the ${user's home directory}/logs/mosn directory, or under the /home/admin/logs/mosn directory:
-![img.png](/img/tcp_dump.png)
+![img.png](../../../img/tcp_dump.png)
You can use these data in combination with other tools and infrastructure to do something cool, such as traffic playback, bypass verification, etc.
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/oss/start.md b/docs/en/start/oss/start.md
similarity index 98%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/oss/start.md
rename to docs/en/start/oss/start.md
index 980fa34f0a..86bfca9920 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/oss/start.md
+++ b/docs/en/start/oss/start.md
@@ -147,7 +147,7 @@ Explore other Quickstarts through the navigation bar on the left.
[API reference](https://mosn.io/layotto/api/v1/s3.html)
-[Design doc of ObjectStorageService API ](design/oss/design)
+[Design doc of ObjectStorageService API](en/design/oss/design)
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/phone/start.md b/docs/en/start/phone/start.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/phone/start.md
rename to docs/en/start/phone/start.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/pubsub/start.md b/docs/en/start/pubsub/start.md
similarity index 98%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/pubsub/start.md
rename to docs/en/start/pubsub/start.md
index fdf63cbed8..1670fb4f08 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/pubsub/start.md
+++ b/docs/en/start/pubsub/start.md
@@ -11,7 +11,7 @@ This example shows how to call redis through Layotto to publish/subscribe messag
The architecture of this example is shown in the figure below. The running processes are: redis, a Subscriber program that listens to events, Layotto, and a Publisher program that publishes events.
-![img_1.png](/img/mq/start/img_1.png)
+![img_1.png](../../../img/mq/start/img_1.png)
### Step 1. Start the Subscriber
@@ -210,4 +210,4 @@ Explore other Quickstarts through the navigation bar on the left.
#### Understand the principle of Pub/Sub API implementation
-If you are interested in the implementation principle, or want to extend some functions, you can read [Pub/Sub API design document](design/pubsub/pubsub-api-and-compability-with-dapr-component.md)
+If you are interested in the implementation principle, or want to extend some functions, you can read [Pub/Sub API design document](en/design/pubsub/pubsub-api-and-compability-with-dapr-component.md)
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/rpc/dubbo_json_rpc.md b/docs/en/start/rpc/dubbo_json_rpc.md
similarity index 86%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/rpc/dubbo_json_rpc.md
rename to docs/en/start/rpc/dubbo_json_rpc.md
index 5a52cfa88e..ba69391288 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/rpc/dubbo_json_rpc.md
+++ b/docs/en/start/rpc/dubbo_json_rpc.md
@@ -3,7 +3,7 @@
## Quick Start
### step 1. Edit config file,add `dubbo_json_rpc` filter
-![jsonrpc.jpg](/img/rpc/jsonrpc.jpg)
+![jsonrpc.jpg](../../../img/rpc/jsonrpc.jpg)
### step 2. Compile and start layotto
@@ -54,8 +54,8 @@ Start dubbo server:
go run demo/rpc/dubbo_json_rpc/dubbo_json_client/client.go -d '{"jsonrpc":"2.0","method":"GetUser","params":["A003"],"id":9527}'
```
-![jsonrpc.jpg](/img/rpc/jsonrpcresult.jpg)
+![jsonrpc.jpg](../../../img/rpc/jsonrpcresult.jpg)
## Next Step
-If you are interested in the implementation principle, or want to extend some functions, you can read [RPC design document](design/rpc/rpc-design-doc.md)
+If you are interested in the implementation principle, or want to extend some functions, you can read [RPC design document](en/design/rpc/rpc-design-doc.md)
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/rpc/helloworld.md b/docs/en/start/rpc/helloworld.md
similarity index 95%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/rpc/helloworld.md
rename to docs/en/start/rpc/helloworld.md
index d83a4db834..5a5497d2e6 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/rpc/helloworld.md
+++ b/docs/en/start/rpc/helloworld.md
@@ -37,7 +37,7 @@ go run ${project_path}/demo/rpc/http/echoserver/echoserver.go
go run ${project_path}/demo/rpc/http/echoclient/echoclient.go -d 'hello layotto'
```
-![rpchello.png](/img/rpc/rpchello.png)
+![rpchello.png](../../../img/rpc/rpchello.png)
## Explanation
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/secret/secret_ref.md b/docs/en/start/secret/secret_ref.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/secret/secret_ref.md
rename to docs/en/start/secret/secret_ref.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/secret/start.md b/docs/en/start/secret/start.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/secret/start.md
rename to docs/en/start/secret/start.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/sequencer/start.md b/docs/en/start/sequencer/start.md
similarity index 96%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/sequencer/start.md
rename to docs/en/start/sequencer/start.md
index 70d51042f4..6870230653 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/sequencer/start.md
+++ b/docs/en/start/sequencer/start.md
@@ -10,7 +10,7 @@ This example shows how to call Etcd through Layotto to generate a distributed un
The architecture of this example is shown in the figure below, and the processes started are: Etcd, Layotto, and client programs
-![img.png](/img/sequencer/etcd/img.png)
+![img.png](../../../img/sequencer/etcd/img.png)
### step 1. Deploy etcd and Layotto
#### **With Docker Compose**
@@ -174,7 +174,7 @@ In fact, sdk is only a very thin package for grpc, using sdk is about equal to d
#### Want to learn more about Sequencer API?
What does the Sequencer API do, what problems does it solve, and in what scenarios should I use it?
-If you are confused and want to know more details about the use of Sequencer API, you can read [Sequencer API Usage Document](api_reference/sequencer/reference)
+If you are confused and want to know more details about the use of Sequencer API, you can read [Sequencer API Usage Document](en/api_reference/sequencer/reference)
#### Details later, let's continue to experience other APIs
Explore other Quickstarts through the navigation bar on the left.
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/sms/start.md b/docs/en/start/sms/start.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/sms/start.md
rename to docs/en/start/sms/start.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/state/start.md b/docs/en/start/state/start.md
similarity index 96%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/state/start.md
rename to docs/en/start/state/start.md
index fa2a15ee46..a54113e73d 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/state/start.md
+++ b/docs/en/start/state/start.md
@@ -9,7 +9,7 @@ This example shows how to call redis through Layotto to add, delete, modify and
The architecture of this example is shown in the figure below, and the started processes are: redis, Layotto, client program
-![img.png](/img/state/img.png)
+![img.png](../../../img/state/img.png)
### step 1. Deploy Redis and Layotto
@@ -46,7 +46,7 @@ Use the following command to check if redis is installed:
docker images
```
-![img.png](/img/mq/start/img.png)
+![img.png](../../../img/mq/start/img.png)
3. Run the container
@@ -183,7 +183,7 @@ In fact, sdk is only a very thin package for grpc, using sdk is about equal to d
#### Want to learn more about State API?
What does the State API do, what problems does it solve, and in what scenarios should I use it?
-If you have such confusion and want to know more details about State API, you can read [State API Usage Document](building_blocks/state/reference.md)
+If you have such questions and want to know more details about the State API, you can read [State API Usage Document](en/api_reference/state/reference)
#### Details later, let's continue to experience other APIs
Explore other Quickstarts through the navigation bar on the left.
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/stream_filter/flow_control.md b/docs/en/start/stream_filter/flow_control.md
similarity index 94%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/stream_filter/flow_control.md
rename to docs/en/start/stream_filter/flow_control.md
index b71ab961a4..0e088f095a 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/stream_filter/flow_control.md
+++ b/docs/en/start/stream_filter/flow_control.md
@@ -1,4 +1,4 @@
-[查看中文版本](start/stream_filter/flow_control.md)
+[查看中文版本](zh/start/stream_filter/flow_control.md)
## Method Level Flow Control
@@ -29,7 +29,7 @@ this can help `/spec.proto.runtime.v1.Runtime/SayHello` method has a flow contro
The client code is here: [client.go](https://github.com/mosn/layotto/blob/main/demo/flowcontrol/client.go). The logic is simple: it sends 10 requests to the server, and the result is shown below:
-![img.png](/img/flow_control.png)
+![img.png](../../../img/flow_control.png)
The first 5 requests succeed, while the last 5 requests are rate-limited.
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/trace/prometheus.md b/docs/en/start/trace/prometheus.md
similarity index 100%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/trace/prometheus.md
rename to docs/en/start/trace/prometheus.md
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/trace/skywalking.md b/docs/en/start/trace/skywalking.md
similarity index 98%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/trace/skywalking.md
rename to docs/en/start/trace/skywalking.md
index 51670ccf48..9d4beb8368 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/trace/skywalking.md
+++ b/docs/en/start/trace/skywalking.md
@@ -86,7 +86,7 @@ Run the demo client:
Access http://127.0.0.1:8080
-![](/img/trace/sky.png)
+![](../../../img/trace/sky.png)
## Release resources
If you run Layotto with docker, remember to shut it down:
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/trace/trace.md b/docs/en/start/trace/trace.md
similarity index 97%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/trace/trace.md
rename to docs/en/start/trace/trace.md
index 688f8224ca..9524980ee3 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/trace/trace.md
+++ b/docs/en/start/trace/trace.md
@@ -50,7 +50,7 @@ The corresponding client demo is in [client.go](https://github.com/mosn/layotto/
Check the log of layotto, you will see the detailed tracking log printed out:
-![img.png](/img/trace/trace.png)
+![img.png](../../../img/trace/trace.png)
### Configuration parameter description
@@ -79,7 +79,7 @@ The interceptor will start tracing every time the grpc method is called, generat
Overall diagram of the tracing framework:
-![img.png](/img/trace/structure.png)
+![img.png](../../../img/trace/structure.png)
#### Span structure:
@@ -150,7 +150,7 @@ trace.SetExtraComponentInfo(ctx, fmt.Sprintf("method: %+v, store: %+v", "Get", "
The results printed by trace are as follows:
-![img.png](/img/trace/trace.png)
+![img.png](../../../img/trace/trace.png)
## 2. Metrics
@@ -162,6 +162,6 @@ curl --location --request GET 'http://127.0.0.1:34903/metrics'
The result is shown below:
-![img.png](/img/trace/metric.png)
+![img.png](../../../img/trace/metric.png)
For the metric principle of mosn, please refer to [mosn official document](https://mosn.io/blog/code/mosn-log/)
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/wasm/start.md b/docs/en/start/wasm/start.md
similarity index 89%
rename from docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/wasm/start.md
rename to docs/en/start/wasm/start.md
index 3bba3001f9..22a69cf992 100644
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/start/wasm/start.md
+++ b/docs/en/start/wasm/start.md
@@ -1,4 +1,3 @@
-# Run business logic in Layotto using WASM
## Run business logic in Layotto using WASM
### What is WASM on Layotto?
@@ -6,11 +5,11 @@ The sidecar of service mesh and multi-runtime is a common infrastructure for the
For example, a business system has developed an SDK in the form of a jar package for use by other business systems. Its features are not universal to the entire company, so it cannot persuade the middleware team to develop them into the company's unified sidecar.
-![img_1.png](/img/wasm/img_1.png)
+![img_1.png](../../../img/wasm/img_1.png)
And if it becomes like this:
-![img.png](/img/wasm/img.png)
+![img.png](../../../img/wasm/img.png)
If developers stop developing SDKs (jar packages) and instead develop .wasm files that support independent upgrade and deployment, there is no longer any pain in pushing users to upgrade.
@@ -101,26 +100,24 @@ docker rm -f redis-test
We can specify the WASM file to be loaded in `./demo/faas/config.json` config file:
```json
-
"config": {
-"function1": {
-"name": "function1",
-"instance_num": 1,
-"vm_config": {
-"engine": "wasmtime",
-"path": "demo/faas/code/golang/client/function_1.wasm"
-}
-},
-"function2": {
-"name": "function2",
-"instance_num": 1,
-"vm_config": {
-"engine": "wasmtime",
-"path": "demo/faas/code/golang/server/function_2.wasm"
-}
+ "function1": {
+ "name": "function1",
+ "instance_num": 1,
+ "vm_config": {
+ "engine": "wasmtime",
+ "path": "demo/faas/code/golang/client/function_1.wasm"
+ }
+ },
+ "function2": {
+ "name": "function2",
+ "instance_num": 1,
+ "vm_config": {
+ "engine": "wasmtime",
+ "path": "demo/faas/code/golang/server/function_2.wasm"
+ }
+ }
}
-}
-
```
Tip: we also support `wasmer` as the `engine` value in `vm_config`.
diff --git a/docs/i18n/en-US/code.json b/docs/i18n/en-US/code.json
deleted file mode 100644
index 77e305f432..0000000000
--- a/docs/i18n/en-US/code.json
+++ /dev/null
@@ -1,420 +0,0 @@
-{
- "theme.ErrorPageContent.title": {
- "message": "page crashed",
- "description": "The title of the fallback page when the page crashed"
- },
- "theme.BackToTopButton.buttonAriaLabel": {
- "message": "回到顶部",
- "description": "The ARIA label for the back to top button"
- },
- "theme.blog.paginator.navAriaLabel": {
- "message": "博文列表分页导航",
- "description": "The ARIA label for the blog pagination"
- },
- "theme.blog.paginator.newerEntries": {
- "message": "较新的博文",
- "description": "The label used to navigate to the newer blog posts page (previous page)"
- },
- "theme.blog.paginator.olderEntries": {
- "message": "较旧的博文",
- "description": "The label used to navigate to the older blog posts page (next page)"
- },
- "theme.blog.archive.title": {
- "message": "历史博文",
- "description": "The page & hero title of the blog archive page"
- },
- "theme.blog.archive.description": {
- "message": "历史博文",
- "description": "The page & hero description of the blog archive page"
- },
- "theme.blog.post.paginator.navAriaLabel": {
- "message": "博文分页导航",
- "description": "The ARIA label for the blog posts pagination"
- },
- "theme.blog.post.paginator.newerPost": {
- "message": "The new post",
- "description": "The blog post button label to navigate to the newer/previous post"
- },
- "theme.blog.post.paginator.olderPost": {
- "message": "The old post",
- "description": "The blog post button label to navigate to the older/next post"
- },
- "theme.blog.post.plurals": {
- "message": "{count} Post",
- "description": "Pluralized label for \"{count} posts\". Use as much plural forms (separated by \"|\") as your language support (see https://www.unicode.org/cldr/cldr-aux/charts/34/supplemental/language_plural_rules.html)"
- },
- "theme.blog.tagTitle": {
- "message": "{nPosts} 含有标签「{tagName}」",
- "description": "The title of the page for a blog tag"
- },
- "theme.tags.tagsPageLink": {
- "message": "查看所有标签",
- "description": "The label of the link targeting the tag list page"
- },
- "theme.colorToggle.ariaLabel": {
- "message": "切换浅色/暗黑模式(当前为{mode})",
- "description": "The ARIA label for the navbar color mode toggle"
- },
- "theme.colorToggle.ariaLabel.mode.dark": {
- "message": "暗黑模式",
- "description": "The name for the dark color mode"
- },
- "theme.colorToggle.ariaLabel.mode.light": {
- "message": "浅色模式",
- "description": "The name for the light color mode"
- },
- "theme.docs.DocCard.categoryDescription.plurals": {
- "message": "{count} 个项目",
- "description": "The default description for a category card in the generated index about how many items this category includes"
- },
- "theme.docs.breadcrumbs.navAriaLabel": {
- "message": "页面路径",
- "description": "The ARIA label for the breadcrumbs"
- },
- "theme.docs.paginator.navAriaLabel": {
- "message": "文件选项卡",
- "description": "The ARIA label for the docs pagination"
- },
- "theme.docs.paginator.previous": {
- "message": "pre page",
- "description": "The label used to navigate to the previous doc"
- },
- "theme.docs.paginator.next": {
- "message": "next page",
- "description": "The label used to navigate to the next doc"
- },
- "theme.docs.tagDocListPageTitle.nDocsTagged": {
- "message": "{count} 篇文档带有标签",
- "description": "Pluralized label for \"{count} docs tagged\". Use as much plural forms (separated by \"|\") as your language support (see https://www.unicode.org/cldr/cldr-aux/charts/34/supplemental/language_plural_rules.html)"
- },
- "theme.docs.tagDocListPageTitle": {
- "message": "{nDocsTagged}「{tagName}」",
- "description": "The title of the page for a docs tag"
- },
- "theme.docs.versionBadge.label": {
- "message": "版本:{versionLabel}"
- },
- "theme.docs.versions.unreleasedVersionLabel": {
- "message": "此为 {siteTitle} {versionLabel} 版尚未发行的文档。",
- "description": "The label used to tell the user that he's browsing an unreleased doc version"
- },
- "theme.docs.versions.unmaintainedVersionLabel": {
- "message": "此为 {siteTitle} {versionLabel} 版的文档,现已不再积极维护。",
- "description": "The label used to tell the user that he's browsing an unmaintained doc version"
- },
- "theme.docs.versions.latestVersionSuggestionLabel": {
- "message": "最新的文档请参阅 {latestVersionLink} ({versionLabel})。",
- "description": "The label used to tell the user to check the latest version"
- },
- "theme.docs.versions.latestVersionLinkLabel": {
- "message": "latest version",
- "description": "The label used for the latest version suggestion link label"
- },
- "theme.common.headingLinkTitle": {
- "message": "{heading}的直接链接",
- "description": "Title for link to heading"
- },
- "theme.common.editThisPage": {
- "message": "Edit this page",
- "description": "The link label to edit the current page"
- },
- "theme.lastUpdated.atDate": {
- "message": "于 {date} ",
- "description": "The words used to describe on which date a page has been last updated"
- },
- "theme.lastUpdated.byUser": {
- "message": "由 {user} ",
- "description": "The words used to describe by who the page has been last updated"
- },
- "theme.lastUpdated.lastUpdatedAtBy": {
- "message": "最后{byUser}{atDate}更新",
- "description": "The sentence used to display when a page has been last updated, and by who"
- },
- "theme.navbar.mobileVersionsDropdown.label": {
- "message": "选择版本",
- "description": "The label for the navbar versions dropdown on mobile view"
- },
- "theme.NotFound.title": {
- "message": "404 page",
- "description": "The title of the 404 page"
- },
- "theme.tags.tagsListLabel": {
- "message": "tag:",
- "description": "The label alongside a tag list"
- },
- "theme.AnnouncementBar.closeButtonAriaLabel": {
- "message": "关闭",
- "description": "The ARIA label for close button of announcement bar"
- },
- "theme.admonition.caution": {
- "message": "warn",
- "description": "The default label used for the Caution admonition (:::caution)"
- },
- "theme.admonition.danger": {
- "message": "danger",
- "description": "The default label used for the Danger admonition (:::danger)"
- },
- "theme.admonition.info": {
- "message": "info",
- "description": "The default label used for the Info admonition (:::info)"
- },
- "theme.admonition.note": {
- "message": "note",
- "description": "The default label used for the Note admonition (:::note)"
- },
- "theme.admonition.tip": {
- "message": "tip",
- "description": "The default label used for the Tip admonition (:::tip)"
- },
- "theme.admonition.warning": {
- "message": "注意",
- "description": "The default label used for the Warning admonition (:::warning)"
- },
- "theme.blog.sidebar.navAriaLabel": {
- "message": "最近博文导航",
- "description": "The ARIA label for recent posts in the blog sidebar"
- },
- "theme.CodeBlock.copied": {
- "message": "复制成功",
- "description": "The copied button label on code blocks"
- },
- "theme.CodeBlock.copyButtonAriaLabel": {
- "message": "复制代码到剪贴板",
- "description": "The ARIA label for copy code blocks button"
- },
- "theme.CodeBlock.copy": {
- "message": "复制",
- "description": "The copy button label on code blocks"
- },
- "theme.CodeBlock.wordWrapToggle": {
- "message": "切换自动换行",
- "description": "The title attribute for toggle word wrapping button of code block lines"
- },
- "theme.DocSidebarItem.expandCategoryAriaLabel": {
- "message": "展开侧边栏分类 '{label}'",
- "description": "The ARIA label to expand the sidebar category"
- },
- "theme.DocSidebarItem.collapseCategoryAriaLabel": {
- "message": "折叠侧边栏分类 '{label}'",
- "description": "The ARIA label to collapse the sidebar category"
- },
- "theme.NavBar.navAriaLabel": {
- "message": "主导航",
- "description": "The ARIA label for the main navigation"
- },
- "theme.navbar.mobileLanguageDropdown.label": {
- "message": "选择语言",
- "description": "The label for the mobile language switcher dropdown"
- },
- "theme.NotFound.p1": {
- "message": "我们找不到您要找的页面。",
- "description": "The first paragraph of the 404 page"
- },
- "theme.NotFound.p2": {
- "message": "请联系原始链接来源网站的所有者,并告知他们链接已损坏。",
- "description": "The 2nd paragraph of the 404 page"
- },
- "theme.TOCCollapsible.toggleButtonLabel": {
- "message": "Overview",
- "description": "The label used by the button on the collapsible TOC component"
- },
- "theme.blog.post.readingTime.plurals": {
- "message": "Read will take {readingTime} min",
- "description": "Pluralized label for \"{readingTime} min read\". Use as much plural forms (separated by \"|\") as your language support (see https://www.unicode.org/cldr/cldr-aux/charts/34/supplemental/language_plural_rules.html)"
- },
- "theme.blog.post.readMore": {
- "message": "Read More",
- "description": "The label used in blog post item excerpts to link to full blog posts"
- },
- "theme.blog.post.readMoreLabel": {
- "message": "Read {title} ",
- "description": "The ARIA label for the link to full blog posts from excerpts"
- },
- "theme.docs.breadcrumbs.home": {
- "message": "Home Page",
- "description": "The ARIA label for the home page in the breadcrumbs"
- },
- "theme.docs.sidebar.collapseButtonTitle": {
- "message": "收起侧边栏",
- "description": "The title attribute for collapse button of doc sidebar"
- },
- "theme.docs.sidebar.collapseButtonAriaLabel": {
- "message": "收起侧边栏",
- "description": "The title attribute for collapse button of doc sidebar"
- },
- "theme.docs.sidebar.navAriaLabel": {
- "message": "文档侧边栏",
- "description": "The ARIA label for the sidebar navigation"
- },
- "theme.docs.sidebar.closeSidebarButtonAriaLabel": {
- "message": "关闭导航栏",
- "description": "The ARIA label for close button of mobile sidebar"
- },
- "theme.docs.sidebar.toggleSidebarButtonAriaLabel": {
- "message": "切换导航栏",
- "description": "The ARIA label for hamburger menu button of mobile navigation"
- },
- "theme.docs.sidebar.expandButtonTitle": {
- "message": "展开侧边栏",
- "description": "The ARIA label and title attribute for expand button of doc sidebar"
- },
- "theme.docs.sidebar.expandButtonAriaLabel": {
- "message": "展开侧边栏",
- "description": "The ARIA label and title attribute for expand button of doc sidebar"
- },
- "theme.navbar.mobileSidebarSecondaryMenu.backButtonLabel": {
- "message": "← 回到主菜单",
- "description": "The label of the back button to return to main menu, inside the mobile navbar sidebar secondary menu (notably used to display the docs sidebar)"
- },
- "theme.ErrorPageContent.tryAgain": {
- "message": "重试",
- "description": "The label of the button to try again rendering when the React error boundary captures an error"
- },
- "theme.common.skipToMainContent": {
- "message": "跳到主要内容",
- "description": "The skip to content label used for accessibility, allowing to rapidly navigate to main content with keyboard tab/enter navigation"
- },
- "theme.tags.tagsPageTitle": {
- "message": "Tag",
- "description": "The title of the tag list page"
- },
- "theme.unlistedContent.title": {
- "message": "Unlisted",
- "description": "The unlisted content banner title"
- },
- "theme.unlistedContent.message": {
- "message": "此页面未列出。搜索引擎不会对其索引,只有拥有直接链接的用户才能访问。",
- "description": "The unlisted content banner message"
- },
- "theme.SearchBar.seeAll": {
- "message": "See all {count} results"
- },
- "theme.SearchPage.documentsFound.plurals": {
- "message": "One document found|{count} documents found",
- "description": "Pluralized label for \"{count} documents found\". Use as much plural forms (separated by \"|\") as your language support (see https://www.unicode.org/cldr/cldr-aux/charts/34/supplemental/language_plural_rules.html)"
- },
- "theme.SearchPage.existingResultsTitle": {
- "message": "Search results for \"{query}\"",
- "description": "The search page title for non-empty query"
- },
- "theme.SearchPage.emptyResultsTitle": {
- "message": "Search the documentation",
- "description": "The search page title for empty query"
- },
- "theme.SearchPage.inputPlaceholder": {
- "message": "Type your search here",
- "description": "The placeholder for search page input"
- },
- "theme.SearchPage.inputLabel": {
- "message": "Search",
- "description": "The ARIA label for search page input"
- },
- "theme.SearchPage.algoliaLabel": {
- "message": "Search by Algolia",
- "description": "The ARIA label for Algolia mention"
- },
- "theme.SearchPage.noResultsText": {
- "message": "No results were found",
- "description": "The paragraph for empty search result"
- },
- "theme.SearchPage.fetchingNewResults": {
- "message": "Fetching new results...",
- "description": "The paragraph for fetching new search results"
- },
- "theme.SearchBar.label": {
- "message": "Search",
- "description": "The ARIA label and placeholder for search button"
- },
- "theme.SearchModal.searchBox.resetButtonTitle": {
- "message": "Clear the query",
- "description": "The label and ARIA label for search box reset button"
- },
- "theme.SearchModal.searchBox.cancelButtonText": {
- "message": "Cancel",
- "description": "The label and ARIA label for search box cancel button"
- },
- "theme.SearchModal.startScreen.recentSearchesTitle": {
- "message": "Recent",
- "description": "The title for recent searches"
- },
- "theme.SearchModal.startScreen.noRecentSearchesText": {
- "message": "No recent searches",
- "description": "The text when no recent searches"
- },
- "theme.SearchModal.startScreen.saveRecentSearchButtonTitle": {
- "message": "Save this search",
- "description": "The label for save recent search button"
- },
- "theme.SearchModal.startScreen.removeRecentSearchButtonTitle": {
- "message": "Remove this search from history",
- "description": "The label for remove recent search button"
- },
- "theme.SearchModal.startScreen.favoriteSearchesTitle": {
- "message": "Favorite",
- "description": "The title for favorite searches"
- },
- "theme.SearchModal.startScreen.removeFavoriteSearchButtonTitle": {
- "message": "Remove this search from favorites",
- "description": "The label for remove favorite search button"
- },
- "theme.SearchModal.errorScreen.titleText": {
- "message": "Unable to fetch results",
- "description": "The title for error screen of search modal"
- },
- "theme.SearchModal.errorScreen.helpText": {
- "message": "You might want to check your network connection.",
- "description": "The help text for error screen of search modal"
- },
- "theme.SearchModal.footer.selectText": {
- "message": "to select",
- "description": "The explanatory text of the action for the enter key"
- },
- "theme.SearchModal.footer.selectKeyAriaLabel": {
- "message": "Enter key",
- "description": "The ARIA label for the Enter key button that makes the selection"
- },
- "theme.SearchModal.footer.navigateText": {
- "message": "to navigate",
- "description": "The explanatory text of the action for the Arrow up and Arrow down key"
- },
- "theme.SearchModal.footer.navigateUpKeyAriaLabel": {
- "message": "Arrow up",
- "description": "The ARIA label for the Arrow up key button that makes the navigation"
- },
- "theme.SearchModal.footer.navigateDownKeyAriaLabel": {
- "message": "Arrow down",
- "description": "The ARIA label for the Arrow down key button that makes the navigation"
- },
- "theme.SearchModal.footer.closeText": {
- "message": "to close",
- "description": "The explanatory text of the action for Escape key"
- },
- "theme.SearchModal.footer.closeKeyAriaLabel": {
- "message": "Escape key",
- "description": "The ARIA label for the Escape key button that close the modal"
- },
- "theme.SearchModal.footer.searchByText": {
- "message": "Search by",
- "description": "The text explain that the search is making by Algolia"
- },
- "theme.SearchModal.noResultsScreen.noResultsText": {
- "message": "No results for",
- "description": "The text explains that there are no results for the following search"
- },
- "theme.SearchModal.noResultsScreen.suggestedQueryText": {
- "message": "Try searching for",
- "description": "The text for the suggested query when no results are found for the following search"
- },
- "theme.SearchModal.noResultsScreen.reportMissingResultsText": {
- "message": "Believe this query should return results?",
- "description": "The text for the question where the user thinks there are missing results"
- },
- "theme.SearchModal.noResultsScreen.reportMissingResultsLinkText": {
- "message": "Let us know.",
- "description": "The text for the link to report missing results"
- },
- "theme.SearchModal.placeholder": {
- "message": "Search docs",
- "description": "The placeholder of the input of the DocSearch pop-up modal"
- }
-}
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-blog/code/flowcontrol/flowcontrol_code_analyze.md b/docs/i18n/en-US/docusaurus-plugin-content-blog/code/flowcontrol/flowcontrol_code_analyze.md
deleted file mode 100644
index 2751e4c5cd..0000000000
--- a/docs/i18n/en-US/docusaurus-plugin-content-blog/code/flowcontrol/flowcontrol_code_analyze.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Source Parsing: Layer-7 Traffic Governance, Interface Rate Limiting
-
-> Author Profile:
-> A member of an open source community, committed to embracing open source and hoping to exchange ideas with fellow open-source enthusiasts for mutual progress and growth.
->
-> Writing Time: 20 April 2022
-
-## Overview
-
-The purpose of this document is to analyze the implementation of interface rate limiting in MOSN's layer-7 traffic governance.
-
-## Prerequisites
-
-This document refers to the following version of the code:
-
-[https://github.com/mosn/mosn](https://github.com/mosn/mosn)
-
-Mosn d11b5a638a137045c2fb03d9d8ca36ecc0def11 (develop branch)
-
-## Source analysis
-
-### Overall analysis
-
-Reference to [https://mosn.io/docs/concept/extensions/](https://mosn.io/docs/concept/extensions/)
-
-Mosn Stream Filter Extension
-
-![01.png](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*tSn4SpIkAa4AAAAAAAAAAAAAARQnAQ)
-
-### Code location: [flowcontrol code](https://github.com/mosn/mosn/tree/master/pkg/filter/stream/flowcontrol)
-
-### stream_filter_factory.go analysis
-
-This file is a factory for creating StreamFilter instances.
-
-Some constants are defined as default values:
-
-![02.png](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*PAWCTL6MS40AAAAAAAAAAAAAARQnAQ)
-
-It defines the flow control config struct, which loads the yaml definition and parses it to produce the corresponding functions:
-
-![03.png](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*Ua32SokhILEAAAAAAAAAAAAAARQnAQ)
-
-In init(), the filter name and its corresponding constructor are registered into the filter factory map:
-
-![04.png](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*kb3qRqWnqxYAAAAAAAAAAAAAARQnAQ)
-
-The key function is createRpcFlowControlFilterFactory, which produces the RPC flow control filter factory:
-
-![05.png](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*u5rkS54zkgAAAAAAAAAAAAAAARQnAQ)
-
-Before looking at stream_filter.go, let's see how the factory produces flow control filters:
-
-![06.png](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*cj0nT5O69OYAAAAAAAAAAAAAARQnAQ)
-
-Flow control filters are added into a chain structure and take effect in sequential order.
-
-The CreateFilterChain method adds multiple filters into the chain structure:
-
-![07.png](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*a8ClQ76odpEAAAAAAAAAAAAAARQnAQ)
-
-We can see that this interface is implemented by a wide variety of factory types, including the flow control factory we are studying today:
-
-![08.png](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*sBDbT44r2vgAAAAAAAAAAAAAARQnAQ)
-
-### stream_filter.go analysis
-
-![09.png](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*wsw3RKe1GH8AAAAAAAAAAAAAARQnAQ)
-
-## Overall process
-
-Finally, let's review the overall process:
-
-1. Starting from the init() function of stream_filter_factory.go, the program registers createRpcFlowControlFilterFactory.
-
-2. Mosn creates a filter chain (code location: [factory.go](https://github.com/mosn/mosn/tree/master/pkg/streamfilter/factory.go)) by calling CreateFilterChain in a loop to add all filters into the chain structure, including the flow control filter we are studying today.
-
-3. The limiter is created by NewStreamFilter().
-
-4. When a request arrives, OnReceive() is invoked and the decision is ultimately made by sentinel: whether the threshold has been reached, and whether to release traffic (StreamFilterContinue) or stop it (StreamFilterStop).
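-
-The release/stop decision in step 4 can be sketched as a minimal stream filter. This is an illustrative sketch only: the `StreamFilterStatus` values mirror MOSN's, while the simple counter is a hypothetical stand-in for sentinel's threshold check.
-
-```go
-package main
-
-import "fmt"
-
-type StreamFilterStatus int
-
-const (
-	StreamFilterContinue StreamFilterStatus = iota
-	StreamFilterStop
-)
-
-// flowControlFilter mimics the shape of a flow control stream filter:
-// OnReceive consults a limiter and either releases or rejects the request.
-type flowControlFilter struct {
-	remaining int // hypothetical stand-in for sentinel's threshold
-}
-
-func (f *flowControlFilter) OnReceive() StreamFilterStatus {
-	if f.remaining <= 0 {
-		return StreamFilterStop // threshold reached: stop traffic
-	}
-	f.remaining--
-	return StreamFilterContinue // release traffic
-}
-
-func main() {
-	f := &flowControlFilter{remaining: 2}
-	for i := 0; i < 3; i++ {
-		fmt.Println(f.OnReceive() == StreamFilterContinue) // true, true, false
-	}
-}
-```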
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-blog/code/layotto-rpc/index.md b/docs/i18n/en-US/docusaurus-plugin-content-blog/code/layotto-rpc/index.md
deleted file mode 100644
index 02dd0e66c0..0000000000
--- a/docs/i18n/en-US/docusaurus-plugin-content-blog/code/layotto-rpc/index.md
+++ /dev/null
@@ -1,946 +0,0 @@
-# Layotto Source Parsing — Processing RPC requests
-
-> This paper analyzes Layotto's RPC processing flow, taking Dubbo JSON RPC as an example.
->
-> by: [Wang Zhilong](https://github.com/rayowang) | 21 April 2022
-
-- [Overview](#overview)
-- [Source analysis](#source-analysis)
- - [0x00 Layotto initializes RPC](#0x00-layotto-initializes-rpc)
- - [0x01 Dubbo-go-sample client sends a request](#0x01-dubbo-go-sample-client-sends-a-request)
- - [0x02 Mosn EventLoop reads and processes request data](#0x02-mosn-eventloop-reads-and-processes-request-data)
- - [0x03 Grpc Server as NetworkFilter processes the request](#0x03-grpc-server-as-networkfilter-processes-the-request)
- - [0x04 Layotto sends the RPC request and writes to the local virtual connection](#0x04-layotto-sends-the-rpc-request-and-writes-to-the-local-virtual-connection)
- - [0x05 Mosn reads Remote and executes Filter and proxy forwarding](#0x05-mosn-reads-remote-and-executes-filter-and-proxy-forwarding)
- - [0x06 Dubbo-go-sample server receives the request and returns a response](#0x06-dubbo-go-sample-server-receives-the-request-and-returns-a-response)
- - [0x07 Mosn framework handles the response and writes back to the remote virtual connection](#0x07-mosn-framework-handles-the-response-and-writes-back-to-the-remote-virtual-connection)
- - [0x08 Layotto receives the RPC response and reads the local virtual connection](#0x08-layotto-receives-the-rpc-response-and-reads-the-local-virtual-connection)
- - [0x09 Grpc Server processes data frames and returns to the client](#0x09-grpc-server-processes-data-frames-and-returns-to-the-client)
- - [0x10 Dubbo-go-sample client receives the response](#0x10-dubbo-go-sample-client-receives-the-response)
-- [Summary](#summary)
-
-## Overview
-
-Layotto, as a collection of distributed primitives for multiple languages that is distinct from the network-proxy style service mesh, uses standard protocols and provides a clear and rich semantic API, of which the RPC API is one part. Through the RPC API, app developers can interact with the local Layotto instance of an application that also uses the sidecar architecture, and thereby indirectly call methods of different services, using the built-in capabilities for distributed tracing and diagnosis, traffic control, error handling, secure links and so on. Layotto's RPC handling is designed on top of the Grpc handler and, besides Http/Grpc communication with other services, can use the X-Protocol for secure and reliable communication. As shown in the following code, the RPC API interface is in line with Dapr's, and RPC calls can be made through the Grpc interface InvokeService.
-
-```go
-type DaprClient interface {
- // Invokes a method on a remote Dapr app.
- InvokeService(ctx context.Context, in *InvokeServiceRequest, opts ...grpc.CallOption) (*v1.InvokeResponse, error)
- ...
-}
-```
-
-## Source analysis
-
-For ease of understanding, the Layotto RPC processing flow is analyzed from the outside in and back out again, following the flow through the source code: from the Client, down through the layers to the Server receiving the request and returning a response, then back up layer by layer to the Client, split into 10 steps. Since the details of the Grpc Client and Server handshake and interaction are not the focus of this paper, their analysis is relatively brief, while the other steps are covered in more detail; readers can jump from the table of contents directly to the step of interest.
-
-Note: based on commit hash 1d2bed68c3b2372c34a12aeed41be125a4fdd15a
-
-### 0x00 Layotto initializes RPC
-
-The Layotto startup process involves a large number of procedures, of which only the RPC-related initialization described below is analyzed. Because Layotto is built on Mosn, it starts from the main function, where the urfave/cli library invokes Mosn's StageManager, which in turn initializes the GrpcServer in Mosn's NetworkFilter, as follows.
-
-```go
-mosn.io/mosn/pkg/stagemanager.(*StageManager).runInitStage at stage_manager.go
-=>
-mosn.io/mosn/pkg/mosn.(*Mosn).initServer at mosn.go
-=>
-mosn.io/mosn/pkg/filter/network/grpc.(*grpcServerFilterFactory).Init at factory.go
-=>
-mosn.io/mosn/pkg/filter/network/grpc.(*Handler).New at factory.go
-// Create a Grpc server for the given address. The same address returns the same server, which can only be started once
-func (s *Handler) New(addr string, conf json.RawMessage, options ...grpc.ServerOption) (*registerServerWrapper, error) {
- s.mutex.Lock()
- defer s.mutex.Unlock()
- sw, ok := s.servers[addr]
- if ok {
- return sw, nil
- }
- ln, err := NewListener(addr)
- if err != nil {
- log.DefaultLogger.Errorf("create a listener failed: %v", err)
- return nil, err
- }
- // call NewRuntimeGrpcServer
- srv, err := s.f(conf, options...)
- if err != nil {
- log.DefaultLogger.Errorf("create a registered server failed: %v", err)
- return nil, err
- }
- sw = &registerServerWrapper{
- server: srv,
- ln: ln,
- }
- s.servers[addr] = sw
- return sw, nil
-}
-=>
-main.NewRuntimeGrpcServer at main.go
-=>
-mosn.io/layotto/pkg/runtime.(*MosnRuntime).initRuntime at runtime.go
-=>
-mosn.io/layotto/pkg/runtime.(*MosnRuntime).initRpcs at runtime.go
-=>
-mosn.io/layotto/components/rpc/invoker/mosn.(*mosnInvoker).Init at mosninvoker.go
-func (m *mosnInvoker) Init(conf rpc.RpcConfig) error {
- var config mosnConfig
- if err := json.Unmarshal(conf.Config, &config); err != nil {
- return err
- }
-
- // initialize the Filters executed before RPC calls
- for _, before := range config.Before {
- m.cb.AddBeforeInvoke(before)
- }
-
- // initialize the Filters executed after RPC calls
- for _, after := range config.After {
- m.cb.AddAfterInvoke(after)
- }
-
- if len(config.Channel) == 0 {
- return errors.New("missing channel config")
- }
-
- // initialize the channel, protocol and corresponding port used to communicate with Mosn
- channel, err := channel.GetChannel(config.Channel[0])
- if err != nil {
- return err
- }
- m.channel = channel
- return nil
-}
-...
-// after the series of initializations, start the Grpc Server in grpcServerFilter
-mosn.io/mosn/pkg/filter/network/grpc.(*grpcServerFilterFactory).Init at factory.go
-func (f *grpcServerFilterFactory) Init(param interface{}) error {
- ...
- opts := []grpc.ServerOption{
- grpc.UnaryInterceptor(f.UnaryInterceptorFilter),
- grpc.StreamInterceptor(f.StreamInterceptorFilter),
- }
- // with the above initialization done, the Grpc registerServerWrapper is initialized
- sw, err := f.handler.New(addr, f.config.GrpcConfig, opts...)
- if err != nil {
- return err
- }
- // start the Grpc server
- sw.Start(f.config.GracefulStopTimeout)
- f.server = sw
- log.DefaultLogger.Debugf("grpc server filter initialized success")
- return nil
-}
-...
-// after runInitStage, StageManager enters runStartStage to start Mosn
-func (stm *StageManager) runStartStage() {
- st := time.Now()
- stm.SetState(Starting)
- for _, f := range stm.startupStages {
- f(stm.app)
- }
-
- stm.wg.Add(1)
- // start Mosn after all startup stages are done
- stm.app.Start()
- ...
-}
-```
-
-### 0x01 Dubbo-go-sample client sends a request
-
-Following the [Dubbo Json Rpc Example](https://mosn.io/layotto/#/en/start/rpc/dub_json_rpc):
-
-```shell
-go run demo/rpc/dubbo_json_rpc/dubbo_json_client/client.go -d '{"jsonrpc": "2.0", "method":"GetUser", "params":["A003"],"id":9527}'
-```
-
-The app uses Layotto's Grpc API InvokeService to initiate an RPC call; after filling in the data and establishing the connection, the data is dispatched to Layotto via SendMsg in the Grpc clientStream, as follows.
-
-```go
-
-func main() {
- data := flag.String("d", `{"jsonrpc":"2.0","method":"GetUser","params":["A003"],"id":9527}`, "-d")
- flag.Parse()
-
- conn, err := grpc.Dial("localhost:34904", grpc.WithInsecure())
- if err != nil {
- log.Fatal(err)
- }
-
- cli := runtimev1pb.NewRuntimeClient(conn)
- ctx, cancel := context.WithCancel(context.TODO())
- defer cancel()
- // make the RPC call through the Grpc interface InvokeService
- resp, err := cli.InvokeService(
- ctx,
- // initiate the Grpc request with runtimev1pb.InvokeServiceRequest
- &runtimev1pb.InvokeServiceRequest{
- // ID of the server interface to request
- Id: "org.apache.dubbo.samples.UserProvider",
- Message: &runtimev1pb.CommonInvokeRequest{
- // method name of the interface to request
- Method: "GetUser",
- ContentType: "",
- Data: &anypb.Any{Value: []byte(*data)},
- HttpExtension: &runtimev1pb.HTTPExtension{Verb: runtimev1pb.HTTPExtension_POST},
- },
- },
- )
- if err != nil {
- log.Fatal(err)
- }
-
- fmt.Println(string(resp.Data.GetValue()))
-}
-=>
-mosn.io/layotto/spec/proto/runtime/v1.(*runtimeClient).InvokeService at runtime.pb.go
-=>
-google.golang.org/grpc.(*ClientConn).Invoke at call.go
-=>
-google.golang.org/grpc.(*clientStream).SendMsg at stream.go
-=>
-google.golang.org/grpc.(*csAttempt).sendMsg at stream.go
-=>
-google.golang.org/grpc/internal/transport.(*http2Client).Write at http2_client.go
-```
-
-### 0x02 Mosn EventLoop reads and processes request data
-
-As mentioned above, the kernel of Layotto is Mosn, so when network connection data arrives, it is first read and written at the L4 network layer in Mosn, as follows.
-
-```go
-mosn.io/mosn/pkg/network.(*listener).accept at listener.go
-=>
-mosn.io/mosn/pkg/server.(*activeListener).OnAccept at handler.go
-=>
-mosn.io/mosn/pkg/server.(*activeRawConn).ContinueFilterChain at handler.go
-=>
-mosn.io/mosn/pkg/server.(*activeListener).newConnection at handler.go
-=>
-mosn.io/mosn/pkg/network.(*connection).Start at connection.go
-=>
-mosn.io/mosn/pkg/network.(*connection).startRWLoop at connection.go
-func (c *connection) startRWLoop(lctx context.Context) {
- c.internalLoopStarted = true
-
- utils.GoWithRecover(func() {
- // read goroutine
- c.startReadLoop()
- }, func(r interface{}) {
- c.Close(api.NoFlush, api.LocalClose)
- })
-
- if c.checkUseWriteLoop() {
- c.useWriteLoop = true
- utils.GoWithRecover(func() {
- // write goroutine
- c.startWriteLoop()
- }, func(r interface{}) {
- c.Close(api.NoFlush, api.LocalClose)
- })
- }
-}
-```
-
-In the startRWLoop method we can see that two separate goroutines are started to handle read and write operations on the connection: startReadLoop and startWriteLoop. In startReadLoop, the data read at the network layer is handed to the filterManager filter chain for processing, as follows.
-
-```go
-mosn.io/mosn/pkg/network.(*connection).doRead at connection.go
-=>
-mosn.io/mosn/pkg/network.(*connection).onRead at connection.go
-=>
-mosn.io/mosn/pkg/network.(*filterManager).OnRead at filtermanager.go
-=>
-mosn.io/mosn/pkg/network.(*filterManager).onContinueReading at filtermanager.go
-func (fm *filterManager) onContinueReading(filter *activeReadFilter) {
- var index int
- var uf *activeReadFilter
-
- if filter != nil {
- index = filter.index + 1
- }
-
- // filterManager iterates over the filters to process the data
- for ; index < len(fm.upstreamFilters); index++ {
- uf = fm.upstreamFilters[index]
- uf.index = index
- // call the initialization method OnNewConnection on filters that are not yet initialized; in this example it is func (f *grpcFilter) OnNewConnection() api.FilterStatus (sends the grpc connection to the Listener to wake up the Listener's Accept)
- if !uf.initialized {
- uf.initialized = true
-
- status := uf.filter.OnNewConnection()
-
- if status == api.Stop {
- return
- }
- }
-
- buf := fm.conn.GetReadBuffer()
-
- if buf != nil && buf.Len() > 0 {
- // notify the corresponding filter to process
- status := uf.filter.OnData(buf)
-
- if status == api.Stop {
- return
- }
- }
- }
-}
-=>
-mosn.io/mosn/pkg/filter/network/grpc.(*grpcFilter).OnData at filter.go
-=>
-mosn.io/mosn/pkg/filter/network/grpc.(*grpcFilter).dispatch at filter.go
-func (f *grpcFilter) dispatch(buf buffer.IoBuffer) {
- if log.DefaultLogger.GetLogLevel() >= log.DEBUG {
- log.DefaultLogger.Debugf("grpc get datas: %d", buf.Len())
- }
- // send data to wake up the connection read
- f.conn.Send(buf)
- if log.DefaultLogger.GetLogLevel() >= log.DEBUG {
- log.DefaultLogger.Debugf("read dispatch finished")
- }
-}
-```
-
-### 0x03 Grpc Server as NetworkFilter processes the request
-
-The data read from the raw connection in the previous phase then enters the Grpc Serve handling. The Serve method uses the net.Listener to listen, starting a new goroutine for each new connection (handleRawConn), and sets up an Http2-based transport for the RPC call, as follows.
-
-```go
-google.golang.org/grpc.(*Server).handleRawConn at server.go
-func (s *Server) handleRawConn(lisAddr string, rawConn net.Conn) {
- // check the server state
- if s.quit.HasFired() {
- rawConn.Close()
- return
- }
- rawConn.SetDeadline(time.Now().Add(s.opts.connectionTimeout))
- conn, authInfo, err := s.useTransportAuthenticator(rawConn)
- if err != nil {
- ...
- }
- // HTTP2 handshake: create the Http2Server and exchange initialization info with the client, such as frame and window sizes
- st := s.newHTTP2Transport(conn, authInfo)
- if st == nil {
- return
- }
-
- rawConn.SetDeadline(time.Time{})
- if !s.addConn(lisAddr, st) {
- return
- }
- // start a goroutine for stream processing
- go func() {
- s.serveStreams(st)
- s.removeConn(lisAddr, st)
- }()
- ...
-}
-=>
-google.golang.org/grpc.(*Server).serveStreams at server.go
-=>
-google.golang.org/grpc.(*Server).handleStream at server.go
-func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Stream, trInfo *traceInfo) {
- // find the FullMethod to call, in this example spec.proto.runtime.v1.Runtime/InvokeService
- sm := stream.Method()
- if sm != "" && sm[0] == '/' {
- sm = sm[1:]
- }
- ...
- service := sm[:pos]
- method := sm[pos+1:]
-
- // find the corresponding serviceInfo object in the registered service list
- srv, knownService := s.services[service]
- if knownService {
- // find the unary request's md (MethodDesc) by method name; in this demo it is mosn.io/layotto/spec/proto/runtime/v1._Runtime_InvokeService_Handler
- if md, ok := srv.methods[method]; ok {
- s.processUnaryRPC(t, stream, srv, md, trInfo)
- return
- }
- // streaming request
- if sd, ok := srv.streams[method]; ok {
- s.processStreamingRPC(t, stream, srv, sd, trInfo)
- return
- }
- }
- ...
-=>
-google.golang.org/grpc.(*Server).processUnaryRPC at server.go
-=>
-mosn.io/layotto/spec/proto/runtime/v1._Runtime_InvokeService_Handler at runtime.pb.go
-=>
-google.golang.org/grpc.chainUnaryServerInterceptors at server.go
-=>
-// server-side unary interceptor, used to invoke Mosn's streamfilter
-mosn.io/mosn/pkg/filter/network/grpc.(*grpcServerFilterFactory).UnaryInterceptorFilter at factory.go
-=>
-google.golang.org/grpc.getChainUnaryHandler at server.go
-// recursively build the chained UnaryHandler
-func getChainUnaryHandler(interceptors []UnaryServerInterceptor, curr int, info *UnaryServerInfo, finalHandler UnaryHandler) UnaryHandler {
- if curr == len(interceptors)-1 {
- return finalHandler
- }
-
- return func(ctx context.Context, req interface{}) (interface{}, error) {
- // finalHandler is mosn.io/layotto/spec/proto/runtime/v1._Runtime_InvokeService_Handler
- return interceptors[curr+1](ctx, req, info, getChainUnaryHandler(interceptors, curr+1, info, finalHandler))
- }
-}
-```
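-
-The recursive chaining in getChainUnaryHandler can be illustrated with a minimal sketch. The `Handler` and `Interceptor` types here are simplified, hypothetical stand-ins for grpc's UnaryHandler and UnaryServerInterceptor:
-
-```go
-package main
-
-import "fmt"
-
-type Handler func(req string) string
-type Interceptor func(req string, next Handler) string
-
-// chain builds a single Handler out of a list of interceptors:
-// each interceptor wraps the chain formed by the remaining ones.
-func chain(interceptors []Interceptor, final Handler) Handler {
-	if len(interceptors) == 0 {
-		return final
-	}
-	return func(req string) string {
-		return interceptors[0](req, chain(interceptors[1:], final))
-	}
-}
-
-func main() {
-	logging := func(req string, next Handler) string { return "log(" + next(req) + ")" }
-	auth := func(req string, next Handler) string { return "auth(" + next(req) + ")" }
-	h := chain([]Interceptor{logging, auth}, func(req string) string { return req })
-	fmt.Println(h("req")) // prints log(auth(req))
-}
-```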
-
-### 0x04 Layotto sends the RPC request and writes to the local virtual connection
-
-Continuing from Runtime_InvokeService_Handler in 0x03, the call is converted from the Grpc default API to the Dapr API and enters the lightweight RPC framework that Layotto provides on top of Mosn, as follows.
-
-```go
-mosn.io/layotto/spec/proto/runtime/v1._Runtime_InvokeService_Handler at runtime.pb.go
-=>
-mosn.io/layotto/pkg/grpc/default_api.(*api).InvokeService at api.go
-=>
-mosn.io/layotto/pkg/grpc/dapr.(*daprGrpcAPI).InvokeService at dapr_api.go
-=>
-mosn.io/layotto/components/rpc/invoker/mosn.(*mosnInvoker).Invoke at mosninvoker.go
-// request the Mosn base and return the response
-func (m *mosnInvoker) Invoke(ctx context.Context, req *rpc.RPCRequest) (resp *rpc.RPCResponse, err error) {
- defer func() {
- if r := recover(); r != nil {
- err = fmt.Errorf("[runtime][rpc]mosn invoker panic: %v", r)
- log.DefaultLogger.Errorf("%v", err)
- }
- }()
-
- // 1. if the timeout is 0, set the default 3000ms timeout
- if req.Timeout == 0 {
- req.Timeout = 3000
- }
- req.Ctx = ctx
- log.DefaultLogger.Debugf("[runtime][rpc]request %+v", req)
- // 2. trigger the custom logic before the request is executed
- req, err = m.cb.BeforeInvoke(req)
- if err != nil {
- log.DefaultLogger.Errorf("[runtime][rpc]before filter error %s", err.Error())
- return nil, err
- }
- // 3. the core call, analyzed in detail below
- resp, err = m.channel.Do(req)
- if err != nil {
- log.DefaultLogger.Errorf("[runtime][rpc]error %s", err.Error())
- return nil, err
- }
- resp.Ctx = req.Ctx
- // 4. trigger the custom logic after the request returns
- resp, err = m.cb.AfterInvoke(resp)
- if err != nil {
- log.DefaultLogger.Errorf("[runtime][rpc]after filter error %s", err.Error())
- }
- return resp, err
-}
-=>
-mosn.io/layotto/components/rpc/invoker/mosn/channel.(*httpChannel).Do at httpchannel.go
-func (h *httpChannel) Do(req *rpc.RPCRequest) (*rpc.RPCResponse, error) {
- // 1. set the context timeout using the default timeout from the previous stage
- timeout := time.Duration(req.Timeout) * time.Millisecond
- ctx, cancel := context.WithTimeout(req.Ctx, timeout)
- defer cancel()
-
- // 2. get a connection, starting a readloop goroutine for the read/write interaction between Layotto and Mosn (see the analysis below)
- conn, err := h.pool.Get(ctx)
- if err != nil {
- return nil, err
- }
-
- // 3. set the timeout for writing data to the connection
- hstate := conn.state.(*hstate)
- deadline, _ := ctx.Deadline()
- if err = conn.SetWriteDeadline(deadline); err != nil {
- hstate.close()
- h.pool.Put(conn, true)
- return nil, common.Error(common.UnavailebleCode, err.Error())
- }
- // 4. since Layotto was configured at init time to talk to Mosn over Http, an Http request is constructed here
- httpReq := h.constructReq(req)
- defer fasthttp.ReleaseRequest(httpReq)
-
- // write the request body to the virtual connection with the help of fasthttp
- if _, err = httpReq.WriteTo(conn); err != nil {
- hstate.close()
- h.pool.Put(conn, true)
- return nil, common.Error(common.UnavailebleCode, err.Error())
- }
-
- // 5. construct a fasthttp.Response struct to read and parse the data returned via hstate, and set the read timeout
- httpResp := &fasthttp.Response{}
- hstate.reader.SetReadDeadline(deadline)
-
- // this blocks until Mosn returns data; for the flow after the readloop goroutine reads the data returned by Mosn, see stage 0x08 below
- if err = httpResp.Read(bufio.NewReader(hstate.reader)); err != nil {
- hstate.close()
- h.pool.Put(conn, true)
- return nil, common.Error(common.UnavailebleCode, err.Error())
- }
- h.pool.Put(conn, false)
- ...
-}
-=>
-mosn.io/layotto/components/rpc/invoker/mosn/channel.(*connPool).Get at connpool.go
-// Get is get wrapConn by context.Context
-func (p *connPool) Get(ctx context.Context) (*wrapConn, error) {
- if err := p.waitTurn(ctx); err != nil {
- return nil, err
- }
-
- p.mu.Lock()
- // 1. get a connection from the connection pool
- if ele := p.free.Front(); ele != nil {
- p.free.Remove(ele)
- p.mu.Unlock()
- wc := ele.Value.(*wrapConn)
- if !wc.isClose() {
- return wc, nil
- }
- } else {
- p.mu.Unlock()
- }
-
- // 2. create a new connection
- c, err := p.dialFunc()
- if err != nil {
- p.freeTurn()
- return nil, err
- }
- wc := &wrapConn{Conn: c}
- if p.stateFunc != nil {
- wc.state = p.stateFunc()
- }
- // 3. start an independent readloop goroutine to read the data returned by Mosn
- if p.onDataFunc != nil {
- utils.GoWithRecover(func() {
- p.readloop(wc)
- }, nil)
- }
- return wc, nil
-}
-=>
-```
-
-The creation of a new connection in the second step above deserves attention: it calls the dialFunc func() (net.Conn, error) set up during the init phase. Because the configured interaction with Mosn uses the Http protocol, this is newHttpChannel; other channels such as Bolt and Dubbo are currently supported as well.
-
-```go
-mosn.io/layotto/components/rpc/invoker/mosn/channel.newHttpChannel at httpchannel.go
-// newHttpChannel is used to create rpc.Channel according to ChannelConfig
-func newHttpChannel(config ChannelConfig) (rpc.Channel, error) {
- hc := &httpChannel{}
- // connection pool to reduce connection creation overhead, defined in mosn.io/layotto/components/rpc/invoker/mosn/channel/connpool.go
- hc.pool = newConnPool(
- config.Size,
- // dialFunc
- func() (net.Conn, error) {
- _, _, err := net.SplitHostPort(config.Listener)
- if err == nil {
- return net.Dial("tcp", config.Listener)
- }
- //create a pair of virtual connections (net.Pipe); Layotto holds local and Mosn holds remote. Layotto writes to local and Mosn receives the data; Mosn reads from remote, executes the filter logic and proxies the request, then writes the response back to remote; finally Layotto reads from local and obtains the response
- local, remote := net.Pipe()
- localTcpConn := &fakeTcpConn{c: local}
- remoteTcpConn := &fakeTcpConn{c: remote}
- // acceptFunc is a closure defined in mosn.io/layotto/components/rpc/invoker/mosn/channel.go, which listens on the remote virtual connection
- if err := acceptFunc(remoteTcpConn, config.Listener); err != nil {
- return nil, err
- }
- // the goroutine model is:
- // request goroutine ---> localTcpConn ---> mosn
- // ^ |
- // | |
- // | |
- // hstate <-- readloop goroutine <------
- return localTcpConn, nil
- },
- // stateFunc
- func() interface{} {
- // hstate is the pipe through which the readloop goroutine communicates with the request goroutine: a read/write pair of net.Conn; the request goroutine reads from the reader net.Conn, and the readloop goroutine writes to the writer net.Conn
- s := &hstate{}
- s.reader, s.writer = net.Pipe()
- return s
- },
- hc.onData,
- hc.cleanup,
- )
- return hc, nil
-}
-```
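-
-The virtual connection pair above can be demonstrated in isolation with a minimal sketch: bytes written to one end of net.Pipe can be read from the other end, entirely in process, without a real socket. (net.Pipe is synchronous, so the write must happen in its own goroutine.)
-
-```go
-package main
-
-import (
-	"fmt"
-	"net"
-)
-
-func main() {
-	local, remote := net.Pipe()
-
-	// writer goroutine: plays the role of Layotto writing the request
-	go func() {
-		local.Write([]byte("hello mosn"))
-		local.Close()
-	}()
-
-	// reader: plays the role of Mosn reading from the remote end
-	buf := make([]byte, 32)
-	n, _ := remote.Read(buf)
-	fmt.Println(string(buf[:n])) // prints hello mosn
-}
-```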
-
-### 0x05 Mosn reads Remote and executes Filter and proxy forwarding
-
-(1) Similar to 0x02, the filterManager executes the filter processing phase, in which proxy forwarding is performed by proxy, with the following code.
-
-```go
-...
-mosn.io/mosn/pkg/network.(*filterManager).onContinueReading at filtermanager.go
-=>
-mosn.io/mosn/pkg/proxy.(*proxy).OnData at proxy.go
-func (p *proxy) OnData(buf buffer.IoBuffer) api.FilterStatus {
- if p.fallback {
- return api.Continue
- }
-
- if p.serverStreamConn == nil {
- ...
- p.serverStreamConn = stream.CreateServerStreamConnection(p.context, proto, p.readCallbacks.Connection(), p)
- }
- //dispatch the data to the decoder for the corresponding protocol; since the request here is POST /org.apache.dubbo.samples.UserProvider HTTP/1.1, it goes to mosn.io/mosn/pkg/stream/http.(*serverStreamConnection).serve at stream.go
- p.serverStreamConn.Dispatch(buf)
-
- return api.Stop
-}
-=>
-```
-
-(2) serverStreamConnection.serve listens for requests and hands them to the downstream OnReceive, as described below.
-
-```go
-mosn.io/mosn/pkg/stream/http.(*serverStream).handleRequest at stream.go
-func (s *serverStream) handleRequest(ctx context.Context) {
- if s.request != nil {
- // set non-header info in request-line, like method, uri
- injectCtxVarFromProtocolHeaders(ctx, s.header, s.request.URI())
- hasData := true
- if len(s.request.Body()) == 0 {
- hasData = false
- }
-
- if hasData {
- //here it enters the downstream OnReceive
- s.receiver.OnReceive(s.ctx, s.header, buffer.NewIoBufferBytes(s.request.Body()), nil)
- } else {
- s.receiver.OnReceive(s.ctx, s.header, nil, nil)
- }
- }
-}
-=>
-mosn.io/mosn/pkg/proxy.(*downStream).OnReceive at downstream.go
-func (s *downStream) OnReceive(ctx context.Context, headers types.HeaderMap, data types.IoBuffer, trailers types.HeaderMap) {
- ...
- var task = func() {
- ...
-
- phase := types.InitPhase
- for i := 0; i < 10; i++ {
- s.cleanNotify()
-
- phase = s.receive(s.context, id, phase)
- ...
- }
- }
- }
-
- if s.proxy.serverStreamConn.EnableWorkerPool() {
- if s.proxy.workerpool != nil {
- // use the worker pool for current proxy
- s.proxy.workerpool.Schedule(task)
- } else {
- // use the global shared worker pool
- pool.ScheduleAuto(task)
- }
- return
- }
-
- task()
- return
-
-}
-```
-
-(3) After being scheduled by ScheduleAuto above, the receive phases of the downstream Stream are processed, the upstream request is handled and appended through the network layer, the data is eventually sent by connection.Write, and the flow enters the WaitNotify phase to block, as detailed below.
-
-```go
-mosn.io/mosn/pkg/sync.(*workerPool).ScheduleAuto at workerpool.go
-=>
-mosn.io/mosn/pkg/sync.(*workerPool).spawnWorker at workerpool.go
-=>
-mosn.io/mosn/pkg/proxy.(*downStream).receive at downstream.go
-=>
-InitPhase=>DownFilter=>MatchRoute=>DownFilterAfterRoute=>ChooseHost=>DownFilterAfterChooseHost=>DownRecvHeader=>DownRecvData
-=>
-mosn.io/mosn/pkg/proxy.(*downStream).receiveData at downstream.go
-=>
-mosn.io/mosn/pkg/proxy.(*upstreamRequest).appendData at upstream.go
-=>
-mosn.io/mosn/pkg/stream/http.(*clientStream).doSend at stream.go
-=>
-github.com/valyala/fasthttp.(*Request).WriteTo at http.go
-=>
-mosn.io/mosn/pkg/stream/http.(*streamConnection).Write at stream.go
->
-mosn.io/mosn/pkg/network.(*connection).Write at connection.go
-=>
-mosn.io/mosn/pkg/proxy.(*downStream).receive at downstream.go
-func (s *downStream) receive(ctx context.Context, id uint32, phase types.Phase) types.Phase {
- for i := 0; i <= int(types.End-types.InitPhase); i++ {
- s.phase = phase
-
- switch phase {
- ...
- case types.WaitNotify:
- s.printPhaseInfo(phase, id)
- if p, err := s.waitNotify(id); err != nil {
- return p
- }
-
- if log.Proxy.GetLogLevel() >= log.DEBUG {
- log.Proxy.Debugf(s.context, "[proxy] [downstream] OnReceive send downstream response %+v", s.downstreamRespHeaders)
- }
- ...
-}
-=>
-func (s *downStream) waitNotify(id uint32) (phase types.Phase, err error) {
- if atomic.LoadUint32(&s.ID) != id {
- return types.End, types.ErrExit
- }
-
- if log.Proxy.GetLogLevel() >= log.DEBUG {
- log.Proxy.Debugf(s.context, "[proxy] [downstream] waitNotify begin %p, proxyId = %d", s, s.ID)
- }
- select {
- // block and wait
- case <-s.notify:
- }
- return s.processError(id)
-}
-```
-
-### 0x06 Dubbo-go-sample server receives the request and returns a response
-
-This part is handled by the dubbo-go-sample server; we will not go into it here. The log messages are posted below, and interested readers can trace the source code themselves.
-
-```
-[2022-04-18/21:03:18 github.com/apache/dubbo-go-samples/rpc/jsonrpc/go-server/pkg.(*UserProvider2).GetUser: user_provider2.go: 53] userID: "A003"
-[2022-04-18/21:03:18 github.com/apache/dubbo-go-samples/rpc/jsonrpc/go-server/pkg.(*UserProvider2).GetUser: user_provider2.go: 56] rsp:&pkg.User{ID:"113", Name:"Moorse", Age:30, Birth:703391943, Sex:"MAN"}
-```
-
-### 0x07 Mosn framework handles the response and writes back to the remote virtual connection
-
-Following stage (3) of 0x05 above, the response logic enters the UpRecvData phase of the receive loop, and through a series of processing steps finally writes the response back to the remote virtual connection created in 0x04, as follows.
-
-```go
-mosn.io/mosn/pkg/proxy.(*downStream).receive at downstream.go
-func (s *downStream) waitNotify(id uint32) (phase types.Phase, err error) {
- if atomic.LoadUint32(&s.ID) != id {
- return types.End, types.ErrExit
- }
-
- if log.Proxy.GetLogLevel() >= log.DEBUG {
- log.Proxy.Debugf(s.context, "[proxy] [downstream] waitNotify begin %p, proxyId = %d", s, s.ID)
- }
-	// the response has arrived; unblock
- select {
- case <-s.notify:
- }
- return s.processError(id)
-}
-=>
-UpFilter
-=>
-UpRecvHeader
-=>
-func (s *downStream) receive(ctx context.Context, id uint32, phase types.Phase) types.Phase {
- for i := 0; i <= int(types.End-types.InitPhase); i++ {
- s.phase = phase
-
- switch phase {
- ...
- case types.UpRecvData:
- if s.downstreamRespDataBuf != nil {
- s.printPhaseInfo(phase, id)
- s.upstreamRequest.receiveData(s.downstreamRespTrailers == nil)
- if p, err := s.processError(id); err != nil {
- return p
- }
- }
- ...
-}
-=>
-mosn.io/mosn/pkg/proxy.(*upstreamRequest).receiveData at upstream.go
-=>
-mosn.io/mosn/pkg/proxy.(*downStream).onUpstreamData at downstream.go
-=>
-mosn.io/mosn/pkg/proxy.(*downStream).appendData at downstream.go
-=>
-mosn.io/mosn/pkg/stream/http.(*serverStream).AppendData at stream.go
-=>
-mosn.io/mosn/pkg/stream/http.(*serverStream).endStream at stream.go
-=>
-mosn.io/mosn/pkg/stream/http.(*serverStream).doSend at stream.go
-=>
-github.com/valyala/fasthttp.(*Response).WriteTo at http.go
-=>
-github.com/valyala/fasthttp.writeBufio at http.go
-=>
-github.com/valyala/fasthttp.(*statsWriter).Write at http.go
-=>
-mosn.io/mosn/pkg/stream/http.(*streamConnection).Write at stream.go
-```
-
-### 0x08 Layotto receives the RPC response and reads it from the Local Virtual Connection
-
-The readloop IO goroutine created in 0x04 above is woken up, reads the response data from the connection to Mosn, and forwards it to the hstate pipe to hand it back to the requesting goroutine, as follows.
-
-```go
-mosn.io/layotto/components/rpc/invoker/mosn/channel.(*connPool).readloop at connpool.go
-// readloop is a loop that reads from the connection and then executes onDataFunc
-func (p *connPool) readloop(c *wrapConn) {
- var err error
-
- defer func() {
- c.close()
- if p.cleanupFunc != nil {
- p.cleanupFunc(c, err)
- }
- }()
-
- c.buf = buffer.NewIoBuffer(defaultBufSize)
- for {
-		// read data from the connection
- n, readErr := c.buf.ReadOnce(c)
- if readErr != nil {
- err = readErr
- if readErr == io.EOF {
- log.DefaultLogger.Debugf("[runtime][rpc]connpool readloop err: %s", readErr.Error())
- } else {
- log.DefaultLogger.Errorf("[runtime][rpc]connpool readloop err: %s", readErr.Error())
- }
- }
-
- if n > 0 {
-			// onDataFunc delegates the data to hstate for processing
- if onDataErr := p.onDataFunc(c); onDataErr != nil {
- err = onDataErr
- log.DefaultLogger.Errorf("[runtime][rpc]connpool onData err: %s", onDataErr.Error())
- }
- }
-
- if err != nil {
- break
- }
-
- if c.buf != nil && c.buf.Len() == 0 && c.buf.Cap() > maxBufSize {
- c.buf.Free()
- c.buf.Alloc(defaultBufSize)
- }
- }
-}
-=>
-mosn.io/layotto/components/rpc/invoker/mosn/channel.(*httpChannel).onData at httpchannel.go
-=>
-mosn.io/layotto/components/rpc/invoker/mosn/channel.(*hstate).onData at httpchannel.go
-=>
-net.(*pipe).Write at pipe.go
-=>
-mosn.io/layotto/components/rpc/invoker/mosn/channel.(*httpChannel).Do at httpchannel.go
-func (h *httpChannel) Do(req *rpc.RPCRequest) (*rpc.RPCResponse, error) {
- ...
-	// following stage 0x04 above: after mosn returns data, read from hstate what the readloop goroutine forwarded from mosn
- if err = httpResp.Read(bufio.NewReader(hstate.reader)); err != nil {
- hstate.close()
- h.pool.Put(conn, true)
- return nil, common.Error(common.UnavailebleCode, err.Error())
- }
- h.pool.Put(conn, false)
-
-	// get the fasthttp body and parse the status code; on failure, return the error message and status code
- body := httpResp.Body()
- if httpResp.StatusCode() != http.StatusOK {
- return nil, common.Errorf(common.UnavailebleCode, "http response code %d, body: %s", httpResp.StatusCode(), string(body))
- }
-
-	// 6. convert the result into an rpc.RPCResponse and return it
- rpcResp := &rpc.RPCResponse{
- ContentType: string(httpResp.Header.ContentType()),
- Data: body,
- Header: map[string][]string{},
- }
- httpResp.Header.VisitAll(func(key, value []byte) {
- rpcResp.Header[string(key)] = []string{string(value)}
- })
-	return rpcResp, nil
-}
-```
-
-### 0x09 gRPC Server processes data frames and returns them to the client
-
-gRPC does not write data directly to the connection; instead, a dedicated goroutine loops over a control buffer, fetching frames from that cache structure and writing them back to the client, as follows.
-
-```go
-google.golang.org/grpc/internal/transport.NewServerTransport at http2_server.go
-func NewServerTransport(conn net.Conn, config *ServerConfig) (_ ServerTransport, err error) {
- ...
-	// asynchronous loop in a separate goroutine
- go func() {
- t.loopy = newLoopyWriter(serverSide, t.framer, t.controlBuf, t.bdpEst)
- t.loopy.ssGoAwayHandler = t.outgoingGoAwayHandler
- if err := t.loopy.run(); err != nil {
- if logger.V(logLevel) {
- logger.Errorf("transport: loopyWriter.run returning. Err: %v", err)
- }
- }
- t.conn.Close()
- t.controlBuf.finish()
- close(t.writerDone)
- }()
- go t.keepalive()
- return t, nil
-}
-=>
-google.golang.org/grpc/internal/transport.(*loopyWriter).run at controlbuf.go
-=>
-google.golang.org/grpc/internal/transport.(*bufWriter).Flush at http_util.go
-=>
-mosn.io/mosn/pkg/filter/network/grpc.(*Connection).Write at conn.go
-=>
-mosn.io/mosn/pkg/network.(*connection).Write at connection.go
-=>
-mosn.io/mosn/pkg/network.(*connection).writeDirectly at connection.go
-=>
-mosn.io/mosn/pkg/network.(*connection).doWrite at connection.go
-```
-
-### 0x10 Dubbo-go-sample client receives the response
-
-The request sent in 0x01 above blocks on the client's underlying gRPC read; once Layotto returns data through the processing layers above, the client's blocked read IO is woken up, as follows.
-
-```go
-google.golang.org/grpc.(*ClientConn).Invoke at call.go
-=>
-google.golang.org/grpc.invoke at call.go
-=>
-google.golang.org/grpc.(*clientStream).RecvMsg at stream.go
-=>
-google.golang.org/grpc.(*clientStream).withRetry at stream.go
-=>
-google.golang.org/grpc.(*csAttempt).recvMsg at stream.go
-=>
-google.golang.org/grpc.recvAndDecompress at rpc_util.go
-=>
-google.golang.org/grpc.recv at rpc_util.go
-=>
-google.golang.org/grpc.(*parser).recvMsg at rpc_util.go
-func (p *parser) recvMsg(maxReceiveMessageSize int) (pf payloadFormat, msg []byte, err error) {
-	if _, err := p.r.Read(p.header[:]); err != nil {
-		return 0, nil, err
-	}
-	...
-}
-```
-
-The data finally returned:
-
-```json
-{"jsonrpc": "2.0", "id":9527, "result":{"id":"113", "name":"Moorse", "age":30,"time":703394193,"sex":"MAN"}}
-```
-
-## Summary
-
-The Layotto RPC process touches knowledge of gRPC, Dapr, Mosn and more, and the overall flow is lengthy. Still, it becomes clearer and simpler once you view Layotto as a lightweight RPC framework built on Mosn's abstractions, which makes it innovative and worth further study. This analysis of the Layotto RPC request path was done under time constraints and does not cover everything in depth; if you spot any defects, feel free to contact rayo.wangzl@gmail.com. We also hope more people will take part in source-code analysis and the open-source community, learning and making progress together.
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-blog/code/start_process/start_process.md b/docs/i18n/en-US/docusaurus-plugin-content-blog/code/start_process/start_process.md
deleted file mode 100644
index 83225298ac..0000000000
--- a/docs/i18n/en-US/docusaurus-plugin-content-blog/code/start_process/start_process.md
+++ /dev/null
@@ -1,291 +0,0 @@
-# Source parsing layotto startup process
-
-> Author Intro to:
-> Libin, https://github.com/ZLBer
->
-> Writing: 4 May 2022
-
-- [Overview](#overview)
-- [Source analysis](#source-analysis)
-  - [1. cmd analysis](#1-cmd-analysis)
-  - [2. Callback function NewRuntimeGrpcServer analysis](#2-callback-function-newruntimegrpcserver-analysis)
-  - [3. runtime analysis](#3-runtime-analysis)
-- [Summary](#summary)
-
-## Overview
-
-Layotto is "parasitic" in MOSN: starting Layotto is in effect starting MOSN, and MOSN calls back into Layotto during startup to bring Layotto up.
-
-## Source analysis
-
-Everything starts from the command line: layotto start -c `configpath`
-
-### 1. cmd analysis
-
-The init function in main runs first:
-
-```
-func init() {
-	//pass layotto's initialization function to mosn so that mosn calls it back during startup
- mgrpc.RegisterServerHandler("runtime", NewRuntimeGrpcServer)
- ....
-}
-```
-
-cmd action starts to execute:
-
-```
- Action: func(c *cli.Context) error {
- app := mosn.NewMosn()
-		//stagemanager manages each stage of mosn startup; stage functions can be appended, such as ParamsParsedStage, InitStage, PreStartStage and AfterStartStage below
-		//the config path is passed to mosn here; everything below is mosn logic
- stm := stagemanager.InitStageManager(c, c.String("config"), app)
- stm.AppendParamsParsedStage(ExtensionsRegister)
- stm.AppendParamsParsedStage(func(c *cli.Context) {
- err := featuregate.Set(c.String("feature-gates"))
- if err != nil {
- os.Exit(1)
- }
-		})
- stm.AppendInitStage(mosn.DefaultInitStage)
- stm.AppendPreStartStage(mosn.DefaultPreStartStage)
- stm.AppendStartStage(mosn.DefaultStartStage)
-		//layotto's health check mechanism is added here
- stm.AppendAfterStartStage(SetActuatorAfterStart)
- stm.Run()
- // wait mosn finished
- stm.WaitFinish()
- return nil
- },
-```
-
-### 2. Callback function NewRuntimeGrpcServer analysis
-
-MOSN calls back NewRuntimeGrpcServer when it launches. data is the raw, unparsed configuration and opts are the grpc server options; it returns a grpc server.
-
-```
-func NewRuntimeGrpcServer(data json.RawMessage, opts ...grpc.ServerOption) (mgrpc.RegisteredServer, error) {
-	// parse the raw configuration file into a struct
-	cfg, err := runtime.ParseRuntimeConfig(data)
-	// create the layotto runtime, which holds the registries and instances of all components
-	rt := runtime.NewMosnRuntime(cfg)
-	// 3. the runtime starts running
-	server, err := rt.Run(
-		...
-		// 4. add the initialization functions of all components
-		// taking the File component as an example, the NewXXX() constructors are added to the component factory
-		runtime.WithFileFactory(
-			file.NewFileFactory("aliyun.oss", alicloud.NewAliCloudOSS),
-			file.NewFileFactory("minio", minio.NewMinioOss),
-			file.NewFileFactory("aws.s3", aws.NewAwsOss),
-			file.NewFileFactory("tencent.oss", tencentcloud.NewTencentCloudOSS),
-			file.NewFileFactory("local", local.NewLocalStore),
-			file.NewFileFactory("qiniu.oss", qiniu.NewQiniuOSS),
-		),
-		...
-	)
-	return server, err
-}
-
-```
-
-### 3. runtime analysis
-
-Look at the structure of MosnRuntime to see what the runtime aggregates:
-
-```
-type MosnRuntime struct {
-	// includes the components' config
- runtimeConfig *MosnRuntimeConfig
- info *info.RuntimeInfo
- srv mgrpc.RegisteredServer
-	// component registries, used to register and create components; they hold the components' NewXXX() functions
- helloRegistry hello.Registry
- configStoreRegistry configstores.Registry
- rpcRegistry rpc.Registry
- pubSubRegistry runtime_pubsub.Registry
- stateRegistry runtime_state.Registry
- lockRegistry runtime_lock.Registry
- sequencerRegistry runtime_sequencer.Registry
- fileRegistry file.Registry
- bindingsRegistry mbindings.Registry
- secretStoresRegistry msecretstores.Registry
- customComponentRegistry custom.Registry
- hellos map[string]hello.HelloService
-	// the component instances of each kind
- configStores map[string]configstores.Store
- rpcs map[string]rpc.Invoker
- pubSubs map[string]pubsub.PubSub
- states map[string]state.Store
- files map[string]file.File
- locks map[string]lock.LockStore
- sequencers map[string]sequencer.Store
- outputBindings map[string]bindings.OutputBinding
- secretStores map[string]secretstores.SecretStore
- customComponent map[string]map[string]custom.Component
- AppCallbackConn *rawGRPC.ClientConn
- errInt ErrInterceptor
- started bool
-	//initialization functions
- initRuntimeStages []initRuntimeStage
-}
-```
-
-The logic of runtime's Run function is as follows:
-
-```
-func (m *MosnRuntime) Run(opts ...Option) (mgrpc.RegisteredServer, error) {
-	// mark as started
-	m.started = true
-	// create the runtime options
-	o := newRuntimeOptions()
-	// apply the options passed in earlier; this is what actually registers the component factories
-	for _, opt := range opts {
-		opt(o)
-	}
-	// initialize the components
-	if err := m.initRuntime(o); err != nil {
-		return nil, err
-	}
-
-	// initialize grpc and assign the APIs
-	var grpcOpts []grpc.Option
-	if o.srvMaker != nil {
-		grpcOpts = append(grpcOpts, grpc.WithNewServer(o.srvMaker))
-	}
-	var apis []grpc.GrpcAPI
-	ac := &grpc.ApplicationContext{
-		m.runtimeConfig.AppManagement.AppId,
-		m.hellos,
-		m.configStores,
-		m.rpcs,
-		m.pubSubs,
-		m.states,
-		m.files,
-		m.locks,
-		m.sequencers,
-		m.sendToOutputBinding,
-		m.secretStores,
-		m.customComponent,
-	}
-	// generate the API of each component with its factory
-	for _, apiFactory := range o.apiFactorys {
-		api := apiFactory(ac)
-		// init the GrpcAPI
-		if err := api.Init(m.AppCallbackConn); err != nil {
-			return nil, err
-		}
-		apis = append(apis, api)
-	}
-	// pass the API interfaces and configuration to grpc
-	grpcOpts = append(grpcOpts,
-		grpc.WithGrpcOptions(o.options...),
-		grpc.WithGrpcAPIs(apis),
-	)
-	// start grpc
-	var err error = nil
-	m.srv, err = grpc.NewGrpcServer(grpcOpts...)
-	return m.srv, err
-}
-
-```
-
-The component initialization function initRuntime:
-
-```
-func (m *MosnRuntime) initRuntime(r *runtimeOptions) error {
-	st := time.Now()
-	if len(m.initRuntimeStages) == 0 {
-		m.initRuntimeStages = append(m.initRuntimeStages, DefaultInitRuntimeStage)
-	}
-	// call DefaultInitRuntimeStage
-	for _, f := range m.initRuntimeStages {
-		err := f(r, m)
-		if err != nil {
-			return err
-		}
-	}
-	...
-	return nil
-}
-```
-
-DefaultInitRuntimeStage holds the component initialization logic, calling the init method of each component:
-
-```
-func DefaultInitRuntimeStage(o *runtimeOptions, m *MosnRuntime) error {
- ...
-	//initialize the various components: config/state/file/lock/sequencer/secret, etc.
- if err := m.initCustomComponents(o.services.custom); err != nil {
- return err
- }
- if err := m.initHellos(o.services.hellos...); err != nil {
- return err
- }
- if err := m.initConfigStores(o.services.configStores...); err != nil {
- return err
- }
- if err := m.initStates(o.services.states...); err != nil {
- return err
- }
- if err := m.initRpcs(o.services.rpcs...); err != nil {
- return err
- }
- if err := m.initOutputBinding(o.services.outputBinding...); err != nil {
- return err
- }
- if err := m.initPubSubs(o.services.pubSubs...); err != nil {
- return err
- }
- if err := m.initFiles(o.services.files...); err != nil {
- return err
- }
- if err := m.initLocks(o.services.locks...); err != nil {
- return err
- }
- if err := m.initSequencers(o.services.sequencers...); err != nil {
- return err
- }
- if err := m.initInputBinding(o.services.inputBinding...); err != nil {
- return err
- }
- if err := m.initSecretStores(o.services.secretStores...); err != nil {
- return err
- }
- return nil
-}
-```
-
-Taking the file component as an example, look at its initialization function:
-
-```
-func (m *MosnRuntime) initFiles(files ...file.FileFactory) error {
-	// register the configured components
-	m.fileRegistry.Register(files...)
-	for name, config := range m.runtimeConfig.Files {
-		// create a new component instance
-		c, err := m.fileRegistry.Create(name)
-		if err != nil {
-			m.errInt(err, "create files component %s failed", name)
-			return err
-		}
-		if err := c.Init(context.TODO(), &config); err != nil {
-			m.errInt(err, "init files component %s failed", name)
-			return err
-		}
-		// assign it to the runtime
-		m.files[name] = c
-	}
-	return nil
-}
-```
-
-At this point MOSN, grpc and Layotto have all started, and the components' code logic can be invoked through the grpc interface.
-
-## Summary
-
-Looking at the whole startup process, Layotto integrates with MOSN to start: it parses the configuration file, instantiates the components declared there, and exposes them through grpc APIs.
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-blog/code/webassembly/index.md b/docs/i18n/en-US/docusaurus-plugin-content-blog/code/webassembly/index.md
deleted file mode 100644
index 6953d27192..0000000000
--- a/docs/i18n/en-US/docusaurus-plugin-content-blog/code/webassembly/index.md
+++ /dev/null
@@ -1,634 +0,0 @@
-# Layotto Source Parsing — WebAssembly
-
-> This article analyses the implementation and application of WASM in Layotto.
->
-> by:[Wang Zhilong](https://github.com/rayowang) | 18 May 2022
-
-- [Overview](#overview)
-- [Source analysis](#source-analysis)
-  - [Framework initialization](#framework-initialization)
-  - [Workflow](#workflow)
-  - [FaaS mode](#faas-mode)
-- [Summary](#summary)
-
-## Overview
-
-WebAssembly, abbreviated WASM, is a portable, compact binary format that runs in a sandboxed execution environment. Originally designed for high-performance applications in web browsers, it benefits from good isolation and security, multi-language support, fast cold starts, and the flexibility to be embedded in other applications for better extensibility; naturally, we can embed it into Layotto. Layotto supports loading compiled WASM files and interacting with them through the proxy_abi_version_0_2_0 version of the API;
-Layotto can also load and run WASM-packaged functions, and supports calls between functions and access to infrastructure. The Layotto community is also exploring compiling its components into WASM modules to strengthen isolation between modules. This article uses Layotto's official [quickstart](https://mosn.io/layotto/#/zh/start/wasm/start) example of accessing redis to analyze the implementation and application of WebAssembly in Layotto.
-
-## Source analysis
-
-Note: this analysis is based on commit hash f1cf350a52b5a1a0b3788a31681007a056e332ef.
-
-### Framework initialization
-
-Since the bottom layer of Layotto is Mosn, Layotto reuses Mosn's WASM extension framework, as shown in Figure 1, the Layotto & Mosn WASM framework [1].
-
-![mosn\_wasm\_ext\_framework\_module](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*jz4BSJmVQ3gAAAAAAAAAAAAAARQnAQ)
-
-
Figure 1 Layotto & Mosn WASM framework
-
-Among them, Manager is responsible for managing and dynamically updating WASM plugins; VM manages the WASM virtual machines, modules and instances; ABI serves as the application binary interface, providing the external interface [2].
-
-Here is a brief review of the following concepts:\
-[Proxy-Wasm](https://github.com/proxy-wasm): WebAssembly for Proxies, a proxy-agnostic ABI standard that defines, in terms of functions and callbacks, how proxies and WASM modules interact [3].\
-[proxy-wasm-go-sdk](https://github.com/tetratelabs/proxy-wasm-go-sdk): defines the interface for functions to access system resources and infrastructure services, based on [proxy-wasm/spec](https://github.com/proxy-wasm/spec); it integrates the Runtime API to add access to infrastructure.\
-[proxy-wasm-go-host](https://github.com/mosn/proxy-wasm-go-host): WebAssembly for Proxies (golang host implementation), the golang implementation of Proxy-Wasm, used to implement the Runtime ABI logic in Layotto.\
-VM: Virtual Machine. Runtime types include wasmtime, wasmer, V8, Lucet, WAMR and wasm3.
-
-1. First look at the stream filter configuration in the [quickstart example](https://mosn.io/layotto/#/start/wasm/start) below. Two WASM plugins can be seen, each started as a separate instance in a wasmer VM with its own configuration:
-
-```json
-  "stream_filters": [
-    {
-      "type": "Layotto",
-      "config": {
-        "function1": {
-          "name": "function1",  // plugin name
-          "instance_num": 1,    // number of sandbox instances
-          "vm_config": {
-            "engine": "wasmer", // virtual machine runtime type
-            "path": "demo/faas/code/golang/client/function_1.wasm" // wasm file path
-          }
-        },
-        "function2": {
-          "name": "function2",  // plugin name
-          "instance_num": 1,    // number of sandbox instances
-          "vm_config": {
-            "engine": "wasmer", // virtual machine runtime type
-            "path": "demo/faas/code/golang/server/function_2.wasm" // wasm file path
-          }
-        }
-      }
-    }
-  ]
-```
-
-Function1's primary logic is to receive an HTTP request, call function2 through the ABI, and return function2's result, as shown in the code below:
-
-```go
-func (ctx *httpHeaders) OnHttpRequestBody(bodySize int, endOfStream bool) types.Action {
-	// 1. get the request body
-	body, err := proxywasm.GetHttpRequestBody(0, bodySize)
-	if err != nil {
-		proxywasm.LogErrorf("GetHttpRequestBody failed: %v", err)
-		return types.ActionPause
-	}
-
-	// 2. parse the request param
-	bookName, err := getQueryParam(string(body), "name")
-	if err != nil {
-		proxywasm.LogErrorf("param not found: %v", err)
-		return types.ActionPause
-	}
-
-	// 3. request function2 through the ABI
-	inventories, err := proxywasm.InvokeService("id_2", "", bookName)
-	if err != nil {
-		proxywasm.LogErrorf("invoke service failed: %v", err)
-		return types.ActionPause
-	}
-
-	// 4. return the result
-	proxywasm.AppendHttpResponseBody([]byte("There are " + inventories + " inventories for " + bookName + "."))
-	return types.ActionContinue
-}
-```
-
-Function2's primary logic is to receive the HTTP request, query redis through the ABI, and return the value from redis, as shown in the code below:
-
-```go
-func (ctx *httpHeaders) OnHttpRequestBody(bodySize int, endOfStream bool) types.Action {
-	// 1. get the request body
-	body, err := proxywasm.GetHttpRequestBody(0, bodySize)
-	if err != nil {
-		proxywasm.LogErrorf("GetHttpRequestBody failed: %v", err)
-		return types.ActionPause
-	}
-	bookName := string(body)
-
-	// 2. get the state from redis by the specific key through the ABI
-	inventories, err := proxywasm.GetState("redis", bookName)
-	if err != nil {
-		proxywasm.LogErrorf("GetState failed: %v", err)
-		return types.ActionPause
-	}
-
-	// 3. return the result
-	proxywasm.AppendHttpResponseBody([]byte(inventories))
-	return types.ActionContinue
-}
-```
-
-2. The Manager component of the WASM framework in Figure 1 is initialized during Mosn's filter Init stage, as shown in the code below:
-
-```go
-// Create a proxy factory for WasmFilter
-func createProxyWasmFilterFactory(confs map[string]interface{}) (api.StreamFilterChainFactory, error) {
- factory := &FilterConfigFactory{
- config: make([]*filterConfigItem, 0, len(confs)),
- RootContextID: 1,
- plugins: make(map[string]*WasmPlugin),
- router: &Router{routes: make(map[string]*Group)},
- }
-
- for configID, confIf := range confs {
- conf, ok := confIf.(map[string]interface{})
- if !ok {
- log.DefaultLogger.Errorf("[proxywasm][factory] createProxyWasmFilterFactory config not a map, configID: %s", configID)
- return nil, errors.New("config not a map")
- }
- // Parse the wasm filter configuration
- config, err := parseFilterConfigItem(conf)
- if err != nil {
- log.DefaultLogger.Errorf("[proxywasm][factory] createProxyWasmFilterFactory fail to parse config, configID: %s, err: %v", configID, err)
- return nil, err
- }
-
- var pluginName string
- if config.FromWasmPlugin == "" {
- pluginName = utils.GenerateUUID()
-
- // The WASM plug-in configuration is initialized according to the stream filter configuration. VmConfig is vm_config, and InstanceNum is instance_num
- v2Config := v2.WasmPluginConfig{
- PluginName: pluginName,
- VmConfig: config.VmConfig,
- InstanceNum: config.InstanceNum,
- }
-
-            // The WasmManager instance manages the configuration of all plug-ins in a unified manner by managing the PluginWrapper object, providing add, delete, query and update capabilities. Continued in 3 below
- err = wasm.GetWasmManager().AddOrUpdateWasm(v2Config)
- if err != nil {
- config.PluginName = pluginName
- addWatchFile(config, factory)
- continue
- }
-
- addWatchFile(config, factory)
- } else {
- pluginName = config.FromWasmPlugin
- }
- config.PluginName = pluginName
-
- // PluginWrapper wraps the plug-in and configuration in AddOrUpdateWasm above to complete the initialization, which is pulled from sync.Map according to the plug-in name to manage and register the PluginHandler
- pw := wasm.GetWasmManager().GetWasmPluginWrapperByName(pluginName)
- if pw == nil {
- return nil, errors.New("plugin not found")
- }
-
- config.VmConfig = pw.GetConfig().VmConfig
- factory.config = append(factory.config, config)
-
- wasmPlugin := &WasmPlugin{
- pluginName: config.PluginName,
- plugin: pw.GetPlugin(),
- rootContextID: config.RootContextID,
- config: config,
- }
- factory.plugins[config.PluginName] = wasmPlugin
-        // Register PluginHandler to provide extended callback capability over the plug-in's life cycle, such as OnPluginStart when the plug-in starts and OnConfigUpdate when its config updates. Continued in 4 below
- pw.RegisterPluginHandler(factory)
- }
-
- return factory, nil
-}
-```
-
-3. Corresponding to the VM component of the WASM framework in Figure 1, NewWasmPlugin creates and initializes the WASM plugin, where VM, Module and Instance refer to the virtual machine, module and instance in WASM, as shown in the code below:
-
-```go
-func NewWasmPlugin(wasmConfig v2.WasmPluginConfig) (types.WasmPlugin, error) {
- // check instance num
- instanceNum := wasmConfig.InstanceNum
- if instanceNum <= 0 {
- instanceNum = runtime.NumCPU()
- }
-
- wasmConfig.InstanceNum = instanceNum
-
- // Get the wasmer compilation and execution engine according to the configuration
- vm := GetWasmEngine(wasmConfig.VmConfig.Engine)
- if vm == nil {
- log.DefaultLogger.Errorf("[wasm][plugin] NewWasmPlugin fail to get wasm engine: %v", wasmConfig.VmConfig.Engine)
- return nil, ErrEngineNotFound
- }
-
- // load wasm bytes
- var wasmBytes []byte
- if wasmConfig.VmConfig.Path != "" {
- wasmBytes = loadWasmBytesFromPath(wasmConfig.VmConfig.Path)
- } else {
- wasmBytes = loadWasmBytesFromUrl(wasmConfig.VmConfig.Url)
- }
-
- if len(wasmBytes) == 0 {
- log.DefaultLogger.Errorf("[wasm][plugin] NewWasmPlugin fail to load wasm bytes, config: %v", wasmConfig)
- return nil, ErrWasmBytesLoad
- }
-
- md5Bytes := md5.Sum(wasmBytes)
- newMd5 := hex.EncodeToString(md5Bytes[:])
- if wasmConfig.VmConfig.Md5 == "" {
- wasmConfig.VmConfig.Md5 = newMd5
- } else if newMd5 != wasmConfig.VmConfig.Md5 {
- log.DefaultLogger.Errorf("[wasm][plugin] NewWasmPlugin the hash(MD5) of wasm bytes is incorrect, config: %v, real hash: %s",
- wasmConfig, newMd5)
- return nil, ErrWasmBytesIncorrect
- }
-
- // Create the WASM module, which is the stateless binary code that has been compiled
- module := vm.NewModule(wasmBytes)
- if module == nil {
- log.DefaultLogger.Errorf("[wasm][plugin] NewWasmPlugin fail to create module, config: %v", wasmConfig)
- return nil, ErrModuleCreate
- }
-
- plugin := &wasmPluginImpl{
- config: wasmConfig,
- vm: vm,
- wasmBytes: wasmBytes,
- module: module,
- }
-
- plugin.SetCpuLimit(wasmConfig.VmConfig.Cpu)
- plugin.SetMemLimit(wasmConfig.VmConfig.Mem)
-
-    // Create the instances, which hold the module and runtime state; notably, proxywasm.RegisterImports is called here to register the import functions implemented by the user, such as proxy_invoke_service and proxy_get_state
-    actual := plugin.EnsureInstanceNum(wasmConfig.InstanceNum)
- if actual == 0 {
- log.DefaultLogger.Errorf("[wasm][plugin] NewWasmPlugin fail to ensure instance num, want: %v got 0", instanceNum)
- return nil, ErrInstanceCreate
- }
-
- return plugin, nil
-}
-```
-
-4. Corresponding to the ABI component of the WASM framework in Figure 1, the OnPluginStart method calls proxy-wasm-go-host to set up the corresponding ABI Exports, Imports and so on.
-
-```go
-// Execute the plugin of FilterConfigFactory
-func (f *FilterConfigFactory) OnPluginStart(plugin types.WasmPlugin) {
- plugin.Exec(func(instance types.WasmInstance) bool {
- wasmPlugin, ok := f.plugins[plugin.PluginName()]
- if !ok {
- log.DefaultLogger.Errorf("[proxywasm][factory] createProxyWasmFilterFactory fail to get wasm plugin, PluginName: %s",
- plugin.PluginName())
- return true
- }
-
-        // get the proxy_abi_version_0_2_0 version of the API for interacting with WASM
- a := abi.GetABI(instance, AbiV2)
- a.SetABIImports(f)
- exports := a.GetABIExports().(Exports)
- f.LayottoHandler.Instance = instance
-
- instance.Lock(a)
- defer instance.Unlock()
-
- // Use the exports function proxy_get_id (which corresponds to the GetID function in the WASM plug-in) to get the ID of WASM
- id, err := exports.ProxyGetID()
- if err != nil {
- log.DefaultLogger.Errorf("[proxywasm][factory] createProxyWasmFilterFactory fail to get wasm id, PluginName: %s, err: %v",
- plugin.PluginName(), err)
- return true
- }
- // If you register the ID and the corresponding plug-in in the route, the route can be performed using the key-value pair in the http Header. For example, 'id:id_1' is routed to Function1 based on id_1
- f.router.RegisterRoute(id, wasmPlugin)
-
- // The root context is created by proxy_on_context_create when the first plug-in is loaded with the given root ID and persists for the entire life of the virtual machine until proxy_on_delete is deleted
- // It is worth noting that the first plug-in here refers to a use case where multiple loosely bound plug-ins (accessed via the SDK using the Root ID to the Root Context) share data within the same configured virtual machine [4]
- err = exports.ProxyOnContextCreate(f.RootContextID, 0)
- if err != nil {
- log.DefaultLogger.Errorf("[proxywasm][factory] OnPluginStart fail to create root context id, err: %v", err)
- return true
- }
-
- vmConfigSize := 0
- if vmConfigBytes := wasmPlugin.GetVmConfig(); vmConfigBytes != nil {
- vmConfigSize = vmConfigBytes.Len()
- }
-
- // VM is called when the plug-in is started with the startup
- _, err = exports.ProxyOnVmStart(f.RootContextID, int32(vmConfigSize))
- if err != nil {
- log.DefaultLogger.Errorf("[proxywasm][factory] OnPluginStart fail to create root context id, err: %v", err)
- return true
- }
-
- pluginConfigSize := 0
- if pluginConfigBytes := wasmPlugin.GetPluginConfig(); pluginConfigBytes != nil {
- pluginConfigSize = pluginConfigBytes.Len()
- }
-
- // Called when the plug-in loads or reloads its configuration
- _, err = exports.ProxyOnConfigure(f.RootContextID, int32(pluginConfigSize))
- if err != nil {
- log.DefaultLogger.Errorf("[proxywasm][factory] OnPluginStart fail to create root context id, err: %v", err)
- return true
- }
-
- return true
- })
-}
-```
-
-### Workflow
-
-The workflow of WASM in Layotto is broadly as shown in Figure 2, the Layotto & Mosn WASM workflow. Since configuration and initialization are largely covered above, the focus here is on request processing.
-![mosn\_wasm\_ext\_framework\_workflow](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*XTDeRq0alYsAAAAAAAAAAAAAARQnAQ)
-
-
Figure 2 Layotto & Mosn WASM workflow
-
-1. Mosn, which underlies Layotto, schedules the request on its worker pool and, in the proxy's downstream processing, walks the StreamFilterChain to the OnReceive method of the Wasm StreamFilter, as configured above and detailed in the code below:
-
-```go
-func (f *Filter) OnReceive(ctx context.Context, headers api.HeaderMap, buf buffer.IoBuffer, trailers api.HeaderMap) api.StreamFilterStatus {
- // Gets the id of the WASM plug-in
- id, ok := headers.Get("id")
- if !ok {
- log.DefaultLogger.Errorf("[proxywasm][filter] OnReceive call ProxyOnRequestHeaders no id in headers")
- return api.StreamFilterStop
- }
-
- // Obtain the WASM plug-in from the router based on its id
- wasmPlugin, err := f.router.GetRandomPluginByID(id)
- if err != nil {
- log.DefaultLogger.Errorf("[proxywasm][filter] OnReceive call ProxyOnRequestHeaders id, err: %v", err)
- return api.StreamFilterStop
- }
- f.pluginUsed = wasmPlugin
-
- plugin := wasmPlugin.plugin
- // Obtain an instance of WasmInstance
- instance := plugin.GetInstance()
- f.instance = instance
- f.LayottoHandler.Instance = instance
-
- // The ABI consists of Exports and Imports, through which users interact with the WASM extension
- pluginABI := abi.GetABI(instance, AbiV2)
- if pluginABI == nil {
- log.DefaultLogger.Errorf("[proxywasm][filter] OnReceive fail to get instance abi")
- plugin.ReleaseInstance(instance)
- return api.StreamFilterStop
- }
- // Set the Imports section. The import section is provided by the user. The execution of the virtual machine depends on some of the capabilities provided by the host Layotto, such as obtaining request information, which are provided by the user through the import section and invoked by the WASM extension
- pluginABI.SetABIImports(f)
-
- // The Exports section is provided by the WASM plug-in and can be called directly by the user to wake up the WASM virtual machine and execute the corresponding WASM plug-in code in the virtual machine
- exports := pluginABI.GetABIExports().(Exports)
- f.exports = exports
-
- instance.Lock(pluginABI)
- defer instance.Unlock()
-
- // Create the current plug-in context according to rootContextID and contextID
- err = exports.ProxyOnContextCreate(f.contextID, wasmPlugin.rootContextID)
- if err != nil {
- log.DefaultLogger.Errorf("[proxywasm][filter] NewFilter fail to create context id: %v, rootContextID: %v, err: %v",
- f.contextID, wasmPlugin.rootContextID, err)
- return api.StreamFilterStop
- }
-
- endOfStream := 1
- if (buf != nil && buf.Len() > 0) || trailers != nil {
- endOfStream = 0
- }
-
- // Call proxy-wasm-go-host, encoding the request header in the format specified by the specification
- action, err := exports.ProxyOnRequestHeaders(f.contextID, int32(headerMapSize(headers)), int32(endOfStream))
- if err != nil || action != proxywasm.ActionContinue {
- log.DefaultLogger.Errorf("[proxywasm][filter] OnReceive call ProxyOnRequestHeaders err: %v", err)
- return api.StreamFilterStop
- }
-
- endOfStream = 1
- if trailers != nil {
- endOfStream = 0
- }
-
- if buf == nil {
- arg, _ := variable.GetString(ctx, types.VarHttpRequestArg)
- f.requestBuffer = buffer.NewIoBufferString(arg)
- } else {
- f.requestBuffer = buf
- }
-
- if f.requestBuffer != nil && f.requestBuffer.Len() > 0 {
- // Call proxy-wasm-go-host, encoding the request body in the format specified by the specification
- action, err = exports.ProxyOnRequestBody(f.contextID, int32(f.requestBuffer.Len()), int32(endOfStream))
- if err != nil || action != proxywasm.ActionContinue {
- log.DefaultLogger.Errorf("[proxywasm][filter] OnReceive call ProxyOnRequestBody err: %v", err)
- return api.StreamFilterStop
- }
- }
-
- if trailers != nil {
- // Call proxy-wasm-go-host, encoding the request tail in the format specified by the specification
- action, err = exports.ProxyOnRequestTrailers(f.contextID, int32(headerMapSize(trailers)))
- if err != nil || action != proxywasm.ActionContinue {
- log.DefaultLogger.Errorf("[proxywasm][filter] OnReceive call ProxyOnRequestTrailers err: %v", err)
- return api.StreamFilterStop
- }
- }
-
- return api.StreamFilterContinue
-}
-```
-
-2. proxy-wasm-go-host encodes the MOSN request triple (headers, body, trailers) into the format defined by the spec, calls the corresponding Proxy-Wasm ABI interface such as proxy_on_request_headers, and invokes the Wasmer virtual machine to pass the request information to the WASM plug-in.
-
-```go
-func (a *ABIContext) CallWasmFunction(funcName string, args ...interface{}) (interface{}, Action, error) {
-  ff, err := a.Instance.GetExportsFunc(funcName)
-  if err != nil {
-    return nil, ActionContinue, err
-  }
-
-  // Call the Wasmer virtual machine (github.com/wasmerio/wasmer-go/wasmer.(*Function).Call in function.go)
-  res, err := ff.Call(args...)
-  if err != nil {
-    a.Instance.HandleError(err)
-    return nil, ActionContinue, err
-  }
-
-  // if it is a sync call, e.g. HttpCall, unlock the wasm instance and wait until it responds
-  action := a.Imports.Wait()
-
-  return res, action, nil
-}
-```
-
-3. The Wasmer virtual machine then calls the specific exported function of the WASM plug-in, such as the OnHttpRequestBody function in this example. Native converts a Function into a native Go function that can be called directly:
-
-```go
-// Usage:
-//   function, _ := instance.Exports.GetFunction("exported_function")
-//   nativeFunction := function.Native()
-//   _ = nativeFunction(1, 2, 3)
-func (self *Function) Native() NativeFunction {
- ...
- self.lazyNative = func(receivedParameters ...interface{}) (interface{}, error) {
- numberOfReceivedParameters := len(receivedParameters)
- numberOfExpectedParameters := len(expectedParameters)
- ...
- results := C.wasm_val_vec_t{}
- C.wasm_val_vec_new_uninitialized(&results, C.size_t(len(ty.Results())))
- defer C.wasm_val_vec_delete(&results)
-
- arguments := C.wasm_val_vec_t{}
- defer C.wasm_val_vec_delete(&arguments)
-
- if numberOfReceivedParameters > 0 {
- C.wasm_val_vec_new(&arguments, C.size_t(numberOfReceivedParameters), (*C.wasm_val_t)(unsafe.Pointer(&allArguments[0])))
- }
-
- // Call functions inside the WASM plug-in
- trap := C.wasm_func_call(self.inner(), &arguments, &results)
-
- runtime.KeepAlive(arguments)
- runtime.KeepAlive(results)
- ...
- }
-
- return self.lazyNative
-}
-```
-
-4. proxy-wasm-go-sdk converts the request data from the spec-defined format into a user-friendly format and then calls the user extension code. proxy-wasm-go-sdk is implemented on top of proxy-wasm/spec; it defines the interface for functions to access system resources and infrastructure services, and on that basis integrates the Runtime API, adding ABIs for infrastructure access.
-
-```go
-// The main logic of function1 is to receive the HTTP request, call function2 through the ABI, and return function2's result:
-func (ctx *httpHeaders) OnHttpRequestBody(bodySize int, endOfStream bool) types.Action {
- //1. get request body
- body, err := proxywasm.GetHttpRequestBody(0, bodySize)
- if err != nil {
- proxywasm.LogErrorf("GetHttpRequestBody failed: %v", err)
- return types.ActionPause
- }
-
- //2. parse request param
- bookName, err := getQueryParam(string(body), "name")
- if err != nil {
- proxywasm.LogErrorf("param not found: %v", err)
- return types.ActionPause
- }
-
- //3. request function2 through ABI
- inventories, err := proxywasm.InvokeService("id_2", "", bookName)
- if err != nil {
- proxywasm.LogErrorf("invoke service failed: %v", err)
- return types.ActionPause
- }
-
- //4. return result
- proxywasm.AppendHttpResponseBody([]byte("There are " + inventories + " inventories for " + bookName + "."))
- return types.ActionContinue
-}
-```
-
-5. The import functions are registered via RegisterFunc when the WASM plug-in is initialized. For example, function1 makes an RPC call through ProxyInvokeService, and function2 reads the specified key from Redis through ProxyGetState, as shown below:
-
-function1 calls function2 through ProxyInvokeService, which corresponds to the Imports function proxy_invoke_service:
-
-```go
-func ProxyInvokeService(instance common.WasmInstance, idPtr int32, idSize int32, methodPtr int32, methodSize int32, paramPtr int32, paramSize int32, resultPtr int32, resultSize int32) int32 {
-  id, err := instance.GetMemory(uint64(idPtr), uint64(idSize))
-  if err != nil {
-    return WasmResultInvalidMemoryAccess.Int32()
-  }
-
-  method, err := instance.GetMemory(uint64(methodPtr), uint64(methodSize))
-  if err != nil {
-    return WasmResultInvalidMemoryAccess.Int32()
-  }
-
-  param, err := instance.GetMemory(uint64(paramPtr), uint64(paramSize))
-  if err != nil {
-    return WasmResultInvalidMemoryAccess.Int32()
-  }
-
-  ctx := getImportHandler(instance)
-
-  // Layotto rpc call
-  ret, res := ctx.InvokeService(string(id), string(method), string(param))
-  if res != WasmResultOk {
-    return res.Int32()
-  }
-
-  return copyIntoInstance(instance, ret, resultPtr, resultSize).Int32()
-}
-```
-
-function2 gets the value of the specified key from Redis through ProxyGetState, which corresponds to the Imports function proxy_get_state:
-
-```go
-func ProxyGetState(instance common.WasmInstance, storeNamePtr int32, storeNameSize int32, keyPtr int32, keySize int32, valuePtr int32, valueSize int32) int32 {
-  storeName, err := instance.GetMemory(uint64(storeNamePtr), uint64(storeNameSize))
-  if err != nil {
-    return WasmResultInvalidMemoryAccess.Int32()
-  }
-
-  key, err := instance.GetMemory(uint64(keyPtr), uint64(keySize))
-  if err != nil {
-    return WasmResultInvalidMemoryAccess.Int32()
-  }
-
-  ctx := getImportHandler(instance)
-
-  // Layotto state call
-  ret, res := ctx.GetState(string(storeName), string(key))
-  if res != WasmResultOk {
-    return res.Int32()
-  }
-
-  return copyIntoInstance(instance, ret, valuePtr, valueSize).Int32()
-}
-```
-
-The Layotto RPC flow above is, briefly, implemented as two virtual connections using the Dapr API and the underlying MOSN [5]; for details see the earlier article [Layotto source parsing — processing RPC requests](https://mosn.io/layotto/#/blog/code/layotto-rpc/index). Reading data from Redis goes directly through the Dapr State code and is not expanded on here.
-
-### FaaS Mode
-
-Looking back at the characteristics of WASM: bytecode with near machine-code performance; good isolation and security from the sandbox; cross-platform compilation, easy distribution, and load-and-run execution; lightweight with multi-language flexibility. It seems naturally suited to FaaS.
-
-So Layotto also explores support for a WASM FaaS mode: loading and running functions packaged as WASM, and supporting calls between functions as well as access to infrastructure. Since the core logic of loading WASM is unchanged, and only the usage and deployment methods differ from what was described above, the WASM-loading part of Layotto is not repeated here.
-
-Beyond the Wasm-Proxy implementation, the core logic of the FaaS mode is to extend containerd with the multi-runtime plugin containerd-shim-layotto-v2 [6], cleverly reusing Docker's image capability to manage \*.wasm packages together with Kubernetes' excellent orchestration capability. The specific architecture and workflow are shown in Figure 3, Layotto FaaS Workflow.
-
-![layotto_faas_workflow](https://gw.alipayobjects.com/mdn/rms_5891a1/afts/img/A*XWmNT6-7FoEAAAAAAAAAAAAAARQnAQ)
-
-
-Figure 3 Layotto FaaS Workflow
-
-Here is a brief look at the main function of containerd-shim-layotto-v2. It can be seen that shim.Run registers the runtime as io.containerd.layotto.v2, which corresponds to the runtime_type configured for the runtime under containerd's cri plugin configuration (plugins."io.containerd.grpc.v1.cri".containerd.runtimes). When creating a Pod, you specify runtimeClassName: layotto in the yaml spec, and eventually kubelet, through the CRI plugin, invokes containerd-shim-layotto-v2 to load and run the WASM function.
-
-```go
-func main() {
- startLayotto()
- // Parse input arguments, initialize the runtime environment, and call wasm.New to instantiate the service object
- shim.Run("io.containerd.layotto.v2", wasm.New)
-}
-
-func startLayotto() {
- conn, err := net.Dial("tcp", "localhost:2045")
- if err == nil {
- conn.Close()
- return
- }
-
- cmd := exec.Command("layotto", "start", "-c", "/home/docker/config.json")
- cmd.Start()
-}
-```
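-
-The runtimeClassName mentioned above is declared on the Kubernetes side. A minimal sketch of what that might look like (the RuntimeClass object and the image name are illustrative assumptions, not taken from Layotto's docs):
-
-```yaml
-apiVersion: node.k8s.io/v1
-kind: RuntimeClass
-metadata:
-  name: layotto
-# handler must match the runtime configured for containerd-shim-layotto-v2
-handler: layotto
----
-apiVersion: v1
-kind: Pod
-metadata:
-  name: wasm-function-demo
-spec:
-  runtimeClassName: layotto
-  containers:
-    - name: function
-      # hypothetical image that packages the *.wasm function
-      image: example/function1:v1
-```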
-
-## Summary
-
-Layotto WebAssembly involves quite a lot of WASM fundamentals, but the examples progress from shallow to deep and should be easy to follow. Looking at the bigger picture, WASM technology has already been applied in many fields such as the Web frontend, serverless, game scenarios, edge computing and service mesh. Docker founder Solomon Hykes even said recently: "If WASM had existed in 2008, I wouldn't have needed to create Docker" (he later added that Docker will not be replaced and will walk side by side with WASM). WASM seems to be becoming the lighter, better-performing cloud-native technology after VMs and containers, applied to ever more areas; with the MOSN community's push and Layotto's continued exploration, there will surely be more usage scenarios and users. This concludes the source-code analysis of Layotto WebAssembly. Given time and length constraints, some more comprehensive and in-depth profiling has not been done; corrections are welcome, contact: rayo. angzl@gmail.com.
-
-### References
-
-- [1] [WebAssembly practice in MOSN](https://mosn.io/blog/posts/mosn-wasm-framework/)
-- [2] [feature: WASM plugin framework](https://github.com/mosn/mosn/pull/1589)
-- [3] [WebAssembly for Proxies (ABI Spec)](https://github.com/proxy-wasm/spec)
-- [4] [Proxy WebAssembly Architecture](https://techhenzy.com/proxy-webassembly-archive/)
-- [5] [Layotto source parse — processing RPC requests](https://mosn.io/layotto/#/blog/code/layotto-rpc/index)
-- [6] [Cloud native runtime for the next five years](https://www.soft.tech/blog/the-next-fuve-years-of-cloud-native-runtime/)
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-blog/exploration-and-practice-of-antcloud-native-application-runtime-archsummit-shanghai.md b/docs/i18n/en-US/docusaurus-plugin-content-blog/exploration-and-practice-of-antcloud-native-application-runtime-archsummit-shanghai.md
deleted file mode 100644
index 19977b713f..0000000000
--- a/docs/i18n/en-US/docusaurus-plugin-content-blog/exploration-and-practice-of-antcloud-native-application-runtime-archsummit-shanghai.md
+++ /dev/null
@@ -1,189 +0,0 @@
-# Ant Cloud Native Application Runtime Exploration and Practice - ArchSummit Shanghai
-
-> The introduction of the Mesh model is a key path for applications to go cloud-native, and Ant Group has achieved large-scale adoption internally. Sinking more middleware capabilities, such as Message, DB and Cache Mesh, will be the future shape of middleware technology as applications evolve beyond Mesh. The application runtime helps developers build cloud-native applications quickly and further decouples applications from infrastructure; the core of the application runtime is its API standard, which the community is expected to build together.
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*nergRo8-RI0AAAAAAAAAAAAAARQnAQ)
-
-## Ant Group Mesh Introduction
-
-Ant is a technology and innovation-driven company. From its earliest days as a payment tool on Taobao to now serving 1.2 billion users worldwide, the evolution of Ant's technical architecture can roughly be divided into the following stages:
-
-Prior to 2006, the earliest payment system was a centralized monolithic application, with different businesses developed as modules.
-
-In 2007, as payments expanded into more scenarios, applications and data began to be split, and some SOA-style transformations were made.
-
-After 2010, quick payment, mobile payment, Double Eleven support and Yu'e Bao were introduced; users reached the hundreds of millions, the number of Ant applications grew, and Ant developed a full set of microservice middleware to support its business;
-
-In 2014, with the emergence of more business forms such as flash-sale traffic and offline payments, higher requirements were placed on Ant's availability and stability. Ant added LDC unitization support to its microservice middleware, enabling multi-site disaster recovery and elastic scale-out on hybrid clouds to support the massive Double Eleven traffic;
-
-In 2020, Ant's business was no longer only digital finance; new strategies such as digital life and internationalization emerged, which called for a more efficient technical architecture to let the business run faster and more steadily, so Ant began to embrace the industry-popular concept of cloud-native internally.
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*KCSVTZWSf8wAAAAAAAAAAAAAARQnAQ)
-
-It can be seen that Ant's technical architecture evolved along with the company's business innovation, from centralized to SOA to microservices. Microservices should be familiar to most readers, while the move from microservices to cloud-native is something Ant has explored on its own in recent years.
-
-## Why to introduce Service Mesh
-
-Since Ant already has a complete set of microservice governance middleware, why introduce Service Mesh?
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*Sq7oR6eO2QAAAAAAAAAAAAAAARQnAQ)
-
-Take Ant's self-developed service framework SOFARPC as an example: it is a powerful SDK that includes capabilities such as service discovery, routing, circuit breaking and rate limiting. In a basic SOFA (Java) application, business code integrates the SOFARPC SDK and both run in the same process. After microservices landed at scale, we faced the following problems:
-
-**High upgrade cost**: the SDK must be imported by business code, and every upgrade requires changing code and redeploying. Because of the huge number of applications, a major technical change or security fix can require pushing thousands of applications to upgrade, each round costing significant effort.
-**Version fragmentation**: because upgrades are costly, deployed SDK versions are highly fragmented, which forces us to account for historical logic and compatibility when writing code and makes it hard to evolve the technology in a unified way.
-**Cross-language apps are ungoverned**: Ant's online applications mostly use Java as the technology stack, but there are many cross-language applications in front-office, AI and big data scenarios, e.g. C++/Python/Golang. Since the SDK has no counterpart in those languages, their service governance capabilities are missing.
-
-We noticed the emerging concept of Service Mesh in the cloud-native space and began exploring this direction. Service Mesh has two parts: the control plane and the data plane. The core idea of the data plane is decoupling: logic that is unrelated to business yet complex (such as service discovery, service routing, circuit breaking and security in RPC calls) is abstracted into an independent process. As long as the communication protocol between the business process and this independent process does not change, these capabilities can evolve through the independent process's own upgrades, and the whole Mesh can evolve in a unified way. Our cross-language applications, as long as their traffic passes through the data plane, can likewise enjoy the service governance capabilities just mentioned, and the application becomes transparent to the underlying infrastructure, truly cloud-native.
-
-## Ant Mesh landing process
-
-Starting at the end of 2017, Ant began to explore the technical direction of Service Mesh and put forward the vision of unified infrastructure with business-imperceptible upgrades. The main milestones are:
-
-At the end of 2017, technical pre-research on Service Mesh was launched, setting the direction for the future;
-
-In early 2018, the self-developed Golang sidecar MOSN was built and open-sourced, mainly supporting RPC, and was piloted at small scale during Double Eleven;
-
-During 618 in 2019, new forms such as Message Mesh and DB Mesh were added, covering a number of core links and passing the 618 promotion test;
-
-During Double Eleven 2019, Mesh covered hundreds of applications on all high-traffic core links, supporting that year's Double Eleven;
-
-By Double Eleven 2020, more than 80% of online applications were connected to the Mesh system, and a capability could go from development to full-site rollout within 2 months.
-
-## Ant Mesh Landing Architecture
-
-The scale of Mesh adoption at Ant is thousands of applications and hundreds of thousands of containers, a scale matched by few in the industry, with no prior path to learn from; Ant therefore built up a complete R&D and delivery system along the way to support its Mesh rollout.
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*eAlMT7SMTpMAAAAAAAAAAAAAARQnAQ)
-As shown in the figure, Ant Mesh's architecture includes on the control plane the server sides of existing products such as the service governance center, PaaS and monitoring center, plus our delivery systems, including the R&D platform and PaaS platform. In the middle is the protagonist, the data plane MOSN, which manages four kinds of traffic: RPC, messaging, MVC and task scheduling, along with basic capabilities for health checking, monitoring, configuration, security and technical risk; MOSN also shields the business from some interactions with the basic platforms. DBMesh is an independent product at Ant and is not drawn in the figure. The top layer is our applications, which currently support access from many languages such as Java and Node.js.
-For applications, although they are decoupled from infrastructure, onboarding still carries an extra upgrade cost. To promote application onboarding, Ant streamlined the entire R&D and delivery process, including minimizing access cost on the existing framework, pushing upgrades in batches to manage risk and progress, and having new applications connect to Mesh by default.
-
-At the same time, as the number of sunk capabilities grew, each capability faced collaboration problems in R&D, and they even affected one another's performance and stability. So for the development efficiency of Mesh itself, we made improvements in modular isolation, dynamic plugging of new capabilities, automatic regression and more, allowing a capability to go from development to full-site rollout within two months.
-
-## Explore on Cloud Native Apps Run
-
-**New problems and reflections after large-scale adoption**
-
-Ant Mesh has since encountered some new problems:
-**Cross-language SDK maintenance cost**: take RPC as an example. Most of the logic has already sunk into MOSN, but some communication protocol and serialization logic remains in the Java SDK, which still carries maintenance cost. However many lightweight SDKs there are, that is how many languages must be covered, and one team cannot have developers for every language, so the code quality of these lightweight SDKs is a problem.
-
-**Compatibility across deployment environments**: part of Ant's business must be compatible with different environments, deployed both inside Ant and exported externally to financial institutions. When deployed inside Ant it connects to Ant's control plane, and when deployed at a bank it connects to the bank's existing control plane. Today most such applications add an extra abstraction layer in their own code and work around unsupported components case by case.
-
-**From Service Mesh to Multi-Mesh**: Ant's earliest scenario was Service Mesh, where MOSN intercepted traffic as a network proxy while other middleware still interacted with servers through their original SDKs. Now MOSN is no longer just a Service Mesh but a Multi-Mesh, because beyond RPC we have landed more Mesh forms, including messaging, configuration, caching and so on. Almost every sunk middleware has a lightweight SDK on the application side, which, combined with the first problem above, means a very large number of lightweight SDKs to maintain. To keep features from interfering with one another, each feature opens a different port and talks to MOSN over a different protocol, e.g. the RPC protocol for RPC, the MQ protocol for messaging, and the Redis protocol for caching. MOSN is also no longer purely traffic-oriented; configuration, for example, exposes an API for business code to use.
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*80o8SYwyHJoAAAAAAAAAAAAAARQnAQ)
-
-Considering how to solve these problems and scenarios, we thought about the following points:
-
-1. Can the SDKs of different middlewares, in different languages, be slimmed down in a unified way?
-
-2. Can the interaction protocols be unified?
-
-3. Should what we sink from middleware be components or capabilities?
-
-4. Can the underlying implementation be replaced?
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*hsZBQJg0VnoAAAAAAAAAAAAAARQnAQ)
-
-## Ant Cloud Native Apps Runtime Structure
-
-Beginning last March, after several rounds of internal discussion and research into new ideas in the industry, we introduced the concept of a "cloud-native application runtime" (hereinafter "runtime"). By definition, we want this runtime to include all the distributed capabilities the application cares about, to help developers build cloud-native applications quickly, and to further decouple applications from infrastructure!
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*iqQoTYAma4YAAAAAAAAAAAAAARQnAQ)
-
-The core points of runtime design for cloud-native applications are as follows:
-
-**First**, given the experience of MOSN's large-scale rollout and the associated delivery system, we decided to build the cloud-native application runtime on top of the MOSN kernel.
-
-**Second**, the runtime's APIs are defined capability-oriented rather than component-oriented.
-
-**Third**, interaction between business code and the Runtime API uses the unified gRPC protocol, so the business side can generate a client directly from the proto file and make calls.
-
-**Four**'s component implementation after ability is replacable, for example, registration service provider may be SOFARegistry, or Nacos or Zookeper.
-
-**Running abstract capabilities**
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*hWIVR6ccduYAAAAAAAAAAAAAARQnAQ)
-
-To abstract the capabilities most needed by cloud-native applications, we set a few principles:
-
-1. Oriented to the APIs and scenarios required by distributed applications, not to components;
-2. APIs are intuitive and usable out of the box; convention over configuration;
-3. APIs are not bound to an implementation; differences are expressed with extension fields.
-
-Following these principles, we abstracted the first batch of APIs: mosn.proto for calls from the app to the runtime, appcallback.proto for calls from the runtime to the app, and actuator.proto for runtime maintenance and observation. For example, RPC calls, publishing messages, reading caches and reading configuration go from app to runtime; receiving RPC requests, receiving messages and incoming task scheduling go from runtime to app; health checks, component management and traffic control belong to runtime maintenance and observation.
-
-Examples of these three protos are shown below:
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*J76nQoLLYWgAAAAAAAAAAAAAARQnAQ)
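-
-Since the figure may be hard to read, here is a rough sketch of what such capability-oriented proto definitions could look like (the service and message names are illustrative, not the actual contents of mosn.proto):
-
-```protobuf
-syntax = "proto3";
-package runtime.v1;
-
-// App -> runtime: the app asks for a capability, not a concrete product.
-service Runtime {
-  rpc InvokeService(InvokeServiceRequest) returns (InvokeServiceResponse);
-  rpc GetState(GetStateRequest) returns (GetStateResponse);
-}
-
-message InvokeServiceRequest {
-  string id = 1;     // logical id of the target service
-  string method = 2;
-  bytes data = 3;
-}
-message InvokeServiceResponse { bytes data = 1; }
-
-message GetStateRequest {
-  string store_name = 1; // which backing store, resolved by runtime config
-  string key = 2;
-}
-message GetStateResponse { bytes value = 1; }
-```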
-
-**Runtime component management**
-
-To make the runtime replaceable, we introduced two concepts in MOSN: a distributed capability is called a Service, and different Components implement that Service; one Service can be implemented by multiple Components, and one Component can implement multiple Services. In the example in the figure, the message Service "MQ-pub" is implemented by both the SOFAMQ and Kafka Components, while the Kafka Component implements both the message Service and the health-check Service.
-When the business actually issues a request through the gRPC-generated client, the data is sent to the Runtime over the gRPC protocol and dispatched to a specific underlying implementation. In this way, the application only needs one set of APIs, and the concrete implementation is selected by parameters in the request or by the runtime configuration.
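-
-As a hedged illustration (the keys and component names below are made up, not the actual Layotto configuration schema), such a Service-to-Component mapping could be expressed in the runtime's configuration like this:
-
-```json
-{
-  "services": {
-    "MQ-pub": {
-      "components": {
-        "SOFAMQ": { "endpoint": "sofamq.internal:9876" },
-        "Kafka": { "brokers": ["kafka-0:9092"] }
-      },
-      "default": "SOFAMQ"
-    },
-    "health-check": {
-      "components": { "Kafka": {} }
-    }
-  }
-}
-```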
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*dK9rRLTvtlMAAAAAAAAAAAAAARQnAQ)
-
-**Comparison between the runtime and Mesh**
-
-Based on the above, the cloud-native application runtime and the earlier Mesh can be compared as follows:
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*xyu9T74SD9MAAAAAAAAAAAAAARQnAQ)
-
-## Landing scenarios
-
-Research on the cloud-native application runtime started last year; the following scenarios are currently landing inside Ant.
-
-**Heterogeneous technology stack access**
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*8UJhRbBg3zsAAAAAAAAAAAAAARQnAQ)
-
-At Ant, applications in different languages need not only RPC service governance and messaging, but also Ant's unified middleware infrastructure. Java and Node.js have corresponding SDKs, while other languages do not. With the application runtime, these heterogeneous-language applications can use Ant's infrastructure directly through a gRPC client.
-
-**Unbinding from vendors**
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*eVoqRbkTFFwAAAAAAAAAAAAAARQnAQ)
-
-As mentioned earlier, Ant's blockchain, risk control, intelligent customer service, financial middleware and similar products are deployed both on Ant's main site and on Alibaba Cloud or private clouds. With the runtime, an application can ship one set of code in one image and decide by configuration which underlying implementation to call, without being bound to any specific one. For example, inside Ant it connects to products such as SOFARegistry and SOFAMQ, while on the cloud it connects to products such as Nacos, RocketMQ, ZooKeeper and Kafka. This scenario is in the process of landing. It can of course also be used for legacy system migration, for example upgrading from SOFAMQ 1.0 to SOFAMQ 2.0 without upgrading the applications on the runtime.
-
-**FaaS cold start pooling**
-
-FaaS cold start is also a scenario we have been exploring recently. As you may know, a function in FaaS has to go from Pod creation to function download to startup, a lengthy process. With the runtime, we can create Pods and start the runtime in advance, leaving only the very simple application logic to start when the function is invoked; in tests, startup was shortened from 5s to 1s. We will continue to explore this direction.
-
-## Planning and outlook
-
-**API**
-
-The most important part of the runtime is the definition of its APIs. We already have a relatively complete set of APIs for internal use, but we also see that many products in the industry have similar demands, such as Dapr and Envoy. So one of the next things we will do is work with the communities to launch a widely recognized set of cloud-native APIs.
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*d2BORogVotoAAAAAAAAAAAAAARQnAQ)
-
-**Continuous Open Source**
-
-We will also open-source our internal runtime practice in the near future, releasing version 0.1 around May or June, keeping a monthly minor-release cadence, and aiming to release 1.0 by the end of the year.
-
->![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*Kgr9QLc5TH4AAAAAAAAAAAAAARQnAQ)
-
-## Summary
-
-**A final summary:**
-
-1. Introducing the Service Mesh model is a key path for applications to go cloud-native;
-
-2. Mesh makes infrastructure upgrades imperceptible to the business, but R&D efficiency problems partly remain;
-
-3. Large-scale Mesh adoption is a systematic engineering effort and requires a complete supporting system;
-
-4. The cloud-native application runtime will be the future shape of middleware and other basic technologies, further decoupling applications from distributed capabilities;
-
-5. The core of the cloud-native application runtime is its API; the community is expected to build a standard together.
-
-Extended Reading
-
-- [Take you into Cloud Native Technology:Native Open Delivery Systems Exploration and Practices](https://mp.weixin.qq.com/s?_biz=MzUzU5Mjc1Nw===\&mid=2247488044\&idx=1\&sn=e6300d4b451723a5001cd3deb17fbc\&chksm=faa0f6cdd774e03ccd91300996747a8e7e109ecf810af147e08c663676946490\&scene=21)
-
-- [Taking a thousand miles one step at a time: A comprehensive overview of the QUIC protocol landing at Ant Group](https://mp.weixin.qq.com/s?__biz=MzUzMzU5Mjc1Nw==\&mid=2247487717\&idx=1\&sn=ca9452cdc10989f61afbac2f012ed712\&chksm=faa0ff3fcdd77629d8e5c8f6c42af3b4ea227ee3da3d5cdf297b970f51d18b8b1580aac786c3\&scene=21)
-
-- [Rust's emerging field showing its prowess: confidential computing](https://mp.weixin.qq.com/s?__biz=MzUzMzU5Mjc1Nw==\&mid=2247487576\&idx=1\&sn=0d0575395476db930dab4e0f75e863e5\&chksm=faa0ff82cdd77694a6fc42e47d6f20c20310b26cedc13f104f979acd1f02eb5a37ea9cdc8ea5\&scene=21)
-
-- [Protocol Extension Base on Wasm — protocol extension](https://mp.weixin.qq.com/s?_biz=MzUzU5Mjc1Nw===\&mid=2247487546\&idx=1\&sn=72c3f1e27ca4ace788e11ca20d5f9\&chksm=faa0ffe0cd776f6d17323466b500acee50a371663f18da34d8e4d72304d7681cf589b45\&scene=21)
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-blog/mosn-subproject-layotto-opening-a-new-chapter-in-service-grid-application-runtime/index.md b/docs/i18n/en-US/docusaurus-plugin-content-blog/mosn-subproject-layotto-opening-a-new-chapter-in-service-grid-application-runtime/index.md
deleted file mode 100644
index 52e21acddc..0000000000
--- a/docs/i18n/en-US/docusaurus-plugin-content-blog/mosn-subproject-layotto-opening-a-new-chapter-in-service-grid-application-runtime/index.md
+++ /dev/null
@@ -1,269 +0,0 @@
-# MOSN subproject Layotto: opening a new chapter in Service Mesh + Application Runtime
-
-> Author profile:
-> The author has worked in the infrastructure domain for many years, has deep practical experience with Service Mesh, and is currently responsible for the development of projects such as MOSN and Layotto in Ant Group's middleware team.
-> Layotto official GitHub address: [https://github.com/mosn/layotto](https://github.com/mosn/layotto)
-
-Click the link to view the video of the talk: [https://www.bilibili.com/video/BV1hq4y1L7FY/](https://www.bilibili.com/video/BV1hq4y1L7FY/)
-
-Service Mesh is already very popular in the microservices space, with a growing number of companies starting to adopt it, and Ant has invested heavily in this direction from the very beginning of the Service Mesh trend. So far, the internal Mesh solution has covered thousands of applications and hundreds of thousands of containers and has been battle-tested through many big promotions; the decoupling from business, smooth upgrades and other advantages brought by Service Mesh have greatly improved middleware iteration efficiency.
-
-We have encountered new problems after mass landings, and this paper focuses on a review of service Mesh's internal landings and on sharing solutions to new problems encountered after service Mesh landing.
-
-## Service Mesh Review and Summary
-
-### A. Pain points of the traditional microservice architecture
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*p8tGTbpLRegAAAAAAAAAAAAAARQnAQ)
-> Under the microservice architecture, the infrastructure team typically provides an SDK that encapsulates various service governance capabilities. While this ensures that applications run properly, it also means that every time the infrastructure team iterates a new feature, the business side has to participate in an upgrade to use it; especially for bugfix versions of the framework, the business side often has to be forced to upgrade, something every member of the infrastructure team has felt deep pain about.
-
-Upgrades are made even harder by the wide divergence of SDK versions used by applications: the production environment runs many different SDK versions at once, so every iteration of a new feature has to take compatibility into account. It is like moving forward in shackles; code maintenance becomes very difficult, and some legacy logic becomes untouchable.
-
-The "heavy" SDK development model also leaves governance for heterogeneous languages weak; the cost of providing a feature-complete, continuously iterated SDK for every programming language can be imagined.
-
-In 2018, Service Mesh took off rapidly in China. It is an architectural concept designed to decouple service governance capabilities from business logic and let the two interact through inter-process communication. Under this model, service governance capabilities are isolated from the application and run in an independent process, so iterative upgrades are decoupled from the business process. This allows service governance capabilities to iterate rapidly, and because the upgrade cost is low, every version can be fully rolled out, resolving the historical burden. Meanwhile, the now "light" SDK directly lowers the governance threshold for heterogeneous languages: there is no longer the pain of having to build the same service governance capabilities into an SDK for every language.
-
-### B. Current status of Service Mesh adoption
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*rRG_TYlHMqYAAAAAAAAAAAAAARQnAQ)
-> Ant quickly recognized the value of Service Mesh and invested fully in this direction, using Go to develop MOSN, an excellent data plane comparable to Envoy that takes full responsibility for building capabilities such as service routing, load balancing, circuit breaking and rate limiting, which greatly accelerated the company's internal Service Mesh adoption.
-
-MOSN now covers thousands of applications and hundreds of thousands of containers inside Ant, and newly created applications connect to MOSN by default, forming a closed loop. MOSN has also handed in a satisfactory answer on the metrics everyone cares about most, resource occupancy and performance loss:
-
-1. RT increase is less than 0.2 ms
-
-2. CPU usage increase is between 0% and 2%
-
-3. Memory consumption growth is less than 15 MB
-
-Technology stacks in heterogeneous languages such as Node.js and C++ are also continuously connecting to MOSN, thanks to the lower service governance threshold that Service Mesh brings to heterogeneous languages.
-
-After seeing the huge benefits of meshifying the RPC capability, Ant also transformed middleware capabilities such as MQ, Cache and Config internally, sinking them into MOSN and improving the iteration efficiency of middleware products as a whole.
-
-### C. New challenges
-
-1. Applications are strongly bound to infrastructure
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*nKxcTKLp4EoAAAAAAAAAAAAAARQnAQ)
-> A modern distributed application often relies on RPC, Cache, MQ, Config and other distributed capabilities to complete its business logic.
-
-When the RPC capability was first meshed, the other capabilities quickly followed. Initially each was developed in the way its team was most familiar with, leading to a lack of unified planning and management. As the figure above shows, the application relies on the SDKs of all kinds of infrastructure, and each SDK interacts with MOSN in its own unique way, often using the private protocol of the original infrastructure. This directly results in middleware capabilities that are merely meshed: in essence the application is still bound to the infrastructure. For example, migrating the cache from Redis to Memcache still requires the business side to upgrade its SDK. The problem is even more pronounced in the larger trend of applications moving to the cloud: if an application is to be deployed on the cloud, then because it relies on all kinds of infrastructure, the entire infrastructure has to be moved to the cloud before the application can be deployed successfully.
-So how to decouple the application from the infrastructure, so that it becomes portable and can be deployed freely across platforms, is the first problem we face.
-
-2. The access threshold for heterogeneous languages
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*oIdQQZmgtyUAAAAAAAAAAAAAARQnAQ)
-> Practice has proved that Service Mesh does reduce the access threshold for heterogeneous languages, but as more and more basic capabilities sink into MOSN, we gradually realized that in order for the application to interact with MOSN, each language's SDK still needs to implement communication protocols and serialization protocols. Multiply that by the need to provide the same functionality for multiple heterogeneous languages, and the maintenance difficulty grows exponentially.
-
-Service Mesh has made the SDK thin, but for today's scenario of many programming languages and applications strongly dependent on infrastructure, we find the existing SDK is still not thin enough, and the access threshold for heterogeneous languages is still not low enough. How to further lower that threshold is the second problem we face.
-
-## Multi Runtime Theory Overview
-
-### A. What is a Runtime?
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*hQT-Spc5rI4AAAAAAAAAAAAAARQnAQ)
-> In early 2020, Bilgin Ibryam published an article called
-> Multi-Runtime Microservices Architecture
-> which discusses the shape of the next stage of microservice architecture.
-
-As shown in the figure above, the author abstracts the needs of distributed services into four categories:
-
-1. Lifecycle
-   mainly refers to compilation, packaging, deployment and so on; in the cloud-native trend it has largely been taken over by Docker and Kubernetes.
-
-2. Networking
-   A reliable network is the basic guarantee of communication between microservices. Service Mesh works in exactly this area, and the stability and practicality of currently popular data planes such as MOSN and Envoy have been fully verified.
-
-3. State
-   What distributed systems need, such as workflow, distributed singleton, scheduling, idempotency, stateful error recovery, caching and so on, can be uniformly classified as underlying state management.
-
-4. Binding
-   In a distributed system we need not only to communicate with other systems but also to integrate all kinds of external systems, and therefore rely heavily on protocol conversion, multiple interaction models, error recovery processes and so on.
-
-Having clarified these needs and drawing on the ideas of Service Mesh, the author summarized the evolution of the distributed service architecture as follows:
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*rwS2Q5yMp_sAAAAAAAAAAAAAARQnAQ)
-> Phase 1 decouples various infrastructure capabilities from the application, turning them into independent sidecars that run alongside the application.
-
-Phase 2 unifies the capabilities provided by the various sidecars into a single runtime. At this point the concern shifts from integrating individual base components to integrating distributed capabilities, completely shielding the underlying details, and because the exposed APIs are capability-oriented, the application no longer needs to depend on SDKs from all kinds of infrastructure, apart from a thin layer used to call the capability APIs.
-
-The author's thinking is highly consistent with what we want to solve, so we decided to use the Runtime concept to address the new problems Service Mesh has encountered so far.
-
-### B. Service Mesh vs. Runtime
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*srPVSYTEHc4AAAAAAAAAAAAAARQnAQ)
-> To give a clearer understanding of Runtime, the table above compares the two concepts, Service Mesh and Runtime, in terms of positioning, interaction mode, communication protocol and capability richness. As can be seen, compared with Service Mesh, Runtime provides clearly semantic, capability-oriented APIs that make it more straightforward for applications to interact with it.
-
-## MOSN sub-project Layotto
-
-### A. Dapr research
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*Ab9HSYIK7CQAAAAAAAAAAAAAARQnAQ)
-> Dapr is a well-known Runtime implementation in the community with a high level of activity, so we first investigated Dapr and found that it has the following advantages:
-
-1. It provides a variety of distributed capabilities with clearly defined APIs that generally cover common usage scenarios.
-
-2. For each capability it provides multiple implementation components, essentially covering the commonly used middleware products, from which users can choose freely as needed.
-
-When considering how to deploy Dapr inside the company, we proposed the two options shown in the figure above:
-
-1. Replace: replace the current MOSN with Dapr. There are two problems with this option:
-
-a. Although Dapr provides many distributed capabilities, it currently lacks the complete service governance capabilities included in Service Mesh.
-
-b. MOSN has already landed at scale inside the company and has been battle-tested many times; directly replacing MOSN with Dapr raises stability concerns.
-
-2. Coexist: add a Dapr container and deploy it alongside MOSN in a two-sidecar mode. This option also has two problems:
-
-a. Introducing a new sidecar means considering its upgrade, monitoring, injection and so on, and operating costs soar.
-
-b. One more container to maintain means one more attack surface, which reduces the availability of the whole system.
-
-The same problems arise if you currently use Envoy as the data plane.
-We therefore wanted to combine Runtime with Service Mesh and deploy them as a single sidecar, maximizing reuse of existing Mesh capabilities while ensuring stability and keeping operating costs unchanged. In addition, we hope the Runtime capabilities can in the future be combined not only with MOSN but also with Envoy, solving the problem in more scenarios. This is how Layotto was born.
-
-### B. Layotto architecture
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*sdGoSYB_XFUAAAAAAAAAAAAAARQnAQ)
-> As shown in the figure above, Layotto sits on top of all kinds of infrastructure and provides upper-layer applications with a unified set of distributed capabilities through standard APIs. For applications built on Layotto, developers no longer need to care about the implementation differences of the underlying components; they only need to know which capabilities the application needs and call the corresponding APIs, which fully decouples the application from the underlying infrastructure.
-
-For the application, interaction splits into two parts: one is the application acting as a gRPC client, calling Layotto's standard APIs; the other is the application acting as a gRPC server, implementing Layotto's callbacks. Thanks to gRPC's excellent cross-language support, the application no longer needs to care about communication, serialization and other details, which further lowers the threshold for heterogeneous technology stacks.
-
-Besides the application-facing side, Layotto also provides a unified interface to the platform that exposes the running state of the application and the sidecar, making it easy for SRE colleagues to understand the application's state and take different actions for different states. Considering that existing platforms are integrated with Kubernetes and the like, we provide HTTP access for this part.
-
-Beyond the design of Layotto itself, the project involves two pieces of standardization work. First, defining a set of clearly semantic APIs that cover a broad range of scenarios is not easy; we cooperated with the Alibaba and Dapr communities in the hope of advancing Runtime API standardization. Second, for components whose capabilities the Dapr community has already implemented, our principle is to reuse first and redevelop second, minimizing effort wasted on reinventing existing components.
-
-Finally, Layotto is currently built on top of MOSN, but we would like Layotto to also run on Envoy, so that as long as you use Service Mesh you can add Runtime capabilities, regardless of whether the data plane is MOSN or Envoy.
-
-### C. Layotto portability
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*2DrSQJ6GL8cAAAAAAAAAAAAAARQnAQ)
-> As shown in the figure above, once Runtime API standardization is complete, applications built on Layotto are naturally portable: they can be deployed on private clouds and the various public clouds without modification, and because standard APIs are used, applications can also switch freely between Layotto and Dapr without code changes.
-
-### D. The meaning of the name
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*CCckTZ_gRsMAAAAAAAAAAAAAARQnAQ)
-> As the diagram above shows, the Layotto project aims to shield the details of the infrastructure and provide various distributed capabilities to the application layer above. This approach effectively adds a layer of abstraction between the application and the infrastructure, so we borrowed from the OSI seven-layer network model and want Layotto to serve as the eighth layer for applications. Eight in Italian is otto, and "Layer otto" simplifies to Layotto; the project code name L8 likewise refers to the eighth layer, which is where the inspiration came from.
-
-With the project's positioning introduced, an overview of its current state follows, detailing the implementation of its four main functions.
-
-### E. Configuration primitive
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*mfkRQZH3oNwAAAAAAAAAAAAAARQnAQ)
-> First comes the configuration function commonly used in distributed systems. Applications generally use a configuration center to switch features or dynamically adjust their running state. The implementation of the configuration module in Layotto consists of two parts: one is thinking about how to define the API for this capability, and the other is the concrete implementation. Each is described below.
-
-Defining a configuration API that satisfies most real production needs is not easy. Dapr currently lacks this capability, so we worked with the Alibaba and Dapr communities and had intense discussions on how to define a reasonable configuration API.
-
-Since those discussions have not yet concluded, Layotto is currently based on the first draft we submitted to the community; a brief description of that draft follows.
-
-We first defined the basic elements of a general configuration:
-
-1. appId: indicates which application the configuration belongs to
-
-2. key: the key of the configuration
-
-3. content: the value of the configuration
-
-4. group: the group the configuration belongs to. If an appId has too many configuration items, we can group them for easier maintenance.
-
-In addition, we added two advanced features to fit more complex configuration scenarios:
-
-1. label, used to tag a configuration, such as which environment it belongs to. When querying, we use label + key to look up the configuration.
-
-2. tags, extra information users attach to a configuration, such as a description, creator information and last-modified time, to facilitate configuration management, auditing and so on.
-
-For the concrete implementation of the configuration API defined above, we currently support five kinds of operations: query, subscribe, delete, create and modify. Subscription to configuration changes uses gRPC's stream feature. As for the component implementing the configuration capability underneath, we chose the domestically popular Apollo and will add others later based on demand.
-
-### F. Pub/Sub primitive
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*YJs-R6WFhkgAAAAAAAAAAAAAARQnAQ)
-> For the Pub/Sub capability, we investigated Dapr's current implementation and found that it largely meets our needs, so we directly reused Dapr's API and components in Layotto with appropriate adaptation, which saved us a great deal of duplicated effort. We would like to keep collaborating with the Dapr community rather than reinvent the wheel.
-
-Pub is an interface Layotto provides for the application to call to publish events, while the Sub side is implemented by the application as a gRPC server exposing ListTopicSubscriptions and OnTopicEvent: the former tells Layotto which topics the application needs to subscribe to, and the latter is the callback Layotto invokes when an event arrives on a subscribed topic.
-
-Dapr's definition of Pub/Sub basically meets our needs, but it still falls short in some scenarios. Dapr adopts the CloudEvents standard, so the pub interface has no return value, which does not satisfy the requirement in our production scenarios that publishing a message return a messageID. We have submitted this requirement to the Dapr community and are waiting for feedback; given the asynchronous nature of community collaboration, we may add the return value first and then explore a better compatibility scheme with the community.
-
-### G. RPC primitive
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*i-JnSaeZbJ4AAAAAAAAAAAAAARQnAQ)
-> The RPC capability needs no introduction; it is probably the most basic need under a microservice architecture. For the definition of the RPC interface we also referred to the Dapr community's definition, and since it fully meets our needs, the interface definition is reused from Dapr directly. However, Dapr's current RPC implementations are still weak, while MOSN has matured over years of production use. This is where we boldly combine Runtime with Service Mesh: MOSN itself serves as the component implementing our RPC capability, so when Layotto receives an RPC request it hands it to MOSN for the actual data transfer. Under this scheme, routing rules can still be changed through Istio, along with traffic degradation and so on, which amounts to directly reusing Service Mesh's capabilities. This also shows that Runtime is not about discarding Service Mesh, but a step forward built on top of it.
-
-In terms of detail, to integrate better with MOSN we added a Channel abstraction to the RPC implementation, with default support for three common RPC protocols: dubbo, bolt and HTTP. If these still fail to meet a user's scenario, we also added Before/After filters that let users customize extensions such as protocol conversion.
-
-### H. Actuator
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*E_Q-T4d_bm4AAAAAAAAAAAAAARQnAQ)
-> In an actual production environment, besides the various distributed capabilities the application needs, we often need to understand its running state. Based on this need we abstracted an Actuator interface. Dapr does not currently provide this capability, so we designed it around our internal demand scenarios, exposing a full range of information about the application during its startup and running stages.
-
-Layotto divides the exposed information into two dimensions:
-
-1. Health: this module determines whether the application is healthy; for example, if a strongly depended-on component fails to initialize, the state must be unhealthy. Referring to Kubernetes, the health checks are divided into the following types:
-
-a. Readiness: indicates that the application has finished starting and can begin processing requests.
-
-b. Liveness: indicates the liveness state of the application; if it is not alive, traffic needs to be cut off.
-
-2. Info: this module is expected to expose some of the application's dependency information, such as the services the application depends on and the configurations it subscribes to, for troubleshooting.
-
-The health status exposed by Health is divided into the following three states:
-
-1. INIT: indicates that the application is still starting. If the application returns this value during a release, the PaaS platform should keep waiting for the application to finish starting.
-
-2. UP: indicates that the application has started normally. If the application returns this value, the PaaS platform can start directing traffic to it.
-
-3. DOWN: indicates that the application failed to start. If the application returns this value during a release, PaaS needs to stop the release and notify the application owner.
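As a concrete illustration, a health check of this kind might return JSON along the following lines; the field names and shape here are hypothetical, not Layotto's actual response format:

```json
{
  "status": "UP",
  "components": {
    "readiness": { "status": "UP" },
    "liveness": { "status": "UP" }
  }
}
```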
-
-At this point, Layotto's exploration in the Runtime direction is largely complete. Using gRPC as a standard interaction protocol and defining clearly semantic APIs, we have addressed the current problems of applications being bound to infrastructure and the high cost of heterogeneous language access. As the APIs standardize in the future, applications built on Layotto will be deployable on all kinds of private and public clouds and able to switch freely between Layotto and Dapr, making development more efficient.
-
-The Serverless field is also flourishing at present, with no dominant solution, so in addition to the Runtime investment described above, Layotto also makes some attempts in the Serverless direction.
-
-## Exploring WebAssembly
-
-### A. Introduction to WebAssembly
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*-ACRSpqbuJ0AAAAAAAAAAAAAARQnAQ)
-> WebAssembly, abbreviated WASM, is a binary instruction set that initially ran in browsers to solve JavaScript's performance problem. Thanks to its good security, isolation and language neutrality, people quickly began making it run outside the browser, and with the emergence of the WASI specification, a WASM runtime is all that is needed to execute a WASM file anywhere.
-
-Since WebAssembly can run outside the browser, can we use it in the Serverless field? Some attempts have already been made in this direction, but for such a solution to truly land, the first question to answer is how to address the running WebAssembly program's dependence on infrastructure.
-
-### B. Principles of WebAssembly landing
-
-MOSN currently integrates a WASM Runtime so that WASM can run on MOSN, meeting the need for custom MOSN extensions. Layotto is also built on MOSN, so we considered combining the two to implement the architecture shown below:
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*U7UDRYyBOvIAAAAAAAAAAAAAARQnAQ)
-> Developers can write code in any language they prefer, such as Go/C++/Rust, compile it into a WASM file, and run it on MOSN. When a WASM-form application needs various distributed capabilities while processing a request, it calls the standard APIs Layotto provides through local function calls, which directly solves the infrastructure dependency problem of the WASM form.
-
-Layotto currently supports running WASM built from Go and Rust. Although only demo-level functions are supported so far, that is enough for us to see the potential value of this direction.
-
-In addition, the WASM community is still at an early stage with many places to refine; we have also submitted some PRs to the community to help build the WASM technology foundation.
-
-### C. WebAssembly landing outlook
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*NzwKRY2GZPcAAAAAAAAAAAAAARQnAQ)
-> Although the use of WASM in Layotto is still experimental, we hope it will eventually become a Serverless implementation. As shown in the figure above, developers write code in various programming languages and compile it into WASM files that run on Layotto+MOSN, while the application's lifecycle is managed by infrastructure such as Kubernetes, Docker and Prometheus.
-
-## Community planning
-
-Finally, let's look at Layotto's planning in the community.
-
-### A. Layotto vs. Dapr
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*OpQTRqoMpK0AAAAAAAAAAAAAARQnAQ)
-> The table above compares Layotto's existing capabilities with Dapr's. During Layotto's development we always aim at co-building, following the principle of reusing first and redeveloping second. For capabilities under construction or planned for the future, we plan to implement them in Layotto first and then propose them to the community to merge into the standard API. In the short term the Layotto API may therefore run ahead of the community's, but given the asynchronous nature of community collaboration, the two will certainly be unified in the long term.
-
-### B. API co-construction
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*nKxcTKLp4EoAAAAAAAAAAAAAARQnAQ)
-> We have had extensive discussions in the community on how to define a standard API and how to make Layotto run on Envoy, and we will continue to push these forward.
-
-### C. Roadmap
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*iUV3Q7S3VLEAAAAAAAAAAAAAARQnAQ)
-> Layotto currently supports four major functions: RPC, Config, Pub/Sub and Actuator. In September we expect to invest in distributed locks and observability, and in December in Layotto's pluggability, so that it can run on Envoy; we also hope the WebAssembly exploration will produce further output.
-
-### D. Officially open sourced
-
-> ![](https://gw.alipayobjects.com/mdn/rms_1c90e8/afts/img/A*S6mdTqAapLQAAAAAAAAAAAAAARQnAQ)
-> The above gives a detailed introduction to the Layotto project. Most importantly, the project is officially open sourced today as a sub-project of MOSN, and we provide detailed documentation and demo examples so it can be experienced quickly.
-
-API standardization is something that needs long-term effort, and standardization means satisfying not just one or two scenarios but as many usage scenarios as possible. We therefore hope more people will join the Layotto project: describe your usage scenarios, discuss API definition proposals, build together in the community, and finally reach the ultimate goal of Write once, Run anywhere!
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-blog/options.json b/docs/i18n/en-US/docusaurus-plugin-content-blog/options.json
deleted file mode 100644
index 8337316afc..0000000000
--- a/docs/i18n/en-US/docusaurus-plugin-content-blog/options.json
+++ /dev/null
@@ -1,14 +0,0 @@
-{
- "title": {
- "message": "Blog",
- "description": "The title for the blog used in SEO"
- },
- "description": {
- "message": "Blog",
- "description": "The description for the blog used in SEO"
- },
- "sidebar.title": {
- "message": "All Post",
- "description": "The label for the left sidebar"
- }
-}
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-blog/tcpcopy_code_analyze.md b/docs/i18n/en-US/docusaurus-plugin-content-blog/tcpcopy_code_analyze.md
deleted file mode 100644
index e02beaf4ea..0000000000
--- a/docs/i18n/en-US/docusaurus-plugin-content-blog/tcpcopy_code_analyze.md
+++ /dev/null
@@ -1,170 +0,0 @@
-# Source Analysis: Layer-4 Traffic Governance, TCP Traffic Dump
-
-> Author profile:
-> Giggon, an open source community enthusiast committed to embracing open source.
->
-> Written on: April 26, 2022
-
-## Overview
-
-The purpose of this document is to analyze the implementation of TCP traffic dump.
-
-## Prerequisites
-
-The content of this document refers to the following version of the code:
-
-[https://github.com/mosn/layotto](https://github.com/mosn/layotto)
-
-Layotto 0e97e97e970dc504e0298017bd956d2841c44c0810b (main)
-
-## Source analysis
-
-### Code location: [tcpcopy CODE](https://github.com/mosn/layotto/tree/main/pkg/filter/network/tcpcopy)
-
-### model.go analysis
-
-This is the core class of tcpcopy's configuration objects
-
-```go
-type DumpConfig struct {
-	Switch     string  `json:"switch"`       // dump switch. Values: 'ON' or 'OFF'
-	Interval   int     `json:"interval"`     // dump sampling interval. Unit: second
-	Duration   int     `json:"duration"`     // single sampling duration. Unit: second
-	CpuMaxRate float64 `json:"cpu_max_rate"` // max cpu usage. When this threshold is exceeded, the dump feature will stop
-	MemMaxRate float64 `json:"mem_max_rate"` // max mem usage. When this threshold is exceeded, the dump feature will stop
-}
-
-type DumpUploadDynamicConfig struct {
-	Unique_sample_window string             // specifies the sampling window
-	BusinessType         _type.BusinessType // business type
-	Port                 string             // port
-	Binary_flow_data     []byte             // binary data
-	Portrait_data        string             // user-uploaded data
-}
-```
-
-### persistence.go analysis
-
-This is the core class handling tcpcopy's dump persistence
-
-```go
-// This method is called in OnData in tcpcopy.go
-func IsPersistence() bool {
-	// Check whether the dump switch is on
- if !strategy.DumpSwitch {
- if log.DefaultLogger.GetLogLevel() >= log.DEBUG {
- log.DefaultLogger.Debugf("%s the dump switch is %t", model.LogDumpKey, strategy.DumpSwitch)
- }
- return false
- }
-
- // Check whether it is in the sampling window
- if atomic.LoadInt32(&strategy.DumpSampleFlag) == 0 {
- if log.DefaultLogger.GetLogLevel() >= log.DEBUG {
- log.DefaultLogger.Debugf("%s the dump sample flag is %d", model.LogDumpKey, strategy.DumpSampleFlag)
- }
- return false
- }
-
- // Check whether the dump function is stopped. Obtain the system load and check whether the processor and memory exceeds the threshold of the tcpcopy. If yes, stop the dump function.
- if !strategy.IsAvaliable() {
- if log.DefaultLogger.GetLogLevel() >= log.DEBUG {
- log.DefaultLogger.Debugf("%s the system usages are beyond max rate.", model.LogDumpKey)
- }
- return false
- }
-
- return true
-}
-
-// Persist data based on configuration information
-func persistence(config *model.DumpUploadDynamicConfig) {
- // 1.Persisting binary data
- if config.Binary_flow_data != nil && config.Port != "" {
- if GetTcpcopyLogger().GetLogLevel() >= log.INFO {
- GetTcpcopyLogger().Infof("[%s][%s]% x", config.Unique_sample_window, config.Port, config.Binary_flow_data)
- }
- }
- if config.Portrait_data != "" && config.BusinessType != "" {
-		// 2. Persist user-defined data
- if GetPortraitDataLogger().GetLogLevel() >= log.INFO {
- GetPortraitDataLogger().Infof("[%s][%s][%s]%s", config.Unique_sample_window, config.BusinessType, config.Port, config.Portrait_data)
- }
-
-		// 3. Incrementally persist changes of the in-memory configuration
- buf, err := configmanager.DumpJSON()
- if err != nil {
- if log.DefaultLogger.GetLogLevel() >= log.DEBUG {
- log.DefaultLogger.Debugf("[dump] Failed to load mosn config mem.")
- }
- return
- }
- // 3.1. dump if the data changes
- tmpMd5ValueOfMemDump := common.CalculateMd5ForBytes(buf)
- memLogger := GetMemLogger()
- if tmpMd5ValueOfMemDump != md5ValueOfMemDump ||
- (tmpMd5ValueOfMemDump == md5ValueOfMemDump && common.GetFileSize(getMemConfDumpFilePath()) <= 0) {
- md5ValueOfMemDump = tmpMd5ValueOfMemDump
- if memLogger.GetLogLevel() >= log.INFO {
- memLogger.Infof("[%s]%s", config.Unique_sample_window, buf)
- }
- } else {
- if memLogger.GetLogLevel() >= log.INFO {
- memLogger.Infof("[%s]%+v", config.Unique_sample_window, incrementLog)
- }
- }
- }
-}
-```
-
-### tcpcopy.go analysis
-
-This is the core class of tcpcopy.
-
-```go
-// Register to the network filter factory
-func init() {
-	api.RegisterNetwork("tcpcopy", CreateTcpcopyFactory)
-}
-
-// Returns the tcpcopy factory
-func CreateTcpcopyFactory(cfg map[string]interface{}) (api.NetworkFilterChainFactory, error) {
-	tcpConfig := &config{}
-	// dump policy transition to static configuration
-	if stg, ok := cfg["strategy"]; ok {
-		...
-	}
-	// TODO extract some other fields
-	return &tcpcopyFactory{
-		cfg: tcpConfig,
-	}, nil
-}
-
-// Called by pkg/configmanager/parser.go to add or update the network filter factory
-func (f *tcpcopyFactory) Init(param interface{}) error {
-	// Set the listening address and port configuration
-	...
-	return nil
-}
-
-// Implements the OnData interface of ReadFilter to process the request data
-func (f *tcpcopyFactory) OnData(data types.IoBuffer) (res api.FilterStatus) {
-	// Determine whether the current request data needs a sampling dump
-	if !persistence.IsPersistence() {
-		return api.Continue
-	}
-
-	// Asynchronous sampling dump
-	config := model.NewDumpUploadDynamicConfig(strategy.DumpSampleUuid, "", f.cfg.port, data.Bytes(), "")
-	persistence.GetDumpWorkPoolInstance().Schedule(config)
-
-	return api.Continue
-}
-```
-
-Finally, let us review the overall process:
-
-1. The init() function of tcpcopy.go registers CreateTcpcopyFactory as a network filter factory.
-
-2. MOSN builds the filter chain (code location: [factory.go](https://github.com/mosn/mosn/tree/master/pkg/filter/network/proxy/factory.go)) by calling CreateFilterChain in a loop, adding every filter, including tcpcopy, to the chain structure.
-
-3. When traffic passes through MOSN, the OnData method in tcpcopy.go is invoked to perform the tcpcopy dump logic.
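The registration-and-dispatch flow described in the tcpcopy analysis can be sketched as a self-contained simplification. Note that the registry, interface, and type names below are illustrative stand-ins, not MOSN's real api package:

```go
package main

import "fmt"

// FilterStatus is a stand-in for MOSN's api.FilterStatus.
type FilterStatus int

const Continue FilterStatus = 0

// ReadFilter is a minimal stand-in for MOSN's ReadFilter interface.
type ReadFilter interface {
	OnData(data []byte) FilterStatus
}

// factoryRegistry plays the role of the table behind api.RegisterNetwork.
var factoryRegistry = map[string]func(cfg map[string]interface{}) (ReadFilter, error){}

// RegisterNetwork records a named filter factory (step 1 in the text).
func RegisterNetwork(name string, factory func(cfg map[string]interface{}) (ReadFilter, error)) {
	factoryRegistry[name] = factory
}

type tcpcopyFilter struct{ sample bool }

// OnData mimics tcpcopy's OnData: dump only when sampling applies, then continue.
func (f *tcpcopyFilter) OnData(data []byte) FilterStatus {
	if f.sample {
		fmt.Printf("dumped %d bytes\n", len(data))
	}
	return Continue
}

// Step 1: registration at init time, as tcpcopy.go does.
func init() {
	RegisterNetwork("tcpcopy", func(cfg map[string]interface{}) (ReadFilter, error) {
		_, ok := cfg["strategy"]
		return &tcpcopyFilter{sample: ok}, nil
	})
}

func main() {
	// Step 2: build the filter chain from the registered factories.
	f, err := factoryRegistry["tcpcopy"](map[string]interface{}{"strategy": "sampling"})
	if err != nil {
		panic(err)
	}
	chain := []ReadFilter{f}

	// Step 3: traffic flowing through triggers each filter's OnData.
	for _, filter := range chain {
		filter.OnData([]byte("hello"))
	}
}
```

Because every filter returns Continue, a dumping filter such as tcpcopy can observe traffic without interrupting the rest of the chain.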
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current.json b/docs/i18n/en-US/docusaurus-plugin-content-docs/current.json
deleted file mode 100644
index 86fe27e7df..0000000000
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current.json
+++ /dev/null
@@ -1,487 +0,0 @@
-{
- "version.label": {
- "message": "Next",
- "description": "The label for version current"
- },
-
- "sidebar.mySidebar.category.快速开始": {
- "message": "Quick Start",
- "description": "The label for category 快速开始 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.使用Configuration API": {
- "message": "Use Configuration API",
- "description": "The label for category 使用Configuration API in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.API插件": {
- "message": "API Plugin",
- "description": "The label for category API插件 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.作为 Istio 的数据面": {
- "message": "As the data plane for Istio",
- "description": "The label for category 作为 Istio 的数据面 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.在四层网络进行流量干预": {
- "message": "Performing traffic intervention at the four-layer network",
- "description": "The label for category 在四层网络进行流量干预 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.在七层网络进行流量干预": {
- "message": "Performing traffic intervention at the seven-layer network",
- "description": "The label for category 在七层网络进行流量干预 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.可观测性": {
- "message": "Observability",
- "description": "The label for category 可观测性 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.用户手册": {
- "message": "User Manual",
- "description": "The label for category 用户手册 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.功能介绍": {
- "message": "Feature Introduction",
- "description": "The label for category 功能介绍 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.可扩展性": {
- "message": "Scalability",
- "description": "The label for category 可扩展性 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.运维手册": {
- "message": "Operations Manual",
- "description": "The label for category 运维手册 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.如何配置 Layotto": {
- "message": "How To Config Layotto",
- "description": "The label for category 如何配置 Layotto in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.组件配置说明": {
- "message": "Component Config Instruction",
- "description": "The label for category 组件配置说明 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.State": {
- "message": "State",
- "description": "The label for category State in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.Pub/Sub": {
- "message": "Pub/Sub",
- "description": "The label for category Pub/Sub in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.Distributed Lock": {
- "message": "Distributed Lock",
- "description": "The label for category Distributed Lock in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.Sequencer": {
- "message": "Sequencer",
- "description": "The label for category Sequencer in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.设计文档": {
- "message": "Design Doc",
- "description": "The label for category 设计文档 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.贡献指南": {
- "message": "Contribute Guide",
- "description": "The label for category 贡献指南 in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.想要贡献文档?": {
- "message": "Want to contribute doc?",
- "description": "The label for category 想要贡献文档? in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.想要修改proto文件或API定义?": {
- "message": "Want to modify proto files or API definitions?",
- "description": "The label for category 想要修改proto文件或API定义? in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.如何给 issue 打 label": {
- "message": "How to label issue",
- "description": "The label for category 如何给 issue 打 label in sidebar mySidebar"
- },
- "sidebar.mySidebar.category.社区": {
- "message": "Community",
- "description": "The label for category 社区 in sidebar mySidebar"
- },
- "sidebar.mySidebar.link.java sdk": {
- "message": "java sdk",
- "description": "The label for link java sdk in sidebar mySidebar, linking to https://github.com/layotto/java-sdk"
- },
- "sidebar.mySidebar.link..net sdk": {
- "message": ".net sdk",
- "description": "The label for link .net sdk in sidebar mySidebar, linking to https://github.com/layotto/dotnet-sdk"
- },
- "sidebar.mySidebar.link.js sdk": {
- "message": "js sdk",
- "description": "The label for link js sdk in sidebar mySidebar, linking to https://github.com/layotto/js-sdk"
- },
- "sidebar.mySidebar.doc.首页": {
- "message": "Home Page",
- "description": "The label for the doc item 首页 in sidebar mySidebar, linking to the doc README"
- },
- "sidebar.mySidebar.doc.使用State API": {
- "message": "Use State API",
- "description": "The label for the doc item 使用State API in sidebar mySidebar, linking to the doc start/state/start"
- },
- "sidebar.mySidebar.doc.使用Apollo配置中心": {
- "message": "Use Apollo Configuration Center",
- "description": "The label for the doc item 使用Apollo配置中心 in sidebar mySidebar, linking to the doc start/configuration/start-apollo"
- },
- "sidebar.mySidebar.doc.使用Etcd配置中心": {
- "message": "Use Etcd Configuration Center",
- "description": "The label for the doc item 使用Etcd配置中心 in sidebar mySidebar, linking to the doc start/configuration/start"
- },
- "sidebar.mySidebar.doc.使用Nacos配置中心": {
- "message": "Use Nacos Configuration Center",
- "description": "The label for the doc item 使用Nacos配置中心 in sidebar mySidebar, linking to the doc start/configuration/start-nacos"
- },
- "sidebar.mySidebar.doc.发布、订阅消息": {
- "message": "Publish and Subscribe Messages",
- "description": "The label for the doc item 发布、订阅消息 in sidebar mySidebar, linking to the doc start/pubsub/start"
- },
- "sidebar.mySidebar.doc.(建设中) 使用 DelayQueue API": {
- "message": "(Under Construction) Use DelayQueue API",
- "description": "The label for the doc item (建设中) 使用 DelayQueue API in sidebar mySidebar, linking to the doc start/delay_queue/start"
- },
- "sidebar.mySidebar.doc.使用分布式锁 API": {
- "message": "Use Distributed Lock API",
- "description": "The label for the doc item 使用分布式锁 API in sidebar mySidebar, linking to the doc start/lock/start"
- },
- "sidebar.mySidebar.doc.使用Sequencer API生成分布式自增id": {
- "message": "Use Sequencer API to Generate Distributed Auto-Increment IDs",
- "description": "The label for the doc item 使用Sequencer API生成分布式自增id in sidebar mySidebar, linking to the doc start/sequencer/start"
- },
- "sidebar.mySidebar.doc.使用 Secret API": {
- "message": "Use Secret API",
- "description": "The label for the doc item 使用 Secret API in sidebar mySidebar, linking to the doc start/secret/start"
- },
- "sidebar.mySidebar.doc.进行RPC调用": {
- "message": "Make RPC Calls",
- "description": "The label for the doc item 进行RPC调用 in sidebar mySidebar, linking to the doc start/rpc/helloworld"
- },
- "sidebar.mySidebar.doc.使用File API": {
- "message": "Use File API",
- "description": "The label for the doc item 使用File API in sidebar mySidebar, linking to the doc start/file/minio"
- },
- "sidebar.mySidebar.doc.使用 OSS API": {
- "message": "Use OSS API",
- "description": "The label for the doc item 使用 OSS API in sidebar mySidebar, linking to the doc start/oss/oss"
- },
- "sidebar.mySidebar.doc.使用UDS通信": {
- "message": "Use UDS Communication",
- "description": "The label for the doc item 使用UDS通信 in sidebar mySidebar, linking to the doc start/uds/start"
- },
- "sidebar.mySidebar.doc.(建设中)使用 sms API": {
- "message": "(Under Construction) Use sms API",
- "description": "The label for the doc item (建设中)使用 sms API in sidebar mySidebar, linking to the doc start/sms/start"
- },
- "sidebar.mySidebar.doc.(建设中)使用 cryption API": {
- "message": "(Under Construction) Use cryption API",
- "description": "The label for the doc item (建设中)使用 cryption API in sidebar mySidebar, linking to the doc start/cryption/start"
- },
- "sidebar.mySidebar.doc.(建设中)使用 phone API": {
- "message": "(Under Construction) Use phone API",
- "description": "The label for the doc item (建设中)使用 phone API in sidebar mySidebar, linking to the doc start/phone/start"
- },
- "sidebar.mySidebar.doc.(建设中)使用 email API": {
- "message": "(Under Construction) Use email API",
- "description": "The label for the doc item (建设中)使用 email API in sidebar mySidebar, linking to the doc start/email/start"
- },
- "sidebar.mySidebar.doc.使用 lifecycle API": {
- "message": "Use lifecycle API",
- "description": "The label for the doc item 使用 lifecycle API in sidebar mySidebar, linking to the doc start/lifecycle/start"
- },
- "sidebar.mySidebar.doc.注册您自己的API": {
- "message": "Register Your Own API",
- "description": "The label for the doc item 注册您自己的API in sidebar mySidebar, linking to the doc start/api_plugin/helloworld"
- },
- "sidebar.mySidebar.doc.自动生成 API 插件": {
- "message": "Generate API Plugins Automatically",
- "description": "The label for the doc item 自动生成 API 插件 in sidebar mySidebar, linking to the doc start/api_plugin/generate"
- },
- "sidebar.mySidebar.doc.集成 Istio 1.10.6 演示": {
- "message": "Istio 1.10.6 Integration Demo",
- "description": "The label for the doc item 集成 Istio 1.10.6 演示 in sidebar mySidebar, linking to the doc start/istio/README"
- },
- "sidebar.mySidebar.doc.集成 Istio 1.5.x 演示": {
- "message": "Istio 1.5.x Integration Demo",
- "description": "The label for the doc item 集成 Istio 1.5.x 演示 in sidebar mySidebar, linking to the doc start/istio/start"
- },
- "sidebar.mySidebar.doc.Dump TCP 流量": {
- "message": "Dump TCP Traffic",
- "description": "The label for the doc item Dump TCP 流量 in sidebar mySidebar, linking to the doc start/network_filter/tcpcopy"
- },
- "sidebar.mySidebar.doc.方法级别限流": {
- "message": "Method-Level Rate Limiting",
- "description": "The label for the doc item 方法级别限流 in sidebar mySidebar, linking to the doc start/stream_filter/flow_control"
- },
- "sidebar.mySidebar.doc.健康检查、查询运行时元数据": {
- "message": "Health Check and Runtime Metadata Query",
- "description": "The label for the doc item 健康检查、查询运行时元数据 in sidebar mySidebar, linking to the doc start/actuator/start"
- },
- "sidebar.mySidebar.doc.Trace, Metrics": {
- "message": "Trace, Metrics",
- "description": "The label for the doc item Trace, Metrics in sidebar mySidebar, linking to the doc start/trace/trace"
- },
- "sidebar.mySidebar.doc.Trace 接入 Skywalking": {
- "message": "Trace Integration with Skywalking",
- "description": "The label for the doc item Trace 接入 Skywalking in sidebar mySidebar, linking to the doc start/trace/skywalking"
- },
- "sidebar.mySidebar.doc.Trace 接入 Zipkin": {
- "message": "Trace Integration with Zipkin",
- "description": "The label for the doc item Trace 接入 Zipkin in sidebar mySidebar, linking to the doc start/trace/zipkin"
- },
- "sidebar.mySidebar.doc.Trace 接入 Jaeger": {
- "message": "Trace Integration with Jaeger",
- "description": "The label for the doc item Trace 接入 Jaeger in sidebar mySidebar, linking to the doc start/trace/jaeger"
- },
- "sidebar.mySidebar.doc.Metrics 接入 Prometheus": {
- "message": "Metrics Integration with Prometheus",
- "description": "The label for the doc item Metrics 接入 Prometheus in sidebar mySidebar, linking to the doc start/trace/prometheus"
- },
- "sidebar.mySidebar.doc.将业务逻辑通过 WASM 下沉进sidecar": {
- "message": "Sink Business Logic into the Sidecar via WASM",
- "description": "The label for the doc item 将业务逻辑通过 WASM 下沉进sidecar in sidebar mySidebar, linking to the doc start/wasm/start"
- },
- "sidebar.mySidebar.doc.基于 WASM 跟 Runtime 实现的 Faas 模型": {
- "message": "FaaS Model Based on WASM and Runtime",
- "description": "The label for the doc item 基于 WASM 跟 Runtime 实现的 Faas 模型 in sidebar mySidebar, linking to the doc start/faas/start"
- },
- "sidebar.mySidebar.doc.线上实验室": {
- "message": "Online Lab",
- "description": "The label for the doc item 线上实验室 in sidebar mySidebar, linking to the doc start/lab"
- },
- "sidebar.mySidebar.doc.File API": {
- "message": "File API",
- "description": "The label for the doc item File API in sidebar mySidebar, linking to the doc building_blocks/file/file"
- },
- "sidebar.mySidebar.doc.Actuator API": {
- "message": "Actuator API",
- "description": "The label for the doc item Actuator API in sidebar mySidebar, linking to the doc building_blocks/actuator/actuator"
- },
- "sidebar.mySidebar.doc.State API": {
- "message": "State API",
- "description": "The label for the doc item State API in sidebar mySidebar, linking to the doc building_blocks/state/reference"
- },
- "sidebar.mySidebar.doc.Sequencer API": {
- "message": "Sequencer API",
- "description": "The label for the doc item Sequencer API in sidebar mySidebar, linking to the doc building_blocks/sequencer/reference"
- },
- "sidebar.mySidebar.doc.Distributed Lock API": {
- "message": "Distributed Lock API",
- "description": "The label for the doc item Distributed Lock API in sidebar mySidebar, linking to the doc building_blocks/lock/reference"
- },
- "sidebar.mySidebar.doc.Pub/Sub API": {
- "message": "Pub/Sub API",
- "description": "The label for the doc item Pub/Sub API in sidebar mySidebar, linking to the doc building_blocks/pubsub/reference"
- },
- "sidebar.mySidebar.doc.RPC API": {
- "message": "RPC API",
- "description": "The label for the doc item RPC API in sidebar mySidebar, linking to the doc building_blocks/rpc/reference"
- },
- "sidebar.mySidebar.doc.Configuration API": {
- "message": "Configuration API",
- "description": "The label for the doc item Configuration API in sidebar mySidebar, linking to the doc building_blocks/configuration/reference"
- },
- "sidebar.mySidebar.doc.API插件": {
- "message": "API Plugin",
- "description": "The label for the doc item API插件 in sidebar mySidebar, linking to the doc design/api_plugin/design"
- },
- "sidebar.mySidebar.doc.pluggable component 组件": {
- "message": "Pluggable Component",
- "description": "The label for the doc item pluggable component 组件 in sidebar mySidebar, linking to the doc design/pluggable/usage"
- },
- "sidebar.mySidebar.doc.gRPC API 接口文档": {
- "message": "gRPC API Reference",
- "description": "The label for the doc item gRPC API 接口文档 in sidebar mySidebar, linking to the doc api_reference/README"
- },
- "sidebar.mySidebar.doc.go sdk": {
- "message": "go sdk",
- "description": "The label for the doc item go sdk in sidebar mySidebar, linking to the doc sdk_reference/go/start"
- },
- "sidebar.mySidebar.doc.Layotto 配置文件介绍": {
- "message": "Introduction to Layotto Configuration Files",
- "description": "The label for the doc item Layotto 配置文件介绍 in sidebar mySidebar, linking to the doc configuration/overview"
- },
- "sidebar.mySidebar.doc.Redis": {
- "message": "Redis",
- "description": "The label for the doc item Redis in sidebar mySidebar, linking to the doc component_specs/sequencer/redis"
- },
- "sidebar.mySidebar.doc.其他组件": {
- "message": "Other Components",
- "description": "The label for the doc item 其他组件 in sidebar mySidebar, linking to the doc component_specs/pubsub/others"
- },
- "sidebar.mySidebar.doc.Etcd": {
- "message": "Etcd",
- "description": "The label for the doc item Etcd in sidebar mySidebar, linking to the doc component_specs/sequencer/etcd"
- },
- "sidebar.mySidebar.doc.Zookeeper": {
- "message": "Zookeeper",
- "description": "The label for the doc item Zookeeper in sidebar mySidebar, linking to the doc component_specs/sequencer/zookeeper"
- },
- "sidebar.mySidebar.doc.Consul": {
- "message": "Consul",
- "description": "The label for the doc item Consul in sidebar mySidebar, linking to the doc component_specs/lock/consul"
- },
- "sidebar.mySidebar.doc.MongoDB": {
- "message": "MongoDB",
- "description": "The label for the doc item MongoDB in sidebar mySidebar, linking to the doc component_specs/sequencer/mongo"
- },
- "sidebar.mySidebar.doc.Configuration": {
- "message": "Configuration",
- "description": "The label for the doc item Configuration in sidebar mySidebar, linking to the doc component_specs/configuration/etcd"
- },
- "sidebar.mySidebar.doc.File": {
- "message": "File",
- "description": "The label for the doc item File in sidebar mySidebar, linking to the doc component_specs/file/oss"
- },
- "sidebar.mySidebar.doc.Mysql": {
- "message": "Mysql",
- "description": "The label for the doc item Mysql in sidebar mySidebar, linking to the doc component_specs/sequencer/mysql"
- },
- "sidebar.mySidebar.doc.Snowflake": {
- "message": "Snowflake",
- "description": "The label for the doc item Snowflake in sidebar mySidebar, linking to the doc component_specs/sequencer/snowflake"
- },
- "sidebar.mySidebar.doc.Secret Store": {
- "message": "Secret Store",
- "description": "The label for the doc item Secret Store in sidebar mySidebar, linking to the doc component_specs/secret/common"
- },
- "sidebar.mySidebar.doc.自定义组件": {
- "message": "Custom Components",
- "description": "The label for the doc item 自定义组件 in sidebar mySidebar, linking to the doc component_specs/custom/common"
- },
- "sidebar.mySidebar.doc.如何部署、升级 Layotto": {
- "message": "How to Deploy and Upgrade Layotto",
- "description": "The label for the doc item 如何部署、升级 Layotto in sidebar mySidebar, linking to the doc operation/README"
- },
- "sidebar.mySidebar.doc.Layotto sidecar injector": {
- "message": "Layotto sidecar injector",
- "description": "The label for the doc item Layotto sidecar injector in sidebar mySidebar, linking to the doc operation/sidecar_injector"
- },
- "sidebar.mySidebar.doc.如何本地开发、本地调试": {
- "message": "How to Develop and Debug Locally",
- "description": "The label for the doc item 如何本地开发、本地调试 in sidebar mySidebar, linking to the doc operation/local"
- },
- "sidebar.mySidebar.doc.动态配置下发、组件热重载": {
- "message": "Dynamic Configuration Delivery and Component Hot Reload",
- "description": "The label for the doc item 动态配置下发、组件热重载 in sidebar mySidebar, linking to the doc design/lifecycle/apply_configuration"
- },
- "sidebar.mySidebar.doc.Actuator设计文档": {
- "message": "Actuator Design Doc",
- "description": "The label for the doc item Actuator设计文档 in sidebar mySidebar, linking to the doc design/actuator/actuator-design-doc"
- },
- "sidebar.mySidebar.doc.gRPC框架设计文档": {
- "message": "gRPC Framework Design Doc",
- "description": "The label for the doc item gRPC框架设计文档 in sidebar mySidebar, linking to the doc design/actuator/grpc-design-doc"
- },
- "sidebar.mySidebar.doc.Configuration API with Apollo": {
- "message": "Configuration API with Apollo",
- "description": "The label for the doc item Configuration API with Apollo in sidebar mySidebar, linking to the doc design/configuration/configuration-api-with-apollo"
- },
- "sidebar.mySidebar.doc.Pub/Sub API以及与dapr component的兼容性": {
- "message": "Pub/Sub API and Compatibility with Dapr Components",
- "description": "The label for the doc item Pub/Sub API以及与dapr component的兼容性 in sidebar mySidebar, linking to the doc design/pubsub/pubsub-api-and-compability-with-dapr-component"
- },
- "sidebar.mySidebar.doc.RPC设计文档": {
- "message": "RPC Design Doc",
- "description": "The label for the doc item RPC设计文档 in sidebar mySidebar, linking to the doc design/rpc/rpc_design_document"
- },
- "sidebar.mySidebar.doc.分布式锁API设计文档": {
- "message": "Distributed Lock API Design Doc",
- "description": "The label for the doc item 分布式锁API设计文档 in sidebar mySidebar, linking to the doc design/lock/lock-api-design"
- },
- "sidebar.mySidebar.doc.Sequencer API设计文档": {
- "message": "Sequencer API Design Doc",
- "description": "The label for the doc item Sequencer API设计文档 in sidebar mySidebar, linking to the doc design/sequencer/design"
- },
- "sidebar.mySidebar.doc.File API设计文档": {
- "message": "File API Design Doc",
- "description": "The label for the doc item File API设计文档 in sidebar mySidebar, linking to the doc design/file/file-design"
- },
- "sidebar.mySidebar.doc.FaaS 设计文档": {
- "message": "FaaS Design Doc",
- "description": "The label for the doc item FaaS 设计文档 in sidebar mySidebar, linking to the doc design/faas/faas-poc-design"
- },
- "sidebar.mySidebar.doc.支持Dapr API": {
- "message": "Support Dapr API",
- "description": "The label for the doc item 支持Dapr API in sidebar mySidebar, linking to the doc design/api_plugin/dapr_api"
- },
- "sidebar.mySidebar.doc.OSS API设计文档": {
- "message": "OSS API Design Doc",
- "description": "The label for the doc item OSS API设计文档 in sidebar mySidebar, linking to the doc design/oss/design"
- },
- "sidebar.mySidebar.doc.pluggable component 设计文档": {
- "message": "Pluggable Component Design Doc",
- "description": "The label for the doc item pluggable component 设计文档 in sidebar mySidebar, linking to the doc design/pluggable/design"
- },
- "sidebar.mySidebar.doc.Layotto 贡献指南": {
- "message": "Layotto Contribution Guide",
- "description": "The label for the doc item Layotto 贡献指南 in sidebar mySidebar, linking to the doc development/CONTRIBUTING"
- },
- "sidebar.mySidebar.doc.新手攻略:从零开始成为 Layotto 贡献者": {
- "message": "Beginner's Guide: Become a Layotto Contributor from Scratch",
- "description": "The label for the doc item 新手攻略:从零开始成为 Layotto 贡献者 in sidebar mySidebar, linking to the doc development/start-from-zero"
- },
- "sidebar.mySidebar.doc.文档贡献指南": {
- "message": "Documentation Contribution Guide",
- "description": "The label for the doc item 文档贡献指南 in sidebar mySidebar, linking to the doc development/contributing-doc"
- },
- "sidebar.mySidebar.doc.使用工具自动测试 Quickstart 文档": {
- "message": "Test Quickstart Docs Automatically with Tools",
- "description": "The label for the doc item 使用工具自动测试 Quickstart 文档 in sidebar mySidebar, linking to the doc development/test-quickstart"
- },
- "sidebar.mySidebar.doc.想要开发新的组件?": {
- "message": "Want to Develop a New Component?",
- "description": "The label for the doc item 想要开发新的组件? in sidebar mySidebar, linking to the doc development/developing-component"
- },
- "sidebar.mySidebar.doc.组件引用开发指南": {
- "message": "Component Reference Development Guide",
- "description": "The label for the doc item 组件引用开发指南 in sidebar mySidebar, linking to the doc development/component_ref/component_ref"
- },
- "sidebar.mySidebar.doc.如何基于proto文件生成代码、接口文档": {
- "message": "How to Generate Code and API Docs from Proto Files",
- "description": "The label for the doc item 如何基于proto文件生成代码、接口文档 in sidebar mySidebar, linking to the doc api_reference/how_to_generate_api_doc"
- },
- "sidebar.mySidebar.doc.proto文件注释规范": {
- "message": "Comment Specification for Proto Files",
- "description": "The label for the doc item proto文件注释规范 in sidebar mySidebar, linking to the doc api_reference/comment_spec_of_proto"
- },
- "sidebar.mySidebar.doc.新增API时的开发规范": {
- "message": "Development Specification for Adding New APIs",
- "description": "The label for the doc item 新增API时的开发规范 in sidebar mySidebar, linking to the doc development/developing-api"
- },
- "sidebar.mySidebar.doc.Layotto 四大 Github Workflows 说明": {
- "message": "Overview of Layotto's Four Github Workflows",
- "description": "The label for the doc item Layotto 四大 Github Workflows 说明 in sidebar mySidebar, linking to the doc development/github-workflows"
- },
- "sidebar.mySidebar.doc.Layotto 命令行工具指南": {
- "message": "Layotto Command-Line Tool Guide",
- "description": "The label for the doc item Layotto 命令行工具指南 in sidebar mySidebar, linking to the doc development/commands"
- },
- "sidebar.mySidebar.doc.新手任务 (good first issue) 的 label 规范": {
- "message": "Label Specification for Good First Issues",
- "description": "The label for the doc item 新手任务 (good first issue) 的 label 规范 in sidebar mySidebar, linking to the doc development/label-spec"
- },
- "sidebar.mySidebar.doc.发布手册": {
- "message": "Release Guide",
- "description": "The label for the doc item 发布手册 in sidebar mySidebar, linking to the doc development/release-guide"
- },
- "sidebar.mySidebar.doc.待解决的问题": {
- "message": "Problems to Solve",
- "description": "The label for the doc item 待解决的问题 in sidebar mySidebar, linking to the doc development/problems-to-solve"
- },
- "sidebar.mySidebar.doc.社区会议": {
- "message": "Community Meetings",
- "description": "The label for the doc item 社区会议 in sidebar mySidebar, linking to the doc community/meeting"
- },
- "sidebar.mySidebar.doc.SOFAStack & MOSN 社区角色说明": {
- "message": "SOFAStack & MOSN Community Roles",
- "description": "The label for the doc item SOFAStack & MOSN 社区角色说明 in sidebar mySidebar, linking to the doc community/governance"
- },
- "sidebar.mySidebar.doc.Layotto社区晋升规则": {
- "message": "Layotto Community Promotion Rules",
- "description": "The label for the doc item Layotto社区晋升规则 in sidebar mySidebar, linking to the doc community/promote"
- },
- "sidebar.mySidebar.doc.Layotto社区成员": {
- "message": "Layotto Community Members",
- "description": "The label for the doc item Layotto社区成员 in sidebar mySidebar, linking to the doc community/people"
- },
- "sidebar.mySidebar.doc.视频合集": {
- "message": "Video Compilation",
- "description": "The label for the doc item 视频合集 in sidebar mySidebar, linking to the doc video/README"
- }
-}
diff --git a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/README.md b/docs/i18n/en-US/docusaurus-plugin-content-docs/current/README.md
deleted file mode 100644
index 6552476c09..0000000000
--- a/docs/i18n/en-US/docusaurus-plugin-content-docs/current/README.md
+++ /dev/null
@@ -1,175 +0,0 @@
-
-# Layotto (L8): To be the next layer of OSI layer 7
-
-
-[![Layotto Env Pipeline 🌊](https://github.com/mosn/layotto/actions/workflows/proto-checker.yml/badge.svg)](https://github.com/mosn/layotto/actions/workflows/proto-checker.yml)
-[![Layotto Dev Pipeline 🌊](https://github.com/mosn/layotto/actions/workflows/layotto-ci.yml/badge.svg)](https://github.com/mosn/layotto/actions/workflows/layotto-ci.yml)
-
-[![GoDoc](https://godoc.org/mosn.io/layotto?status.svg)](https://godoc.org/mosn.io/layotto)
-[![Go Report Card](https://goreportcard.com/badge/github.com/mosn/layotto)](https://goreportcard.com/report/mosn.io/layotto)
-[![codecov](https://codecov.io/gh/mosn/layotto/branch/main/graph/badge.svg?token=10RxwSV6Sz)](https://codecov.io/gh/mosn/layotto)
-[![Average time to resolve an issue](http://isitmaintained.com/badge/resolution/mosn/layotto.svg)](http://isitmaintained.com/project/mosn/layotto "Average time to resolve an issue")
-
-
-Layotto (/leɪˈɒtəʊ/) is an application runtime developed in Golang, which provides various distributed capabilities for applications, such as state management, configuration management, and event pub/sub, to simplify application development.
-
-Layotto is built on the open-source data plane [MOSN](https://github.com/mosn/mosn). In addition to providing distributed building blocks, Layotto can also serve as the data plane of a Service Mesh and has the ability to control traffic.
-
-## Motivation
-
-Layotto aims to combine [Multi-Runtime](https://www.infoq.com/articles/multi-runtime-microservice-architecture/) with Service Mesh into one sidecar. No matter which product you are using as the Service Mesh data plane (e.g. MOSN, Envoy or any other product), you can always attach Layotto to it and add Multi-Runtime capabilities without adding new sidecars.
-
-For example, by adding Runtime capabilities to MOSN, a Layotto process can both [serve as the data plane of istio](start/istio/) and provide various Runtime APIs (such as Configuration API, Pub/Sub API, etc.).
-
-In addition, we were surprised to find that a sidecar can do much more than that. We are trying to make Layotto even the runtime container of FaaS (Function as a service) with the magic power of [WebAssembly](https://en.wikipedia.org/wiki/WebAssembly) .
-
-## Features
-
-- Service Communication
-- Service Governance, such as traffic hijacking and observation, service rate limiting, etc.
-- [As the data plane of istio](start/istio/)
-- Configuration management
-- State management
-- Event publish and subscribe
-- Health check, query runtime metadata
-- Multilingual programming based on WASM
-
-## Project Architecture
-
-As shown in the architecture diagram below, Layotto uses the open source MOSN as the base to provide network layer management capabilities while providing distributed capabilities. The business logic can directly interact with Layotto through a lightweight SDK without paying attention to the specific back-end infrastructure.
-
-Layotto provides SDKs in various languages. The SDKs interact with Layotto through gRPC. Application developers only need to specify their own infrastructure type through the [configuration file](https://github.com/mosn/layotto/blob/main/configs/runtime_config.json) provided by Layotto. No coding changes are required, which greatly improves the portability of applications.
-
-
-
-## Quickstarts
-
-### Get started with Layotto
-
-You can try the quickstart demos below to get started with Layotto. In addition, you can experience the [online laboratory](start/lab.md).
-
-### API
-
-| API | status | quick start | desc |
-| -------------- | :----: | :-------------------------------------------------------------------: | -------------------------------------------------------------- |
-| State | ✅ | [demo](https://mosn.io/layotto/#/en/start/state/start) | Write/Query the data of the Key/Value model |
-| Pub/Sub | ✅ | [demo](https://mosn.io/layotto/#/en/start/pubsub/start) | Publish/Subscribe message through various Message Queue |
-| Service Invoke | ✅ | [demo](https://mosn.io/layotto/#/en/start/rpc/helloworld) | Call Service through MOSN (another istio data plane) |
-| Config | ✅ | [demo](https://mosn.io/layotto/#/en/start/configuration/start-apollo) | Write/Query/Subscribe the config through various Config Center |
-| Lock | ✅ | [demo](https://mosn.io/layotto/#/en/start/lock/start) | Distributed lock API |
-| Sequencer | ✅ | [demo](https://mosn.io/layotto/#/en/start/sequencer/start) | Generate distributed unique and incremental ID |
-| File | ✅ | TODO | File API implementation |
-| Binding | ✅ | TODO | Transparent data transmission API |
-
-
-### Service Mesh
-
-| feature | status | quick start | desc |
-| ------- | :----: |:----------------------------:| -------------------------- |
-| istio | ✅ | [demo](start/istio/start.md) | As the data plane of istio |
-
-### Extensibility
-
-| feature | status | quick start | desc |
-| ---------- | :----: | :--------------------------------------------------------------: | -------------------------- |
-| API plugin | ✅ | [demo](https://mosn.io/layotto/#/en/start/api_plugin/helloworld) | You can add your own API ! |
-
-### Actuator
-
-| feature | status | quick start | desc |
-| -------------- | :----: | :-------------------------------------------------------: | --------------------------------------------------- |
-| Health Check | ✅ | [demo](https://mosn.io/layotto/#/en/start/actuator/start) | Query health state of app and components in Layotto |
-| Metadata Query | ✅ | [demo](https://mosn.io/layotto/#/en/start/actuator/start) | Query metadata in Layotto/app |
-
-### Traffic Control
-
-| feature | status | quick start | desc |
-| ------------ | :----: | :-------------------------------------------------------------------: | --------------------------------------------------------------- |
-| TCP Copy | ✅ | [demo](https://mosn.io/layotto/#/en/start/network_filter/tcpcopy) | Dump the tcp traffic received by Layotto into local file system |
-| Flow Control | ✅ | [demo](https://mosn.io/layotto/#/en/start/stream_filter/flow_control) | limit access to the APIs provided by Layotto |
-
-### Write your business logic using WASM
-
-| feature | status | quick start | desc |
-| -------------- | :----: | :---------------------------------------------------: | -------------------------------------------------------------------- |
-| Go (TinyGo) | ✅ | [demo](https://mosn.io/layotto/#/en/start/wasm/start) | Compile Code written by TinyGo to \*.wasm and run in Layotto |
-| Rust | ✅ | [demo](https://mosn.io/layotto/#/en/start/wasm/start) | Compile Code written by Rust to \*.wasm and run in Layotto |
-| AssemblyScript | ✅ | [demo](https://mosn.io/layotto/#/en/start/wasm/start) | Compile Code written by AssemblyScript to \*.wasm and run in Layotto |
-
-### As a FaaS(Serverless) runtime (Layotto + WebAssembly + k8s)
-
-| feature | status | quick start | desc |
-| -------------- | :----: | :---------------------------------------------------: | ------------------------------------------------------------------------------------------ |
-| Go (TinyGo) | ✅ | [demo](https://mosn.io/layotto/#/en/start/faas/start) | Compile Code written by TinyGo to \*.wasm and run in Layotto And Scheduled by k8s. |
-| Rust | ✅ | [demo](https://mosn.io/layotto/#/en/start/faas/start) | Compile Code written by Rust to \*.wasm and run in Layotto And Scheduled by k8s. |
-| AssemblyScript | ✅ | [demo](https://mosn.io/layotto/#/en/start/faas/start) | Compile Code written by AssemblyScript to \*.wasm and run in Layotto And Scheduled by k8s. |
-
-## Presentations
-
-- [Layotto - A new chapter of Service Mesh and Application Runtime](https://www.youtube.com/watch?v=5v8gTrFUDk8)
-- [WebAssembly + Application Runtime = A New Era of FaaS?](https://www.youtube.com/watch?v=g01CJ4S9Qao)
-
-## Landscapes
-
-
+
+## {{.LongName}}
+{{.Description}}
+
+{{if .HasFields}}
+| Field | Type | Label | Description |
+| ----- | ---- | ----- | ----------- |
+{{range .Fields -}}
+ | {{.Name}} | [{{.LongType}}](#{{.FullType}}) | {{.Label}} | {{if (index .Options "deprecated"|default false)}}**Deprecated.** {{end}}{{nobr .Description}}{{if .DefaultValue}} Default: {{.DefaultValue}}{{end}} |
+{{end}}
+{{end}}
+
+{{if .HasExtensions}}
+| Extension | Type | Base | Number | Description |
+| --------- | ---- | ---- | ------ | ----------- |
+{{range .Extensions -}}
+ | {{.Name}} | {{.LongType}} | {{.ContainingLongType}} | {{.Number}} | {{nobr .Description}}{{if .DefaultValue}} Default: {{.DefaultValue}}{{end}} |
+{{end}}
+{{end}}
+
+{{end}}
+
+{{range .Enums}}
+
+
+## {{.LongName}}
+{{.Description}}
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+{{range .Values -}}
+ | {{.Name}} | {{.Number}} | {{nobr .Description}} |
+{{end}}
+
+{{end}}
+
+{{if .HasExtensions}}
+
+
+## File-level Extensions
+| Extension | Type | Base | Number | Description |
+| --------- | ---- | ---- | ------ | ----------- |
+{{range .Extensions -}}
+ | {{.Name}} | {{.LongType}} | {{.ContainingLongType}} | {{.Number}} | {{nobr .Description}}{{if .DefaultValue}} Default: `{{.DefaultValue}}`{{end}} |
+{{end}}
+{{end}}
+
+{{end}}
\ No newline at end of file
diff --git a/docs/template/quickstart.tmpl b/docs/template/quickstart.tmpl
new file mode 100644
index 0000000000..3f05417993
--- /dev/null
+++ b/docs/template/quickstart.tmpl
@@ -0,0 +1,125 @@
+{{range .Files}} {{$file_name := .Name}} {{if .HasServices}} {{range .Services}}
+# {{.Name}} API demo
+
+This example shows how to invoke the Layotto {{.Name}} API.
+
+## What is the {{.Name}} API used for?
+
+{{.Description}}
+
+## step 1. Deploy Layotto
+
+### **With Docker**
+You can start Layotto with Docker:
+
+```bash
+docker run -v "$(pwd)/configs/config_standalone.json:/runtime/configs/config.json" -d -p 34904:34904 --name layotto layotto/layotto start
+```
+
+### **Compile locally (not for Windows)**
+You can compile and run Layotto locally.
+
+> [!TIP|label: Not for Windows users]
+> Layotto does not compile on Windows, so Windows users are advised to deploy with Docker.
+
+After downloading the project code, switch to the code directory and build:
+
+```shell
+cd ${project_path}/cmd/layotto
+```
+
+```shell @if.not.exist layotto
+go build
+```
+
+Once the build finishes, the `layotto` binary will be generated in that directory.
+
+Run it:
+
+```shell @background
+./layotto start -c ../../configs/config_standalone.json
+```
+
+
+
+## step 2. Run the client program to invoke the Layotto {{.Name}} API
+
+### **Go**
+Build and run the Go demo:
+
+```shell
+ cd ${project_path}/demo/{{$file_name}}/common/
+ go build -o client
+ ./client -s "demo"
+```
+
+If the following information is printed, the demo is successful:
+
+```bash
+TODO
+```
+
+### **Java**
+
+Download java sdk and examples:
+
+```shell @if.not.exist java-sdk
+git clone https://github.com/layotto/java-sdk
+```
+
+```shell
+cd java-sdk
+```
+
+Build the demo:
+
+```shell @if.not.exist examples-{{$file_name}}/target/examples-{{$file_name}}-jar-with-dependencies.jar
+# build example jar
+mvn -f examples-{{$file_name}}/pom.xml clean package
+```
+
+Run it:
+
+```shell
+java -jar examples-{{$file_name}}/target/examples-{{$file_name}}-jar-with-dependencies.jar
+```
+
+If the following information is printed, the demo is successful:
+
+```bash
+TODO
+```
+
+
+
+## step 3. Stop containers and release resources
+
+### **Destroy the Docker container**
+If you started Layotto with docker, you can destroy the container as follows:
+
+```bash
+docker rm -f layotto
+```
+
+
+
+## Next step
+### What does this client program do?
+The demo client program uses the SDK provided by Layotto to invoke the Layotto {{.Name}} API.
+
+The Go SDK is located in the `sdk` directory, and the Java SDK is at https://github.com/layotto/java-sdk
+
+In addition to using the SDKs, you can also interact with Layotto directly through gRPC in any language you like.
+
+### Details can wait; let's continue to experience other APIs
+Explore other Quickstarts through the navigation bar on the left.
+
+### Reference
+
+
+
+
+
+{{end}}
+{{end}}
+{{end}}
\ No newline at end of file
diff --git a/docs/template/quickstart_zh.tmpl b/docs/template/quickstart_zh.tmpl
new file mode 100644
index 0000000000..c98b064bf7
--- /dev/null
+++ b/docs/template/quickstart_zh.tmpl
@@ -0,0 +1,125 @@
+{{range .Files}} {{$file_name := .Name}} {{if .HasServices}} {{range .Services}}
+# {{.Name}} API demo
+
+本示例演示如何调用 Layotto {{.Name}} API.
+
+## 什么是 {{.Name}} API ?
+
+{{.Description}}
+
+## step 1. 运行 Layotto
+
+### **With Docker**
+您可以用 Docker 启动 Layotto
+
+```bash
+docker run -v "$(pwd)/configs/config_standalone.json:/runtime/configs/config.json" -d -p 34904:34904 --name layotto layotto/layotto start
+```
+
+### **本地编译(不适合 Windows)**
+您可以本地编译、运行 Layotto。
+> [!TIP|label: 不适合 Windows 用户]
+> Layotto 在 Windows 下会编译失败。建议 Windows 用户使用 docker 部署
+
+将项目代码下载到本地后,切换代码目录:
+
+```shell
+cd ${project_path}/cmd/layotto
+```
+
+构建:
+
+```shell @if.not.exist layotto
+go build
+```
+
+完成后目录下会生成 Layotto文件,运行它:
+
+```shell @background
+./layotto start -c ../../configs/config_standalone.json
+```
+
+
+
+## step 2. 运行客户端程序,调用 Layotto {{.Name}} API
+
+### **Go**
+
+构建、运行 go 语言 demo:
+
+```shell
+ cd ${project_path}/demo/{{$file_name}}/common/
+ go build -o client
+ ./client -s "demo"
+```
+
+打印出如下信息则代表调用成功:
+
+```bash
+TODO
+```
+
+### **Java**
+
+下载 java sdk 和示例代码:
+
+```shell @if.not.exist java-sdk
+git clone https://github.com/layotto/java-sdk
+```
+
+```shell
+cd java-sdk
+```
+
+构建 examples:
+
+```shell @if.not.exist examples-{{$file_name}}/target/examples-{{$file_name}}-jar-with-dependencies.jar
+# build example jar
+mvn -f examples-{{$file_name}}/pom.xml clean package
+```
+
+运行:
+
+```shell
+java -jar examples-{{$file_name}}/target/examples-{{$file_name}}-jar-with-dependencies.jar
+```
+
+打印出以下信息说明运行成功:
+
+```bash
+TODO
+```
+
+
+
+## step 3. 销毁容器,释放资源
+
+### **销毁 Docker container**
+如果您是用 Docker 启动的 Layotto,可以按以下方式销毁容器:
+
+```bash
+docker rm -f layotto
+```
+
+
+
+## 下一步
+### 这个客户端程序做了什么?
+示例客户端程序中使用了 Layotto 提供的多语言 sdk,调用Layotto {{.Name}} API。
+
+go sdk位于`sdk`目录下,java sdk 在 https://github.com/layotto/java-sdk
+
+除了使用sdk调用Layotto提供的API,您也可以用任何您喜欢的语言、通过grpc直接和Layotto交互。
+
+### 细节以后再说,继续体验其他API
+通过左侧的导航栏,继续体验别的API吧!
+
+### Reference
+
+
+
+
+
+{{end}}
+{{end}}
+{{end}}
\ No newline at end of file
diff --git a/docs/src/pages/index.md b/docs/zh/README.md
similarity index 85%
rename from docs/src/pages/index.md
rename to docs/zh/README.md
index defd6ba739..50c1aaec9b 100644
--- a/docs/src/pages/index.md
+++ b/docs/zh/README.md
@@ -1,7 +1,8 @@
-# Layotto (L8): To be the next layer of OSI layer 7
-
+
+
Layotto (L8): To be the next layer of OSI layer 7
+
-[![Layotto Env Pipeline 🌊](https://github.com/mosn/layotto/actions/workflows/proto-checker.yml/badge.svg)](https://github.com/mosn/layotto/actions/workflows/proto-checker.yml)
+[![Layotto Env Pipeline 🌊](https://github.com/mosn/layotto/actions/workflows/quickstart-checker.yml/badge.svg)](https://github.com/mosn/layotto/actions/workflows/quickstart-checker.yml)
[![Layotto Dev Pipeline 🌊](https://github.com/mosn/layotto/actions/workflows/layotto-ci.yml/badge.svg)](https://github.com/mosn/layotto/actions/workflows/layotto-ci.yml)
[![GoDoc](https://godoc.org/mosn.io/layotto?status.svg)](https://godoc.org/mosn.io/layotto)
@@ -9,7 +10,7 @@
[![codecov](https://codecov.io/gh/mosn/layotto/branch/main/graph/badge.svg?token=10RxwSV6Sz)](https://codecov.io/gh/mosn/layotto)
[![Average time to resolve an issue](http://isitmaintained.com/badge/resolution/mosn/layotto.svg)](http://isitmaintained.com/project/mosn/layotto "Average time to resolve an issue")
-
+