docs: add deploy document in sealos
@@ -1,4 +1,4 @@
 {
   "label": "Kubernetes",
-  "position": 50
+  "position": 5
 }
@@ -0,0 +1,112 @@
---
sidebar_position: 2
title: Deployment in Sealos
---

`Sealos` is an open-source Kubernetes deployment system that allows us to quickly create an on-demand, pay-as-you-go application cluster.

## First, enter Sealos and open "Application Management"



## Create a new application



### Create dependencies

As an enterprise-level application, `tailchat` requires at minimum `mongodb`, `redis`, and `minio`. Let's create them one by one.

#### MongoDB

For convenience, we run a single fixed instance and bind it to local storage. The image used is `mongo:4`. Note that because we did not set a password for the database, it must not be exposed to the public network. The container exposes port 27017, the default database service port. The configuration is as follows:



Click "Deploy Application" to submit the deployment. After a short wait, you can see that the application has started up.



> Note: the initial memory allocation of 64M is too small for MongoDB, so I increased it to 128M by editing the application. Resource allocation can be changed at any time, which is another convenient feature of Sealos/Kubernetes.
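
Under the hood this is just an ordinary Kubernetes Deployment. For reference, a rough command-line sketch of the same settings might look like the following (illustrative only: the application name `mongo` is an assumption that matches the `MONGO_URL` used later, and the local-storage binding done by the Sealos form is omitted here):

```bash
# Single-replica MongoDB exposing the default port inside the cluster.
# The Service name "mongo" is the hostname the application will use later.
kubectl create deployment mongo --image=mongo:4 --replicas=1 --port=27017
kubectl expose deployment mongo --port=27017
```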

#### Minio

Next, we will create Minio, an open-source object storage service. It can also be created quickly through the Sealos UI. The image used is `minio/minio`. Note that a few adjustments are needed:

- Expose port: 9000
- Change the run command to: `minio server /data`
- Set environment variables:
  - MINIO_ROOT_USER: tailchat
  - MINIO_ROOT_PASSWORD: com.msgbyte.tailchat
- Local storage: `/data`

The final result is as follows:



Click the "Deploy" button, and you can see that the service starts up normally.

#### Redis

Finally, we deploy Redis, which serves as the content cache and message forwarder. The image used is `redis:alpine`, and the exposed port is `6379`. The final result is as follows:



### Create Tailchat itself

At this point, all the dependencies required by Tailchat have been deployed, as shown below:



Now we can deploy Tailchat itself. Tailchat's configuration is a bit more involved, but because Sealos is purely UI-based, it is still not complicated:

- Use the image: `moonrailgun/tailchat`
- Expose port: `11000` (remember to enable external access)
- Configure the environment variables as follows:

```
SERVICEDIR=services,plugins
TRANSPORTER=redis://redis:6379
REDIS_URL=redis://redis:6379
MONGO_URL=mongodb://mongo/tailchat
MINIO_URL=minio:9000
MINIO_USER=tailchat
MINIO_PASS=com.msgbyte.tailchat
```
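
The hostnames `redis`, `mongo`, and `minio` in these variables must match the names given to the dependency applications created above, since Kubernetes resolves them as in-cluster Service names. As a rough command-line sketch of the same UI settings (illustrative only; the Sealos form does all of this for you, and the application name `tailchat` is an assumption):

```bash
kubectl create deployment tailchat --image=moonrailgun/tailchat --port=11000
# Mirror the environment variables from the block above.
kubectl set env deployment/tailchat \
  SERVICEDIR=services,plugins \
  TRANSPORTER=redis://redis:6379 \
  REDIS_URL=redis://redis:6379 \
  MONGO_URL=mongodb://mongo/tailchat \
  MINIO_URL=minio:9000 \
  MINIO_USER=tailchat \
  MINIO_PASS=com.msgbyte.tailchat
kubectl expose deployment tailchat --port=11000
```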

The final effect is as follows:



After a short wait, you can see that the Tailchat service has started up.



## Preview service

First, we can check the availability of the Tailchat service by appending `/health` to the external address provided by the service, for example `https://<xxxxxxxxxx>.cloud.sealos.io/health`. Once it has started, the Tailchat service returns a JSON health report like the one shown below.
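
You can open this endpoint in a browser, or query it from a terminal; a minimal sketch (substitute your own external address for the placeholder):

```bash
# A healthy instance responds with HTTP 200 and a JSON body
# describing the node and its loaded microservices.
curl -s https://<xxxxxxxxxx>.cloud.sealos.io/health
```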



This JSON contains the image version in use, the node name, system usage, and the loading status of each microservice. Here we can see that the common services, such as `user` / `chat.message`, as well as the plugin-prefixed services such as `plugin.registry`, have all started normally, which means the server is running correctly. Now we can open the external address directly; after a short load, the page opens and automatically redirects to the login page.



Register an account, and you can enter the main interface of Tailchat, as shown in the following figure:



At this point, our service has successfully landed on Sealos.

## Scaling service

Of course, as a system with a distributed architecture, Tailchat naturally supports horizontal scaling. In Sealos, scaling is also very simple: just modify the number of instances through the change operation, as shown below:





At this point, when we access `https://<xxxxxxxxxx>.cloud.sealos.io/health`, we can see that requests are served by different nodes.

