Install failed #2

Open
bi119aTe5hXk opened this issue Jun 20, 2023 · 0 comments
Comments

bi119aTe5hXk commented Jun 20, 2023

  1. '8091:${QBT_HTTP_PORT:-8091}' is commented out in docker-compose.yml. qBittorrent uses 8080 as its default WebUI port, while mira uses 8091 to reach qBittorrent, so the install throws "ConnectionRefusedError: [Errno 111] Connection refused" during "Apply qBittorrent username and password..." (a possible compose fix is sketched after the traceback below).
Apply qBittorrent username and password...
Traceback (most recent call last):
  File "/root/mira-docker/venv/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "/root/mira-docker/venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 95, in create_connection
    raise err
  File "/root/mira-docker/venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/mira-docker/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/root/mira-docker/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 398, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/root/mira-docker/venv/lib/python3.10/site-packages/urllib3/connection.py", line 239, in request
    super(HTTPConnection, self).request(method, url, body=body, headers=headers)
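
A sketch of a possible fix, assuming the linuxserver/qbittorrent image and the QBT_HTTP_PORT variable referenced above (the service name and WEBUI_PORT option are assumptions, not the project's confirmed config): uncomment the port mapping and make qBittorrent's WebUI listen on the port mira dials.

```yaml
# docker-compose.yml (hypothetical excerpt)
services:
  qbittorrent:
    environment:
      # WEBUI_PORT is the linuxserver/qbittorrent option; make qbt listen on 8091
      # instead of its 8080 default so mira's connection is not refused
      - WEBUI_PORT=${QBT_HTTP_PORT:-8091}
    ports:
      # the mapping that is currently commented out
      - '8091:${QBT_HTTP_PORT:-8091}'
```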
  2. In init_helper.py the default video-manager and download-manager database names are mira_video and mira_download, but video-manager-init and download-manager-init look for mira_video_manager and mira_downloader after "Attaching to albireo-init, download-manager-init, video-manager-init" and fail with "database mira_downloader/mira_video_manager does not exist" (a manual workaround is sketched after the log below).
WARN[0000] Found orphan containers ([postgres qbittorrent]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[+] Running 3/0
 ✔️ Container albireo-init           Created                                                                                                                                              0.0s
 ✔️ Container video-manager-init     Created                                                                                                                                              0.0s
 ✔️ Container download-manager-init  Created                                                                                                                                              0.0s
Attaching to albireo-init, download-manager-init, video-manager-init
video-manager-init     |
video-manager-init     | > mira-video-manager@1.3.1 migrate:sync:silent
video-manager-init     | > ts-node src/migrate.ts --sync --silent
video-manager-init     |
download-manager-init  |
download-manager-init  | > mira-download-manager@1.2.1 migrate:sync:silent
download-manager-init  | > ts-node src/migrate.ts --sync --silent
download-manager-init  |
albireo-init           | Traceback (most recent call last):
albireo-init           |   File "/usr/app/tools.py", line 92, in <module>
albireo-init           |     Base.metadata.create_all(SessionManager.engine)
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/schema.py", line 4741, in create_all
albireo-init           |     ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 3078, in _run_ddl_visitor
albireo-init           |     with self.begin() as conn:
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2994, in begin
albireo-init           |     conn = self.connect(close_with_result=close_with_result)
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 3166, in connect
albireo-init           |     return self._connection_cls(self, close_with_result=close_with_result)
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 96, in __init__
albireo-init           |     else engine.raw_connection()
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 3245, in raw_connection
albireo-init           |     return self._wrap_pool_connect(self.pool.connect, _connection)
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 3216, in _wrap_pool_connect
albireo-init           |     e, dialect, self
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2070, in _handle_dbapi_exception_noconnection
albireo-init           |     sqlalchemy_exception, with_traceback=exc_info[2], from_=e
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 3212, in _wrap_pool_connect
albireo-init           |     return fn()
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool/base.py", line 307, in connect
albireo-init           |     return _ConnectionFairy._checkout(self)
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool/base.py", line 767, in _checkout
albireo-init           |     fairy = _ConnectionRecord.checkout(pool)
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool/base.py", line 425, in checkout
albireo-init           |     rec = pool._do_get()
albireo-init           |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool/impl.py", line 146, in _do_get
albireo-init           |     self._dec_overflow()
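
A possible workaround is simply to create the database names the init containers expect (the postgres service name and user below are placeholders; use whatever docker-compose.yml configures). Renaming the defaults in init_helper.py to match should work as well.

```bash
# create the database names the init containers actually look for
# (replace <postgres user> with the POSTGRES_USER from docker-compose.yml)
docker compose exec postgres psql -U <postgres user> -c 'CREATE DATABASE mira_video_manager;'
docker compose exec postgres psql -U <postgres user> -c 'CREATE DATABASE mira_downloader;'
```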
  3. video-manager-init throws the error "function uuid_generate_v4() does not exist" (see the sketch after the log below).
{
  "err" : {
    "stack" : "DriverException: set names 'utf8';
    set session_replication_role = 'replica';
    create table "job" ("id" uuid not null default uuid_generate_v4(), "jobMessageId" varchar(255) not null, "jobMessage" jsonb not null, "actionMap" jsonb not null, "status" text check ("status" in ('Queueing', 'Running', 'Finished', 'UnrecoverableError', 'Pause', 'Canceled')) not null default 'Queueing', "jobExecutorId" varchar(255) null, "createTime" timestamp not null, "startTime" timestamp null, "finishedTime" timestamp null, "cleaned" boolean not null default false, constraint "job_pkey" primary key ("id"));
    create table "session" ("id" uuid not null default uuid_generate_v4(), "expire" timestamp not null, constraint "session_pkey" primary key ("id"));
    create table "vertex" ("id" uuid not null default uuid_generate_v4(), "jobId" varchar(255) not null, "status" text check ("status" in ('Pending', 'Running', 'Stopped', 'Finished', 'Error')) not null, "upstreamVertexIds" jsonb not null, "downstreamVertexIds" jsonb not null, "outputPath" text null, "action" jsonb not null, "actionType" text check ("actionType" in ('convert', 'copy', 'fragment', 'merge', 'extract')) not null, "startTime" timestamp null, "finishedTime" timestamp null, "error" jsonb null, constraint "vertex_pkey" primary key ("id"));
    create table "video_process_rule" ("id" uuid not null default uuid_generate_v4(), "name" text null, "bangumiId" varchar(255) null, "videoFileId" varchar(255) null, "condition" text null, "actions" jsonb not null, "priority" integer not null, constraint "video_process_rule_pkey" primary key ("id"));
    set session_replication_role = 'origin'; - function uuid_generate_v4() does not exist
        at PostgreSqlExceptionConverter.convertException (/app/node_modules/@mikro-orm/core/platforms/ExceptionConverter.js:8:16)
        at PostgreSqlExceptionConverter.convertException (/app/node_modules/@mikro-orm/postgresql/PostgreSqlExceptionConverter.js:42:22)
        at PostgreSqlDriver.convertException (/app/node_modules/@mikro-orm/core/drivers/DatabaseDriver.js:192:54)
        at /app/node_modules/@mikro-orm/core/drivers/DatabaseDriver.js:196:24
        at processTicksAndRejections (node:internal/process/task_queues:96:5)
        at async SchemaGenerator.execute (/app/node_modules/@mikro-orm/knex/schema/SchemaGenerator.js:365:25)
        at async SchemaGenerator.createSchema (/app/node_modules/@mikro-orm/knex/schema/SchemaGenerator.js:28:9)
        at async DatabaseServiceImpl.syncSchema (/app/node_modules/@irohalab/src/services/BasicDatabaseServiceImpl.ts:87:9)
        at async DatabaseServiceImpl.initSchema (/app/src/services/DatabaseServiceImpl.ts:63:13)
        at async /app/src/migrate.ts:91:30

        previous error: set names 'utf8';
    set session_replication_role = 'replica';
    create table "job" ("id" uuid not null default uuid_generate_v4(), "jobMessageId" varchar(255) not null, "jobMessage" jsonb not null, "actionMap" jsonb not null, "status" text check ("status" in ('Queueing', 'Running', 'Finished', 'UnrecoverableError', 'Pause', 'Canceled')) not null default 'Queueing', "jobExecutorId" varchar(255) null, "createTime" timestamp not null, "startTime" timestamp null, "finishedTime" timestamp null, "cleaned" boolean not null default false, constraint "job_pkey" primary key ("id"));
    create table "session" ("id" uuid not null default uuid_generate_v4(), "expire" timestamp not null, constraint "session_pkey" primary key ("id"));
    create table "vertex" ("id" uuid not null default uuid_generate_v4(), "jobId" varchar(255) not null, "status" text check ("status" in ('Pending', 'Running', 'Stopped', 'Finished', 'Error')) not null, "upstreamVertexIds" jsonb not null, "downstreamVertexIds" jsonb not null, "outputPath" text null, "action" jsonb not null, "actionType" text check ("actionType" in ('convert', 'copy', 'fragment', 'merge', 'extract')) not null, "startTime" timestamp null, "finishedTime" timestamp null, "error" jsonb null, constraint "vertex_pkey" primary key ("id"));
    create table "video_process_rule" ("id" uuid not null default uuid_generate_v4(), "name" text null, "bangumiId" varchar(255) null, "videoFileId" varchar(255) null, "condition" text null, "actions" jsonb not null, "priority" integer not null, constraint "video_process_rule_pkey" primary key ("id"));
    set session_replication_role = 'origin'; - function uuid_generate_v4() does not exist
        at Parser.parseErrorMessage (/app/node_modules/pg-protocol/src/parser.ts:357:11)
        at Parser.handlePacket (/app/node_modules/pg-protocol/src/parser.ts:186:21)
        at Parser.parse (/app/node_modules/pg-protocol/src/parser.ts:101:30)
        at Socket.<anonymous> (/app/node_modules/pg-protocol/src/index.ts:7:48)
        at Socket.emit (node:events:527:28)
        at Socket.emit (node:domain:475:12)
        at addChunk (node:internal/streams/readable:324:12)
        at readableAddChunk (node:internal/streams/readable:297:9)
        at Socket.Readable.push (node:internal/streams/readable:234:10)
        at TCP.onStreamRead (node:internal/stream_base_commons:190:23)",
    "message" : "set names 'utf8';
    set session_replication_role = 'replica';
    create table "job" ("id" uuid not null default uuid_generate_v4(), "jobMessageId" varchar(255) not null, "jobMessage" jsonb not null, "actionMap" jsonb not null, "status" text check ("status" in ('Queueing', 'Running', 'Finished', 'UnrecoverableError', 'Pause', 'Canceled')) not null default 'Queueing', "jobExecutorId" varchar(255) null, "createTime" timestamp not null, "startTime" timestamp null, "finishedTime" timestamp null, "cleaned" boolean not null default false, constraint "job_pkey" primary key ("id"));
    create table "session" ("id" uuid not null default uuid_generate_v4(), "expire" timestamp not null, constraint "session_pkey" primary key ("id"));
    create table "vertex" ("id" uuid not null default uuid_generate_v4(), "jobId" varchar(255) not null, "status" text check ("status" in ('Pending', 'Running', 'Stopped', 'Finished', 'Error')) not null, "upstreamVertexIds" jsonb not null, "downstreamVertexIds" jsonb not null, "outputPath" text null, "action" jsonb not null, "actionType" text check ("actionType" in ('convert', 'copy', 'fragment', 'merge', 'extract')) not null, "startTime" timestamp null, "finishedTime" timestamp null, "error" jsonb null, constraint "vertex_pkey" primary key ("id"));
    create table "video_process_rule" ("id" uuid not null default uuid_generate_v4(), "name" text null, "bangumiId" varchar(255) null, "videoFileId" varchar(255) null, "condition" text null, "actions" jsonb not null, "priority" integer not null, constraint "video_process_rule_pkey" primary key ("id"));
    set session_replication_role = 'origin'; - function uuid_generate_v4() does not exist",
    "position" : "108",
    "file" : "parse_func.c",
    "type" : "DriverException",
    "length" : 212,
    "code" : "42883",
    "severity" : "ERROR",
    "hint" : "No function matches the given name and argument types. You might need to add explicit type casts.",
    "routine" : "ParseFuncOrColumn",
    "line" : "620",
    "name" : "DriverException"
  },
  "level" : 50,
  "msg" : "set names 'utf8';
  set session_replication_role = 'replica';
  create table "job" ("id" uuid not null default uuid_generate_v4(), "jobMessageId" varchar(255) not null, "jobMessage" jsonb not null, "actionMap" jsonb not null, "status" text check ("status" in ('Queueing', 'Running', 'Finished', 'UnrecoverableError', 'Pause', 'Canceled')) not null default 'Queueing', "jobExecutorId" varchar(255) null, "createTime" timestamp not null, "startTime" timestamp null, "finishedTime" timestamp null, "cleaned" boolean not null default false, constraint "job_pkey" primary key ("id"));
  create table "session" ("id" uuid not null default uuid_generate_v4(), "expire" timestamp not null, constraint "session_pkey" primary key ("id"));
  create table "vertex" ("id" uuid not null default uuid_generate_v4(), "jobId" varchar(255) not null, "status" text check ("status" in ('Pending', 'Running', 'Stopped', 'Finished', 'Error')) not null, "upstreamVertexIds" jsonb not null, "downstreamVertexIds" jsonb not null, "outputPath" text null, "action" jsonb not null, "actionType" text check ("actionType" in ('convert', 'copy', 'fragment', 'merge', 'extract')) not null, "startTime" timestamp null, "finishedTime" timestamp null, "error" jsonb null, constraint "vertex_pkey" primary key ("id"));
  create table "video_process_rule" ("id" uuid not null default uuid_generate_v4(), "name" text null, "bangumiId" varchar(255) null, "videoFileId" varchar(255) null, "condition" text null, "actions" jsonb not null, "priority" integer not null, constraint "video_process_rule_pkey" primary key ("id"));
  set session_replication_role = 'origin'; - function uuid_generate_v4() does not exist",
  "time" : "2023-06-20T12:39:03.652Z"
}
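
uuid_generate_v4() comes from PostgreSQL's uuid-ossp extension, which apparently is not installed in the target database before the schema sync runs. A hedged one-liner to install it (service, user, and database names are placeholders):

```bash
# install the extension that provides uuid_generate_v4() in the video manager database
docker compose exec postgres psql -U <postgres user> -d mira_video_manager \
  -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'
```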
  4. When executing docker-compose --profile prod up, it returns: Error response from daemon: unknown log opt 'max_file' for json-file log driver
[+] Building 0.0s (0/0)                                                                                                         
[+] Running 1/0
 ⠋ Container mira-video-manager_api                       Creating                                                         0.0s 
 ✔ Container qbittorrent                                  Running                                                          0.0s 
 ⠋ Container mira-download-manager_server                 Creating                                                         0.0s 
 ⠋ Container albireo-scheduler                            Creating                                                         0.0s 
 ⠋ Container albireo-server                               Creating                                                         0.0s 
 ⠋ Container mira-video-manager_job-executor              Creating                                                         0.0s 
 ⠋ Container mira-video-manager_job-executor-file-server  Creat...                                                         0.0s 
 ⠋ Container mira-video-manager_job-scheduler             Creating                                                         0.0s 
 ⠋ Container mira-download-manager_core                   Creating                                                         0.0s 
Error response from daemon: unknown log opt 'max_file' for json-file log driver
  5. When executing docker-compose --profile prod up, Docker gets stuck creating mira-video-manager_api forever and only qBittorrent runs. Changing the x-logging driver from json-file to journald fixes it (a possible logging block for this and the previous item is sketched below).
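
For items 4 and 5, a sketch of what the shared x-logging block could look like (the anchor name is an assumption): the json-file driver spells its options max-size / max-file with hyphens, and journald is an alternative that takes no size options.

```yaml
# docker-compose.yml (hypothetical excerpt)
x-logging: &default-logging
  driver: json-file
  options:
    max-size: '10m'   # underscore spellings like 'max_file' are rejected by the daemon
    max-file: '3'
# or, if json-file still hangs container creation, switch the driver entirely:
# x-logging: &default-logging
#   driver: journald
```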

  6. qBittorrent API connection error: /api/v2/auth/login does not seem to accept the GET method (it returns 405). Using POST should work (see the curl check after the log below).

mira-download-manager_server                 | {"level":40,"time":"2023-06-21T06:52:23.296Z","err":{"message":"Request failed with status code 405","stack":"Error: Request failed with status code 405\n    at createError (/app/node_modules/axios/lib/core/createError.js:16:15)\n    at settle (/app/node_modules/axios/lib/core/settle.js:17:12)\n    at IncomingMessage.handleStreamEnd (/app/node_modules/axios/lib/adapters/http.js:269:11)\n    at IncomingMessage.emit (node:events:539:35)\n    at IncomingMessage.emit (node:domain:475:12)\n    at endReadableNT (node:internal/streams/readable:1345:12)\n    at processTicksAndRejections (node:internal/process/task_queues:83:21)","config":{"url":"http://qbittorrent:18091/api/v2/auth/login","method":"get","headers":{"Accept":"application/json, text/plain, */*","User-Agent":"axios/0.21.2"},"params":{"username":"admin","password":"<qbt password>"},"transformRequest":[null],"transformResponse":[null],"timeout":0,"xsrfCookieName":"XSRF-TOKEN","xsrfHeaderName":"X-XSRF-TOKEN","maxContentLength":-1,"maxBodyLength":-1,"transitional":{"silentJSONParsing":true,"forcedJSONParsing":true,"clarifyTimeoutError":false}}},"msg":"Request failed with status code 405"}
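
qBittorrent's WebUI API logs in via POST with form-encoded credentials, so a quick check from a container on the same Docker network (host, port, and password taken from the log above) would be:

```bash
# POST form-encoded credentials to the login endpoint; a 200 response with an
# SID cookie means the method is accepted, while GET yields the 405 seen above
curl -i -X POST 'http://qbittorrent:18091/api/v2/auth/login' \
  --data-urlencode 'username=admin' \
  --data-urlencode 'password=<qbt password>'
```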
  7. Unknown error:
mira-video-manager_job-executor              | {"level":50,"time":"2023-06-21T06:52:24.633Z","err":{"type":"Error","message":"Publication: video_manager_general is missing a vhost","stack":"Error: Publication: video_manager_general is missing a vhost\n    at validatePublication (/app/node_modules/rascal/lib/config/validate.js:105:35)\n    at /app/node_modules/lodash/lodash.js:4967:15\n    at baseForOwn (/app/node_modules/lodash/lodash.js:3032:24)\n    at /app/node_modules/lodash/lodash.js:4936:18\n    at Function.forEach (/app/node_modules/lodash/lodash.js:9410:14)\n    at validatePublications (/app/node_modules/rascal/lib/config/validate.js:100:7)\n    at /app/node_modules/rascal/lib/config/validate.js:8:5\n    at apply (/app/node_modules/lodash/lodash.js:489:27)\n    at wrapper (/app/node_modules/lodash/lodash.js:5101:16)\n    at /app/node_modules/async/dist/async.js:2018:20\n    at /app/node_modules/async/dist/async.js:1959:13\n    at replenish (/app/node_modules/async/dist/async.js:446:21)\n    at /app/node_modules/async/dist/async.js:451:13\n    at eachOfLimit$1 (/app/node_modules/async/dist/async.js:477:34)\n    at awaitable (/app/node_modules/async/dist/async.js:211:32)\n    at eachOfSeries (/app/node_modules/async/dist/async.js:813:16)"},"msg":"Publication: video_manager_general is missing a vhost"}
mira-video-manager_job-scheduler             | {"level":50,"time":"2023-06-21T06:52:24.693Z","err":{"type":"Error","message":"Publication: video_job is missing a vhost","stack":"Error: Publication: video_job is missing a vhost\n    at validatePublication (/app/node_modules/rascal/lib/config/validate.js:105:35)\n    at /app/node_modules/lodash/lodash.js:4967:15\n    at baseForOwn (/app/node_modules/lodash/lodash.js:3032:24)\n    at /app/node_modules/lodash/lodash.js:4936:18\n    at Function.forEach (/app/node_modules/lodash/lodash.js:9410:14)\n    at validatePublications (/app/node_modules/rascal/lib/config/validate.js:100:7)\n    at /app/node_modules/rascal/lib/config/validate.js:8:5\n    at apply (/app/node_modules/lodash/lodash.js:489:27)\n    at wrapper (/app/node_modules/lodash/lodash.js:5101:16)\n    at /app/node_modules/async/dist/async.js:2018:20\n    at /app/node_modules/async/dist/async.js:1959:13\n    at replenish (/app/node_modules/async/dist/async.js:446:21)\n    at /app/node_modules/async/dist/async.js:451:13\n    at eachOfLimit$1 (/app/node_modules/async/dist/async.js:477:34)\n    at awaitable (/app/node_modules/async/dist/async.js:211:32)\n    at eachOfSeries (/app/node_modules/async/dist/async.js:813:16)"},"msg":"Publication: video_job is missing a vhost"}