Transcribes audio messages using OpenAI Whisper.
This bot is based on Simple-Matrix-Bot-Lib and whisper.cpp. It downloads audio messages from your homeserver, transcribes them locally and responds with the result as a text message.
The bot is available as an image on Docker Hub. You can deploy it using `docker-compose`:
```yaml
version: "3.7"
services:
  matrix-stt-bot:
    image: ftcaplan/matrix-stt-bot
    restart: on-failure
    volumes:
      - ./data/:/data/
    environment:
      - "HOMESERVER=https://matrix.example.com"
      - "USERNAME=@stt-bot:example.com"
      - "PASSWORD=<password>"
      - "ASR_MODEL=tiny"
      - "ASR_LANGUAGE=en"
```
ASR_MODEL: You can choose a model by setting the `ASR_MODEL` environment variable. Available models include `tiny.en`, `tiny`, `base`, `small`, `medium`, and `large-v3`; the full list is available on Hugging Face. The default is `ASR_MODEL=tiny`. The bot downloads the model file on the first run to keep the image size small. You can also load your own ggml models by providing them at the following path: `/data/models/ggml-$ASR_MODEL.bin`.
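For example, to run the bot with a model file you provide yourself, place it in the mounted data directory and set `ASR_MODEL` to the matching name. A sketch based on the compose file above; the `medium` model name is just illustrative:

```yaml
services:
  matrix-stt-bot:
    image: ftcaplan/matrix-stt-bot
    restart: on-failure
    volumes:
      # place your model at ./data/models/ggml-medium.bin on the host
      - ./data/:/data/
    environment:
      - "HOMESERVER=https://matrix.example.com"
      - "USERNAME=@stt-bot:example.com"
      - "PASSWORD=<password>"
      # must match the file name: /data/models/ggml-$ASR_MODEL.bin
      - "ASR_MODEL=medium"
```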
Authentication: You can authenticate using tokens instead of a password: use `LOGIN_TOKEN=<login-token>` or `ACCESS_TOKEN=<access-token>` instead of `PASSWORD=<password>`.
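For example, to log in with an access token, swap only the credential variable in the `environment` section. A sketch; every other setting stays as in the example above:

```yaml
services:
  matrix-stt-bot:
    image: ftcaplan/matrix-stt-bot
    restart: on-failure
    volumes:
      - ./data/:/data/
    environment:
      - "HOMESERVER=https://matrix.example.com"
      - "USERNAME=@stt-bot:example.com"
      # ACCESS_TOKEN (or LOGIN_TOKEN) replaces PASSWORD
      - "ACCESS_TOKEN=<access-token>"
      - "ASR_MODEL=tiny"
```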
Allowlist: If the `ALLOWLIST` environment variable is defined, the bot will parse it and use it as the allowlist, e.g. `ALLOWLIST=^@user1:example.com$,^@user2:example.com$`. If `ALLOWLIST` is not defined, the bot will only allow commands from users of the bot's homeserver.
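A sketch of setting the allowlist in the compose file (the entries look like regular expressions matched against Matrix user IDs; note that a literal `$` has to be escaped as `$$` in a compose file to avoid variable interpolation):

```yaml
services:
  matrix-stt-bot:
    image: ftcaplan/matrix-stt-bot
    environment:
      - "HOMESERVER=https://matrix.example.com"
      - "USERNAME=@stt-bot:example.com"
      - "PASSWORD=<password>"
      # only these two users may use the bot; $$ is a literal $ in compose files
      - "ALLOWLIST=^@user1:example.com$$,^@user2:example.com$$"
```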