feat: add mediaserver addon (Plex, *arr, Transmission, Hysteria2, Samba)

- Plex, Sonarr, Radarr, Lidarr, Bazarr, Prowlarr, Overseerr, Transmission
- Hysteria2 as a sidecar in the Prowlarr pod (SOCKS5 on 127.0.0.1:1080)
- Init container automatically writes the proxy settings into Prowlarr's config.xml
- One shared PVC (RWX NFS) for the whole stack, mounted via subPath
- Samba LoadBalancer for LAN access to media files
- bjw-s/app-template (auto-detect latest version)
- make addon-mediaserver target, Vault secrets, playbooks/addons.yml, addons.yml
Author: Sergey Antropoff
Date:   2026-04-26 00:36:44 +03:00
Parent: eccc1c2a01
Commit: 5d7b32023e
20 changed files with 1870 additions and 1 deletions


@@ -0,0 +1,286 @@
# MediaServer
A complete self-hosted media server on K3S: Plex, the *arr stack, Transmission, Prowlarr with a Hysteria2 SOCKS5 proxy, Overseerr, and Samba for LAN access.
## Components
| Service | Port | Description |
|---|---|---|
| **Plex** | 32400 | Media server for streaming movies, TV shows, and music |
| **Transmission** | 9091 | BitTorrent client |
| **Sonarr** | 8989 | TV show manager |
| **Radarr** | 7878 | Movie manager |
| **Lidarr** | 8686 | Music manager |
| **Bazarr** | 6767 | Subtitle manager |
| **Prowlarr** | 9696 | Indexer aggregator (with a Hysteria2 sidecar) |
| **Overseerr** | 5055 | Content request user interface |
| **Samba** | 445 | LAN SMB share for access from Windows/Mac/Linux |
## Storage architecture
All services use **a single shared PVC** `mediaserver-data` (RWX over NFS). Directory layout:
```
PVC: mediaserver-data (200Gi)
├── config/
│   ├── plex/
│   ├── transmission/
│   ├── sonarr/
│   ├── radarr/
│   ├── lidarr/
│   ├── bazarr/
│   ├── prowlarr/
│   └── overseerr/
└── data/
    ├── downloads/
    │   ├── complete/   ← Transmission drops finished downloads here
    │   └── incomplete/ ← Transmission downloads in progress
    ├── movies/ ← Radarr moves files here
    ├── series/ ← Sonarr moves files here
    └── music/  ← Lidarr moves files here
```
Inside every container:
- `/config` → `subPath: config/<service>` (isolated per-service configs)
- `/data` → `subPath: data` (directory shared by all services)
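In raw Kubernetes terms, the subPath scheme boils down to two mounts of the same claim in each container. A sketch of the idea (a hypothetical pod fragment using Sonarr; the real manifests are rendered by the bjw-s/app-template chart):

```yaml
# Hypothetical pod spec fragment illustrating the subPath mounts.
volumes:
  - name: media
    persistentVolumeClaim:
      claimName: mediaserver-data
containers:
  - name: sonarr
    volumeMounts:
      - name: media
        mountPath: /config          # isolated: only this service's config
        subPath: config/sonarr
      - name: media
        mountPath: /data            # shared: the same files as every other service
        subPath: data
```

Because every service sees the same `/data`, Radarr and Sonarr can move completed downloads into the library with instant renames instead of cross-volume copies.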
## Hysteria2 proxy (Prowlarr)
Hysteria2 runs as a **sidecar container in the Prowlarr pod**. This is NOT a cluster-wide VPN; only Prowlarr routes traffic through the proxy.
```
Prowlarr Pod:
├── prowlarr container  ← reaches trackers via SOCKS5 127.0.0.1:1080
└── hysteria2 container ← SOCKS5 server on 127.0.0.1:1080
```
An init container automatically writes the proxy settings into Prowlarr's `config.xml`:
```xml
<ProxyEnabled>True</ProxyEnabled>
<ProxyType>Socks5</ProxyType>
<ProxyHostname>127.0.0.1</ProxyHostname>
<ProxyPort>1080</ProxyPort>
<ProxyBypassLocalAddresses>True</ProxyBypassLocalAddresses>
```
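The seeding logic is the same as the script shipped in the role's Prowlarr template: write the proxy block only when no `config.xml` exists yet. A runnable sketch (it defaults to a local `./prowlarr-config` directory instead of the pod's `/config`):

```shell
#!/bin/sh
# Sketch of the Prowlarr init-container logic. In the pod CONFIG_DIR is /config;
# here it defaults to a local directory so the script can run anywhere.
CONFIG_DIR="${CONFIG_DIR:-./prowlarr-config}"
CONFIG_FILE="$CONFIG_DIR/config.xml"
mkdir -p "$CONFIG_DIR"
if [ ! -f "$CONFIG_FILE" ]; then
  # First start: seed config.xml with the SOCKS5 proxy settings
  cat > "$CONFIG_FILE" <<'XMLEOF'
<Config>
  <ProxyEnabled>True</ProxyEnabled>
  <ProxyType>Socks5</ProxyType>
  <ProxyHostname>127.0.0.1</ProxyHostname>
  <ProxyPort>1080</ProxyPort>
  <ProxyBypassLocalAddresses>True</ProxyBypassLocalAddresses>
</Config>
XMLEOF
  echo "seeded $CONFIG_FILE"
else
  # Existing config: leave the user's settings untouched
  echo "config exists, skipping"
fi
```

Because the script is a no-op when `config.xml` already exists, proxy settings changed later in the Prowlarr UI survive pod restarts.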
## Installation
### 1. Vault secrets
Add to `group_vars/all/vault.yml`:
```yaml
# Plex claim token: get one at https://plex.tv/claim (valid for 4 minutes)
vault_plex_claim_token: "claim-xxxxxxxxxxxxxxxxx"
# Hysteria2 server (skip if you do not need the proxy)
vault_hysteria2_server: "your-server.com:443"
vault_hysteria2_auth: "your-password"
# Samba password
vault_samba_password: "my-samba-password"
# Transmission password
vault_transmission_password: "my-torrent-password"
```
### 2. Enable the addon
```yaml
# group_vars/all/addons.yml
addon_mediaserver: true
```
### 3. Deploy
```bash
make addon-mediaserver
```
Without the Hysteria2 proxy:
```bash
make addon-mediaserver ARGS="-e mediaserver_hysteria2_enabled=false"
```
## Configuration
### Main parameters (`group_vars/all/main.yml` or ARGS)
```yaml
mediaserver_namespace: "mediaserver"
mediaserver_data_size: "200Gi"    # size of the shared PVC
mediaserver_storage_class: ""     # empty = default StorageClass (nfs-master01)
mediaserver_timezone: "Europe/Moscow"
mediaserver_puid: "1000"
mediaserver_pgid: "1000"
```
### Disabling individual components
```bash
make addon-mediaserver ARGS="-e mediaserver_lidarr_enabled=false -e mediaserver_watchtower_enabled=false"
```
### Ingress for external access
```yaml
mediaserver_plex_ingress_enabled: true
mediaserver_plex_ingress_host: "plex.example.com"
mediaserver_sonarr_ingress_enabled: true
mediaserver_sonarr_ingress_host: "sonarr.example.com"
mediaserver_ingress_annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
```
### Samba with a static IP
```yaml
mediaserver_samba_static_ip: "192.168.1.110"  # kube-vip will assign this IP
```
After deployment, connect from the LAN:
```
Windows: \\192.168.1.110\media
Mac:     smb://192.168.1.110/media
Linux:   mount -t cifs //192.168.1.110/media /mnt/media -o user=media
```
## Service integration
After installation, configure each UI once:
### Prowlarr → *arr
In the Prowlarr UI add your indexers, then add each *arr app under **Settings → Apps**:
| App | URL | API Key |
|---|---|---|
| Sonarr | `http://sonarr:8989` | from Settings → General |
| Radarr | `http://radarr:7878` | from Settings → General |
| Lidarr | `http://lidarr:8686` | from Settings → General |
### Sonarr/Radarr/Lidarr → Transmission
In each *arr app: **Settings → Download Clients → Add → Transmission**:
```
Host: transmission
Port: 9091
Username: admin
Password: (vault_transmission_password)
```
### Sonarr/Radarr/Lidarr → Paths
Under **Settings → Media Management → Root Folders**:
| Service | Folder |
|---|---|
| Sonarr | `/data/series` |
| Radarr | `/data/movies` |
| Lidarr | `/data/music` |
In Transmission, set the completed-downloads folder to `/data/downloads/complete`.
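For reference, those folders map onto Transmission's own `settings.json` keys (a fragment only; the linuxserver image keeps this file under `/config`):

```json
{
  "download-dir": "/data/downloads/complete",
  "incomplete-dir": "/data/downloads/incomplete",
  "incomplete-dir-enabled": true
}
```

Keeping downloads and libraries on the same `/data` mount is what lets the *arr apps import by rename rather than copy.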
### Overseerr → Plex + *arr
On first login to the Overseerr UI:
1. Plex URL: `http://plex:32400`
2. Add Radarr: `http://radarr:7878`
3. Add Sonarr: `http://sonarr:8989`
## Port-forward for initial setup
```bash
export KUBECONFIG=$(pwd)/kubeconfig
# Plex
kubectl -n mediaserver port-forward svc/plex 32400:32400
# Sonarr
kubectl -n mediaserver port-forward svc/sonarr 8989:8989
# Radarr
kubectl -n mediaserver port-forward svc/radarr 7878:7878
# Transmission
kubectl -n mediaserver port-forward svc/transmission 9091:9091
# Prowlarr
kubectl -n mediaserver port-forward svc/prowlarr 9696:9696
# Overseerr
kubectl -n mediaserver port-forward svc/overseerr 5055:5055
```
## Verifying Hysteria2
```bash
# Hysteria2 sidecar logs
kubectl -n mediaserver logs -l app.kubernetes.io/name=prowlarr -c hysteria2 -f
# Check the Prowlarr proxy config
kubectl -n mediaserver exec -it deployment/prowlarr -c prowlarr -- cat /config/config.xml | grep -i proxy
# Test SOCKS5 with curl from inside the pod
kubectl -n mediaserver exec -it deployment/prowlarr -c prowlarr -- \
  curl --socks5 127.0.0.1:1080 https://ifconfig.me
```
## Troubleshooting
```bash
# All pods
kubectl -n mediaserver get pods -o wide
# PVC status
kubectl -n mediaserver get pvc
# Samba IP
kubectl -n mediaserver get svc samba
# Logs of a specific service
kubectl -n mediaserver logs -l app.kubernetes.io/name=sonarr -f
# Restart a service
kubectl -n mediaserver rollout restart deployment/radarr
```
## Updating
```bash
# Update all components (pulls latest images)
make addon-mediaserver
# Re-run only the Helm deploy tasks
make addon-mediaserver ARGS="--tags helm"
```
For automatic image updates, enable Watchtower:
```yaml
mediaserver_watchtower_enabled: true
mediaserver_watchtower_schedule: "0 0 4 * * *"  # 04:00 every day
```
## Uninstalling
```bash
export KUBECONFIG=$(pwd)/kubeconfig
# Remove all Helm releases
for svc in plex transmission sonarr radarr lidarr bazarr prowlarr overseerr watchtower; do
  helm -n mediaserver uninstall $svc 2>/dev/null || true
done
# Remove the Samba resources and the Hysteria2 secret
kubectl -n mediaserver delete deploy samba
kubectl -n mediaserver delete svc samba
kubectl -n mediaserver delete secret hysteria2-config
# Delete the namespace (this also deletes the PVC; data survives only if the PV reclaimPolicy is Retain)
kubectl delete namespace mediaserver
```


@@ -0,0 +1,156 @@
---
mediaserver_namespace: "mediaserver"
# Shared PVC for all services
mediaserver_storage_class: "" # empty = default StorageClass (nfs-master01)
mediaserver_data_size: "200Gi"
mediaserver_pvc_name: "mediaserver-data"
# UID/GID for all linuxserver containers
mediaserver_puid: "1000"
mediaserver_pgid: "1000"
mediaserver_timezone: "Europe/Moscow"
# bjw-s/app-template chart
mediaserver_app_template_repo: "https://bjw-s.github.io/helm-charts"
mediaserver_app_template_version: "" # empty = auto-detect latest
# ─── Plex ─────────────────────────────────────────────────────────────────────
mediaserver_plex_enabled: true
# Claim token from https://plex.tv/claim (valid 4 min). Leave empty after first run.
mediaserver_plex_claim_token: "{{ vault_plex_claim_token | default('') }}"
mediaserver_plex_ingress_enabled: false
mediaserver_plex_ingress_host: "plex.local"
mediaserver_plex_resources:
requests:
cpu: "200m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "4Gi"
# ─── Transmission ─────────────────────────────────────────────────────────────
mediaserver_transmission_enabled: true
mediaserver_transmission_ingress_enabled: false
mediaserver_transmission_ingress_host: "transmission.local"
mediaserver_transmission_password: "{{ vault_transmission_password | default('transmission') }}"
mediaserver_transmission_peer_port: 51413
mediaserver_transmission_resources:
requests:
cpu: "100m"
memory: "128Mi"
limits:
cpu: "500m"
memory: "512Mi"
# ─── Sonarr ───────────────────────────────────────────────────────────────────
mediaserver_sonarr_enabled: true
mediaserver_sonarr_ingress_enabled: false
mediaserver_sonarr_ingress_host: "sonarr.local"
mediaserver_sonarr_resources:
requests:
cpu: "50m"
memory: "128Mi"
limits:
cpu: "300m"
memory: "512Mi"
# ─── Radarr ───────────────────────────────────────────────────────────────────
mediaserver_radarr_enabled: true
mediaserver_radarr_ingress_enabled: false
mediaserver_radarr_ingress_host: "radarr.local"
mediaserver_radarr_resources:
requests:
cpu: "50m"
memory: "128Mi"
limits:
cpu: "300m"
memory: "512Mi"
# ─── Lidarr ───────────────────────────────────────────────────────────────────
mediaserver_lidarr_enabled: true
mediaserver_lidarr_ingress_enabled: false
mediaserver_lidarr_ingress_host: "lidarr.local"
mediaserver_lidarr_resources:
requests:
cpu: "50m"
memory: "64Mi"
limits:
cpu: "300m"
memory: "256Mi"
# ─── Bazarr ───────────────────────────────────────────────────────────────────
mediaserver_bazarr_enabled: true
mediaserver_bazarr_ingress_enabled: false
mediaserver_bazarr_ingress_host: "bazarr.local"
mediaserver_bazarr_resources:
requests:
cpu: "50m"
memory: "64Mi"
limits:
cpu: "200m"
memory: "256Mi"
# ─── Prowlarr ─────────────────────────────────────────────────────────────────
mediaserver_prowlarr_enabled: true
mediaserver_prowlarr_ingress_enabled: false
mediaserver_prowlarr_ingress_host: "prowlarr.local"
mediaserver_prowlarr_resources:
requests:
cpu: "50m"
memory: "64Mi"
limits:
cpu: "200m"
memory: "256Mi"
# ─── Hysteria2 sidecar (Prowlarr pod only) ────────────────────────────────────
# SOCKS5 proxy on 127.0.0.1:1080 — Prowlarr uses it to reach blocked trackers
mediaserver_hysteria2_enabled: true
# Server address — e.g. "example.com:443" or "1.2.3.4:443"
mediaserver_hysteria2_server: "{{ vault_hysteria2_server | default('') }}"
# Authentication password
mediaserver_hysteria2_auth: "{{ vault_hysteria2_auth | default('') }}"
# Optional obfuscation: type "salamander" + password, or leave both empty
mediaserver_hysteria2_obfs_type: ""
mediaserver_hysteria2_obfs_password: ""
# Skip TLS verification (self-signed certs on the server side)
mediaserver_hysteria2_insecure: false
mediaserver_hysteria2_resources:
requests:
cpu: "50m"
memory: "32Mi"
limits:
cpu: "200m"
memory: "128Mi"
# ─── Overseerr ────────────────────────────────────────────────────────────────
mediaserver_overseerr_enabled: true
mediaserver_overseerr_ingress_enabled: false
mediaserver_overseerr_ingress_host: "overseerr.local"
mediaserver_overseerr_resources:
requests:
cpu: "50m"
memory: "128Mi"
limits:
cpu: "500m"
memory: "512Mi"
# ─── Samba ────────────────────────────────────────────────────────────────────
# LoadBalancer service — gets LAN IP from kube-vip
mediaserver_samba_enabled: true
mediaserver_samba_user: "media"
mediaserver_samba_password: "{{ vault_samba_password | default('media') }}"
mediaserver_samba_workgroup: "WORKGROUP"
mediaserver_samba_share_name: "media"
# Optional: set a specific LAN IP for Samba (kube-vip annotation)
# Leave empty to get a dynamic IP from kube-vip pool
mediaserver_samba_static_ip: ""
# ─── Watchtower (optional) ────────────────────────────────────────────────────
# Auto-updates container images in the mediaserver namespace
mediaserver_watchtower_enabled: false
mediaserver_watchtower_schedule: "0 0 4 * * *" # cron: 4am every day
# ─── Ingress ──────────────────────────────────────────────────────────────────
mediaserver_ingress_class: "nginx"
mediaserver_ingress_annotations: {}


@@ -0,0 +1,466 @@
---
- name: Add bjw-s Helm repo
kubernetes.core.helm_repository:
name: bjw-s
repo_url: "{{ mediaserver_app_template_repo }}"
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Update Helm repos
ansible.builtin.command: helm repo update
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
changed_when: false
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Auto-detect latest app-template version
ansible.builtin.command: >
helm search repo bjw-s/app-template --output json
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
register: app_template_search
changed_when: false
when: mediaserver_app_template_version == ""
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Set app-template version fact
ansible.builtin.set_fact:
_app_template_version: >-
{{ (mediaserver_app_template_version != '')
| ternary(mediaserver_app_template_version,
(app_template_search.stdout | from_json)[0]['version']) }}
run_once: true
- name: Show app-template version
ansible.builtin.debug:
msg: "Using bjw-s/app-template version: {{ _app_template_version }}"
run_once: true
- name: Create mediaserver namespace
  # shell (not command) is required here: the task relies on a pipe
  ansible.builtin.shell: >
    k3s kubectl create namespace {{ mediaserver_namespace }}
    --dry-run=client -o yaml | k3s kubectl apply -f -
  become: true
  delegate_to: "{{ groups['k3s_master'][0] }}"
  run_once: true
  changed_when: false
# ── Shared PVC ────────────────────────────────────────────────────────────────
- name: Template shared PVC
ansible.builtin.template:
src: pvc.yaml.j2
dest: /tmp/mediaserver-pvc.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
- name: Apply shared PVC
kubernetes.core.k8s:
src: /tmp/mediaserver-pvc.yaml
state: present
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
# ── Directory init job ────────────────────────────────────────────────────────
- name: Template init-dirs Job
ansible.builtin.template:
src: init-dirs-job.yaml.j2
dest: /tmp/mediaserver-init-dirs-job.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
- name: Apply init-dirs Job
kubernetes.core.k8s:
src: /tmp/mediaserver-init-dirs-job.yaml
state: present
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Wait for init-dirs Job to complete
ansible.builtin.command: >
k3s kubectl -n {{ mediaserver_namespace }}
wait job/mediaserver-init-dirs
--for=condition=complete --timeout=120s
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
changed_when: false
retries: 3
delay: 10
# ── Hysteria2 Secret (if enabled) ─────────────────────────────────────────────
- name: Template Hysteria2 Secret
ansible.builtin.template:
src: hysteria2-secret.yaml.j2
dest: /tmp/mediaserver-hysteria2-secret.yaml
mode: '0600'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
when: mediaserver_prowlarr_enabled and mediaserver_hysteria2_enabled
- name: Apply Hysteria2 Secret
kubernetes.core.k8s:
src: /tmp/mediaserver-hysteria2-secret.yaml
state: present
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
when: mediaserver_prowlarr_enabled and mediaserver_hysteria2_enabled
# ── Plex ──────────────────────────────────────────────────────────────────────
- name: Template Plex values
ansible.builtin.template:
src: plex-values.yaml.j2
dest: /tmp/mediaserver-plex-values.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
when: mediaserver_plex_enabled
- name: Deploy Plex via Helm
kubernetes.core.helm:
name: plex
chart_ref: bjw-s/app-template
chart_version: "{{ _app_template_version }}"
release_namespace: "{{ mediaserver_namespace }}"
create_namespace: false
wait: true
timeout: "5m0s"
values_files:
- /tmp/mediaserver-plex-values.yaml
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
when: mediaserver_plex_enabled
# ── Transmission ──────────────────────────────────────────────────────────────
- name: Template Transmission values
ansible.builtin.template:
src: transmission-values.yaml.j2
dest: /tmp/mediaserver-transmission-values.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
when: mediaserver_transmission_enabled
- name: Deploy Transmission via Helm
kubernetes.core.helm:
name: transmission
chart_ref: bjw-s/app-template
chart_version: "{{ _app_template_version }}"
release_namespace: "{{ mediaserver_namespace }}"
create_namespace: false
wait: true
timeout: "5m0s"
values_files:
- /tmp/mediaserver-transmission-values.yaml
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
when: mediaserver_transmission_enabled
# ── Sonarr ────────────────────────────────────────────────────────────────────
- name: Template Sonarr values
ansible.builtin.template:
src: sonarr-values.yaml.j2
dest: /tmp/mediaserver-sonarr-values.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
when: mediaserver_sonarr_enabled
- name: Deploy Sonarr via Helm
kubernetes.core.helm:
name: sonarr
chart_ref: bjw-s/app-template
chart_version: "{{ _app_template_version }}"
release_namespace: "{{ mediaserver_namespace }}"
create_namespace: false
wait: true
timeout: "5m0s"
values_files:
- /tmp/mediaserver-sonarr-values.yaml
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
when: mediaserver_sonarr_enabled
# ── Radarr ────────────────────────────────────────────────────────────────────
- name: Template Radarr values
ansible.builtin.template:
src: radarr-values.yaml.j2
dest: /tmp/mediaserver-radarr-values.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
when: mediaserver_radarr_enabled
- name: Deploy Radarr via Helm
kubernetes.core.helm:
name: radarr
chart_ref: bjw-s/app-template
chart_version: "{{ _app_template_version }}"
release_namespace: "{{ mediaserver_namespace }}"
create_namespace: false
wait: true
timeout: "5m0s"
values_files:
- /tmp/mediaserver-radarr-values.yaml
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
when: mediaserver_radarr_enabled
# ── Lidarr ────────────────────────────────────────────────────────────────────
- name: Template Lidarr values
ansible.builtin.template:
src: lidarr-values.yaml.j2
dest: /tmp/mediaserver-lidarr-values.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
when: mediaserver_lidarr_enabled
- name: Deploy Lidarr via Helm
kubernetes.core.helm:
name: lidarr
chart_ref: bjw-s/app-template
chart_version: "{{ _app_template_version }}"
release_namespace: "{{ mediaserver_namespace }}"
create_namespace: false
wait: true
timeout: "5m0s"
values_files:
- /tmp/mediaserver-lidarr-values.yaml
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
when: mediaserver_lidarr_enabled
# ── Bazarr ────────────────────────────────────────────────────────────────────
- name: Template Bazarr values
ansible.builtin.template:
src: bazarr-values.yaml.j2
dest: /tmp/mediaserver-bazarr-values.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
when: mediaserver_bazarr_enabled
- name: Deploy Bazarr via Helm
kubernetes.core.helm:
name: bazarr
chart_ref: bjw-s/app-template
chart_version: "{{ _app_template_version }}"
release_namespace: "{{ mediaserver_namespace }}"
create_namespace: false
wait: true
timeout: "5m0s"
values_files:
- /tmp/mediaserver-bazarr-values.yaml
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
when: mediaserver_bazarr_enabled
# ── Prowlarr (with Hysteria2 sidecar) ─────────────────────────────────────────
- name: Template Prowlarr values
ansible.builtin.template:
src: prowlarr-values.yaml.j2
dest: /tmp/mediaserver-prowlarr-values.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
when: mediaserver_prowlarr_enabled
- name: Deploy Prowlarr via Helm
kubernetes.core.helm:
name: prowlarr
chart_ref: bjw-s/app-template
chart_version: "{{ _app_template_version }}"
release_namespace: "{{ mediaserver_namespace }}"
create_namespace: false
wait: true
timeout: "5m0s"
values_files:
- /tmp/mediaserver-prowlarr-values.yaml
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
when: mediaserver_prowlarr_enabled
# ── Overseerr ─────────────────────────────────────────────────────────────────
- name: Template Overseerr values
ansible.builtin.template:
src: overseerr-values.yaml.j2
dest: /tmp/mediaserver-overseerr-values.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
when: mediaserver_overseerr_enabled
- name: Deploy Overseerr via Helm
kubernetes.core.helm:
name: overseerr
chart_ref: bjw-s/app-template
chart_version: "{{ _app_template_version }}"
release_namespace: "{{ mediaserver_namespace }}"
create_namespace: false
wait: true
timeout: "5m0s"
values_files:
- /tmp/mediaserver-overseerr-values.yaml
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
when: mediaserver_overseerr_enabled
# ── Samba ─────────────────────────────────────────────────────────────────────
- name: Template Samba manifest
ansible.builtin.template:
src: samba.yaml.j2
dest: /tmp/mediaserver-samba.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
when: mediaserver_samba_enabled
- name: Apply Samba manifest
kubernetes.core.k8s:
src: /tmp/mediaserver-samba.yaml
state: present
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
when: mediaserver_samba_enabled
# ── Watchtower (optional) ─────────────────────────────────────────────────────
- name: Template Watchtower values
ansible.builtin.template:
src: watchtower-values.yaml.j2
dest: /tmp/mediaserver-watchtower-values.yaml
mode: '0644'
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
when: mediaserver_watchtower_enabled
- name: Deploy Watchtower via Helm
kubernetes.core.helm:
name: watchtower
chart_ref: bjw-s/app-template
chart_version: "{{ _app_template_version }}"
release_namespace: "{{ mediaserver_namespace }}"
create_namespace: false
wait: true
timeout: "3m0s"
values_files:
- /tmp/mediaserver-watchtower-values.yaml
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
when: mediaserver_watchtower_enabled
# ── Status ────────────────────────────────────────────────────────────────────
- name: Wait for all Deployments to be ready
ansible.builtin.command: >
k3s kubectl -n {{ mediaserver_namespace }}
rollout status deployment --timeout=300s
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
changed_when: false
retries: 3
delay: 15
- name: Get Samba LoadBalancer IP
ansible.builtin.command: >
k3s kubectl -n {{ mediaserver_namespace }}
get svc samba -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
register: samba_ip
changed_when: false
when: mediaserver_samba_enabled
- name: Get mediaserver pods
ansible.builtin.command: >
k3s kubectl -n {{ mediaserver_namespace }} get pods -o wide
become: true
delegate_to: "{{ groups['k3s_master'][0] }}"
run_once: true
register: ms_pods
changed_when: false
- name: MediaServer status
ansible.builtin.debug:
msg:
- "=== MediaServer deployed to namespace: {{ mediaserver_namespace }} ==="
- "Plex: http://<node-ip>:32400 (or Ingress if enabled)"
- "Transmission: http://transmission.{{ mediaserver_namespace }}.svc:9091 (user: admin)"
- "Sonarr: http://sonarr.{{ mediaserver_namespace }}.svc:8989"
- "Radarr: http://radarr.{{ mediaserver_namespace }}.svc:7878"
- "Lidarr: http://lidarr.{{ mediaserver_namespace }}.svc:8686"
- "Bazarr: http://bazarr.{{ mediaserver_namespace }}.svc:6767"
- "Prowlarr: http://prowlarr.{{ mediaserver_namespace }}.svc:9696 (proxy: 127.0.0.1:1080)"
- "Overseerr: http://overseerr.{{ mediaserver_namespace }}.svc:5055"
- "Samba LAN IP: {{ samba_ip.stdout | default('pending') }} (\\\\<ip>\\{{ mediaserver_samba_share_name }})"
- ""
- "To connect from LAN: smb://{{ samba_ip.stdout | default('<samba-ip>') }}/{{ mediaserver_samba_share_name }}"
- " User: {{ mediaserver_samba_user }} Password: (vault_samba_password)"
run_once: true
- name: MediaServer pods
ansible.builtin.debug:
msg: "{{ ms_pods.stdout_lines }}"
run_once: true


@@ -0,0 +1,76 @@
---
controllers:
main:
type: deployment
replicas: 1
strategy: Recreate
containers:
main:
image:
repository: lscr.io/linuxserver/bazarr
tag: latest
pullPolicy: IfNotPresent
env:
PUID: "{{ mediaserver_puid }}"
PGID: "{{ mediaserver_pgid }}"
TZ: "{{ mediaserver_timezone }}"
resources:
requests:
cpu: "{{ mediaserver_bazarr_resources.requests.cpu }}"
memory: "{{ mediaserver_bazarr_resources.requests.memory }}"
limits:
cpu: "{{ mediaserver_bazarr_resources.limits.cpu }}"
memory: "{{ mediaserver_bazarr_resources.limits.memory }}"
probes:
liveness:
enabled: true
custom: true
spec:
httpGet:
path: /ping
port: 6767
initialDelaySeconds: 30
periodSeconds: 30
failureThreshold: 5
service:
main:
controller: main
type: ClusterIP
ports:
http:
port: 6767
protocol: TCP
ingress:
main:
enabled: {{ mediaserver_bazarr_ingress_enabled | lower }}
{% if mediaserver_bazarr_ingress_enabled %}
className: "{{ mediaserver_ingress_class }}"
{% if mediaserver_ingress_annotations %}
annotations:
{% for key, value in mediaserver_ingress_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
hosts:
- host: "{{ mediaserver_bazarr_ingress_host }}"
paths:
- path: /
pathType: Prefix
service:
name: main
port: http
{% endif %}
persistence:
data:
type: persistentVolumeClaim
existingClaim: {{ mediaserver_pvc_name }}
advancedMounts:
main:
main:
- path: /config
subPath: config/bazarr
- path: /data
subPath: data


@@ -0,0 +1,26 @@
---
apiVersion: v1
kind: Secret
metadata:
name: hysteria2-config
namespace: {{ mediaserver_namespace }}
type: Opaque
stringData:
config.yaml: |
server: {{ mediaserver_hysteria2_server }}
auth: {{ mediaserver_hysteria2_auth }}
{% if mediaserver_hysteria2_obfs_type != "" %}
obfs:
type: {{ mediaserver_hysteria2_obfs_type }}
{{ mediaserver_hysteria2_obfs_type }}:
password: {{ mediaserver_hysteria2_obfs_password }}
{% endif %}
{% if mediaserver_hysteria2_insecure %}
tls:
insecure: true
{% endif %}
socks5:
listen: 127.0.0.1:1080
transport:
udp:
hopInterval: 30s


@@ -0,0 +1,47 @@
---
apiVersion: batch/v1
kind: Job
metadata:
name: mediaserver-init-dirs
namespace: {{ mediaserver_namespace }}
spec:
ttlSecondsAfterFinished: 120
template:
spec:
restartPolicy: OnFailure
securityContext:
runAsUser: {{ mediaserver_puid }}
runAsGroup: {{ mediaserver_pgid }}
fsGroup: {{ mediaserver_pgid }}
containers:
- name: init
image: busybox:latest
command:
- /bin/sh
- -c
- |
set -e
mkdir -p /data/config/plex
mkdir -p /data/config/transmission
mkdir -p /data/config/sonarr
mkdir -p /data/config/radarr
mkdir -p /data/config/lidarr
mkdir -p /data/config/bazarr
mkdir -p /data/config/prowlarr
mkdir -p /data/config/overseerr
mkdir -p /data/data/downloads/complete
mkdir -p /data/data/downloads/incomplete
mkdir -p /data/data/movies
mkdir -p /data/data/series
mkdir -p /data/data/music
echo "Directory structure created successfully"
ls -la /data/
ls -la /data/config/
ls -la /data/data/
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
persistentVolumeClaim:
claimName: {{ mediaserver_pvc_name }}


@@ -0,0 +1,76 @@
---
controllers:
main:
type: deployment
replicas: 1
strategy: Recreate
containers:
main:
image:
repository: lscr.io/linuxserver/lidarr
tag: latest
pullPolicy: IfNotPresent
env:
PUID: "{{ mediaserver_puid }}"
PGID: "{{ mediaserver_pgid }}"
TZ: "{{ mediaserver_timezone }}"
resources:
requests:
cpu: "{{ mediaserver_lidarr_resources.requests.cpu }}"
memory: "{{ mediaserver_lidarr_resources.requests.memory }}"
limits:
cpu: "{{ mediaserver_lidarr_resources.limits.cpu }}"
memory: "{{ mediaserver_lidarr_resources.limits.memory }}"
probes:
liveness:
enabled: true
custom: true
spec:
httpGet:
path: /ping
port: 8686
initialDelaySeconds: 30
periodSeconds: 30
failureThreshold: 5
service:
main:
controller: main
type: ClusterIP
ports:
http:
port: 8686
protocol: TCP
ingress:
main:
enabled: {{ mediaserver_lidarr_ingress_enabled | lower }}
{% if mediaserver_lidarr_ingress_enabled %}
className: "{{ mediaserver_ingress_class }}"
{% if mediaserver_ingress_annotations %}
annotations:
{% for key, value in mediaserver_ingress_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
hosts:
- host: "{{ mediaserver_lidarr_ingress_host }}"
paths:
- path: /
pathType: Prefix
service:
name: main
port: http
{% endif %}
persistence:
data:
type: persistentVolumeClaim
existingClaim: {{ mediaserver_pvc_name }}
advancedMounts:
main:
main:
- path: /config
subPath: config/lidarr
- path: /data
subPath: data


@@ -0,0 +1,74 @@
---
controllers:
main:
type: deployment
replicas: 1
strategy: Recreate
containers:
main:
image:
repository: lscr.io/linuxserver/overseerr
tag: latest
pullPolicy: IfNotPresent
env:
PUID: "{{ mediaserver_puid }}"
PGID: "{{ mediaserver_pgid }}"
TZ: "{{ mediaserver_timezone }}"
resources:
requests:
cpu: "{{ mediaserver_overseerr_resources.requests.cpu }}"
memory: "{{ mediaserver_overseerr_resources.requests.memory }}"
limits:
cpu: "{{ mediaserver_overseerr_resources.limits.cpu }}"
memory: "{{ mediaserver_overseerr_resources.limits.memory }}"
probes:
liveness:
enabled: true
custom: true
spec:
httpGet:
path: /api/v1/status
port: 5055
initialDelaySeconds: 30
periodSeconds: 30
failureThreshold: 5
service:
main:
controller: main
type: ClusterIP
ports:
http:
port: 5055
protocol: TCP
ingress:
main:
enabled: {{ mediaserver_overseerr_ingress_enabled | lower }}
{% if mediaserver_overseerr_ingress_enabled %}
className: "{{ mediaserver_ingress_class }}"
{% if mediaserver_ingress_annotations %}
annotations:
{% for key, value in mediaserver_ingress_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
hosts:
- host: "{{ mediaserver_overseerr_ingress_host }}"
paths:
- path: /
pathType: Prefix
service:
name: main
port: http
{% endif %}
persistence:
data:
type: persistentVolumeClaim
existingClaim: {{ mediaserver_pvc_name }}
advancedMounts:
main:
main:
- path: /config
subPath: config/overseerr


@@ -0,0 +1,89 @@
---
controllers:
main:
type: deployment
replicas: 1
strategy: Recreate
containers:
main:
image:
repository: lscr.io/linuxserver/plex
tag: latest
pullPolicy: IfNotPresent
env:
PUID: "{{ mediaserver_puid }}"
PGID: "{{ mediaserver_pgid }}"
TZ: "{{ mediaserver_timezone }}"
VERSION: docker
{% if mediaserver_plex_claim_token != "" %}
PLEX_CLAIM: "{{ mediaserver_plex_claim_token }}"
{% endif %}
resources:
requests:
cpu: "{{ mediaserver_plex_resources.requests.cpu }}"
memory: "{{ mediaserver_plex_resources.requests.memory }}"
limits:
cpu: "{{ mediaserver_plex_resources.limits.cpu }}"
memory: "{{ mediaserver_plex_resources.limits.memory }}"
probes:
liveness:
enabled: true
custom: true
spec:
httpGet:
path: /identity
port: 32400
initialDelaySeconds: 60
periodSeconds: 30
failureThreshold: 5
readiness:
enabled: true
custom: true
spec:
httpGet:
path: /identity
port: 32400
initialDelaySeconds: 30
periodSeconds: 15
service:
main:
controller: main
type: ClusterIP
ports:
http:
port: 32400
protocol: TCP
ingress:
main:
enabled: {{ mediaserver_plex_ingress_enabled | lower }}
{% if mediaserver_plex_ingress_enabled %}
className: "{{ mediaserver_ingress_class }}"
{% if mediaserver_ingress_annotations %}
annotations:
{% for key, value in mediaserver_ingress_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
hosts:
- host: "{{ mediaserver_plex_ingress_host }}"
paths:
- path: /
pathType: Prefix
service:
name: main
port: http
{% endif %}
persistence:
data:
type: persistentVolumeClaim
existingClaim: {{ mediaserver_pvc_name }}
advancedMounts:
main:
main:
- path: /config
subPath: config/plex
- path: /data
subPath: data

@@ -0,0 +1,168 @@
---
controllers:
main:
type: deployment
replicas: 1
strategy: Recreate
# Init container writes Prowlarr proxy config before the app starts
initContainers:
init-proxy-config:
image:
repository: busybox
tag: latest
command:
- /bin/sh
- -c
- |
CONFIG_DIR="/config"
CONFIG_FILE="$CONFIG_DIR/config.xml"
mkdir -p "$CONFIG_DIR"
if [ ! -f "$CONFIG_FILE" ]; then
cat > "$CONFIG_FILE" <<XMLEOF
<Config>
<LogLevel>info</LogLevel>
<ProxyEnabled>True</ProxyEnabled>
<ProxyType>Socks5</ProxyType>
<ProxyHostname>127.0.0.1</ProxyHostname>
<ProxyPort>1080</ProxyPort>
<ProxyBypassLocalAddresses>True</ProxyBypassLocalAddresses>
<ProxyBypassFilter>localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16</ProxyBypassFilter>
</Config>
XMLEOF
echo "Proxy config.xml created"
else
            # Update proxy fields in existing config (add missing tags, update existing ones)
            if grep -q "<ProxyEnabled>" "$CONFIG_FILE"; then
              sed -i 's|<ProxyEnabled>.*</ProxyEnabled>|<ProxyEnabled>True</ProxyEnabled>|g' "$CONFIG_FILE"
            else
              sed -i 's|</Config>|  <ProxyEnabled>True</ProxyEnabled>\n</Config>|' "$CONFIG_FILE"
            fi
if grep -q "<ProxyType>" "$CONFIG_FILE"; then
sed -i 's|<ProxyType>.*</ProxyType>|<ProxyType>Socks5</ProxyType>|g' "$CONFIG_FILE"
else
sed -i 's|</Config>| <ProxyType>Socks5</ProxyType>\n</Config>|' "$CONFIG_FILE"
fi
if grep -q "<ProxyHostname>" "$CONFIG_FILE"; then
sed -i 's|<ProxyHostname>.*</ProxyHostname>|<ProxyHostname>127.0.0.1</ProxyHostname>|g' "$CONFIG_FILE"
else
sed -i 's|</Config>| <ProxyHostname>127.0.0.1</ProxyHostname>\n</Config>|' "$CONFIG_FILE"
fi
if grep -q "<ProxyPort>" "$CONFIG_FILE"; then
sed -i 's|<ProxyPort>.*</ProxyPort>|<ProxyPort>1080</ProxyPort>|g' "$CONFIG_FILE"
else
sed -i 's|</Config>| <ProxyPort>1080</ProxyPort>\n</Config>|' "$CONFIG_FILE"
fi
echo "Proxy config.xml updated"
fi
securityContext:
runAsUser: {{ mediaserver_puid }}
runAsGroup: {{ mediaserver_pgid }}
volumeMounts:
- name: data
mountPath: /config
subPath: config/prowlarr
containers:
# Main Prowlarr container
main:
image:
repository: lscr.io/linuxserver/prowlarr
tag: latest
pullPolicy: IfNotPresent
env:
PUID: "{{ mediaserver_puid }}"
PGID: "{{ mediaserver_pgid }}"
TZ: "{{ mediaserver_timezone }}"
resources:
requests:
cpu: "{{ mediaserver_prowlarr_resources.requests.cpu }}"
memory: "{{ mediaserver_prowlarr_resources.requests.memory }}"
limits:
cpu: "{{ mediaserver_prowlarr_resources.limits.cpu }}"
memory: "{{ mediaserver_prowlarr_resources.limits.memory }}"
probes:
liveness:
enabled: true
custom: true
spec:
httpGet:
path: /ping
port: 9696
initialDelaySeconds: 30
periodSeconds: 30
failureThreshold: 5
# Hysteria2 SOCKS5 proxy sidecar — only Prowlarr uses it via 127.0.0.1:1080
hysteria2:
image:
repository: ghcr.io/apernet/hysteria
tag: app-latest
pullPolicy: IfNotPresent
command:
- /app/hysteria
- client
- --config
- /etc/hysteria/config.yaml
resources:
requests:
cpu: "{{ mediaserver_hysteria2_resources.requests.cpu }}"
memory: "{{ mediaserver_hysteria2_resources.requests.memory }}"
limits:
cpu: "{{ mediaserver_hysteria2_resources.limits.cpu }}"
memory: "{{ mediaserver_hysteria2_resources.limits.memory }}"
probes:
liveness:
enabled: false
readiness:
enabled: false
startup:
enabled: false
service:
main:
controller: main
type: ClusterIP
ports:
http:
port: 9696
protocol: TCP
ingress:
main:
enabled: {{ mediaserver_prowlarr_ingress_enabled | lower }}
{% if mediaserver_prowlarr_ingress_enabled %}
className: "{{ mediaserver_ingress_class }}"
{% if mediaserver_ingress_annotations %}
annotations:
{% for key, value in mediaserver_ingress_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
hosts:
- host: "{{ mediaserver_prowlarr_ingress_host }}"
paths:
- path: /
pathType: Prefix
service:
name: main
port: http
{% endif %}
persistence:
data:
type: persistentVolumeClaim
existingClaim: {{ mediaserver_pvc_name }}
advancedMounts:
main:
main:
- path: /config
subPath: config/prowlarr
init-proxy-config:
- path: /config
subPath: config/prowlarr
hysteria2-config:
type: secret
name: hysteria2-config
advancedMounts:
main:
hysteria2:
- path: /etc/hysteria/config.yaml
subPath: config.yaml
readOnly: true

@@ -0,0 +1,15 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ mediaserver_pvc_name }}
namespace: {{ mediaserver_namespace }}
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: {{ mediaserver_data_size }}
{% if mediaserver_storage_class != "" %}
storageClassName: {{ mediaserver_storage_class }}
{% endif %}

@@ -0,0 +1,76 @@
---
controllers:
main:
type: deployment
replicas: 1
strategy: Recreate
containers:
main:
image:
repository: lscr.io/linuxserver/radarr
tag: latest
pullPolicy: IfNotPresent
env:
PUID: "{{ mediaserver_puid }}"
PGID: "{{ mediaserver_pgid }}"
TZ: "{{ mediaserver_timezone }}"
resources:
requests:
cpu: "{{ mediaserver_radarr_resources.requests.cpu }}"
memory: "{{ mediaserver_radarr_resources.requests.memory }}"
limits:
cpu: "{{ mediaserver_radarr_resources.limits.cpu }}"
memory: "{{ mediaserver_radarr_resources.limits.memory }}"
probes:
liveness:
enabled: true
custom: true
spec:
httpGet:
path: /ping
port: 7878
initialDelaySeconds: 30
periodSeconds: 30
failureThreshold: 5
service:
main:
controller: main
type: ClusterIP
ports:
http:
port: 7878
protocol: TCP
ingress:
main:
enabled: {{ mediaserver_radarr_ingress_enabled | lower }}
{% if mediaserver_radarr_ingress_enabled %}
className: "{{ mediaserver_ingress_class }}"
{% if mediaserver_ingress_annotations %}
annotations:
{% for key, value in mediaserver_ingress_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
hosts:
- host: "{{ mediaserver_radarr_ingress_host }}"
paths:
- path: /
pathType: Prefix
service:
name: main
port: http
{% endif %}
persistence:
data:
type: persistentVolumeClaim
existingClaim: {{ mediaserver_pvc_name }}
advancedMounts:
main:
main:
- path: /config
subPath: config/radarr
- path: /data
subPath: data

@@ -0,0 +1,86 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: samba
namespace: {{ mediaserver_namespace }}
labels:
app: samba
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: samba
template:
metadata:
labels:
app: samba
spec:
securityContext:
runAsUser: 0 # Samba requires root to manage shares
containers:
- name: samba
image: dperson/samba:latest
imagePullPolicy: IfNotPresent
env:
- name: TZ
value: "{{ mediaserver_timezone }}"
# -u user;password -s share_name;path;browsable;readonly;guest;users
args:
- -u
- "{{ mediaserver_samba_user }};{{ mediaserver_samba_password }}"
- -s
- "{{ mediaserver_samba_share_name }};/data;yes;no;no;{{ mediaserver_samba_user }}"
- -W
- "{{ mediaserver_samba_workgroup }}"
- -g
- "ea support = yes"
ports:
- name: smb
containerPort: 445
protocol: TCP
- name: netbios
containerPort: 139
protocol: TCP
volumeMounts:
- name: data
mountPath: /data
subPath: data
resources:
requests:
cpu: "50m"
memory: "64Mi"
limits:
cpu: "200m"
memory: "256Mi"
volumes:
- name: data
persistentVolumeClaim:
claimName: {{ mediaserver_pvc_name }}
---
apiVersion: v1
kind: Service
metadata:
name: samba
namespace: {{ mediaserver_namespace }}
labels:
app: samba
{% if mediaserver_samba_static_ip != "" %}
annotations:
kube-vip.io/loadbalancerIPs: "{{ mediaserver_samba_static_ip }}"
{% endif %}
spec:
type: LoadBalancer
selector:
app: samba
ports:
- name: smb
port: 445
targetPort: 445
protocol: TCP
- name: netbios
port: 139
targetPort: 139
protocol: TCP

@@ -0,0 +1,76 @@
---
controllers:
main:
type: deployment
replicas: 1
strategy: Recreate
containers:
main:
image:
repository: lscr.io/linuxserver/sonarr
tag: latest
pullPolicy: IfNotPresent
env:
PUID: "{{ mediaserver_puid }}"
PGID: "{{ mediaserver_pgid }}"
TZ: "{{ mediaserver_timezone }}"
resources:
requests:
cpu: "{{ mediaserver_sonarr_resources.requests.cpu }}"
memory: "{{ mediaserver_sonarr_resources.requests.memory }}"
limits:
cpu: "{{ mediaserver_sonarr_resources.limits.cpu }}"
memory: "{{ mediaserver_sonarr_resources.limits.memory }}"
probes:
liveness:
enabled: true
custom: true
spec:
httpGet:
path: /ping
port: 8989
initialDelaySeconds: 30
periodSeconds: 30
failureThreshold: 5
service:
main:
controller: main
type: ClusterIP
ports:
http:
port: 8989
protocol: TCP
ingress:
main:
enabled: {{ mediaserver_sonarr_ingress_enabled | lower }}
{% if mediaserver_sonarr_ingress_enabled %}
className: "{{ mediaserver_ingress_class }}"
{% if mediaserver_ingress_annotations %}
annotations:
{% for key, value in mediaserver_ingress_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
hosts:
- host: "{{ mediaserver_sonarr_ingress_host }}"
paths:
- path: /
pathType: Prefix
service:
name: main
port: http
{% endif %}
persistence:
data:
type: persistentVolumeClaim
existingClaim: {{ mediaserver_pvc_name }}
advancedMounts:
main:
main:
- path: /config
subPath: config/sonarr
- path: /data
subPath: data

@@ -0,0 +1,90 @@
---
controllers:
main:
type: deployment
replicas: 1
strategy: Recreate
containers:
main:
image:
repository: lscr.io/linuxserver/transmission
tag: latest
pullPolicy: IfNotPresent
env:
PUID: "{{ mediaserver_puid }}"
PGID: "{{ mediaserver_pgid }}"
TZ: "{{ mediaserver_timezone }}"
TRANSMISSION_WEB_HOME: /combustion-release/
USER: admin
PASS: "{{ mediaserver_transmission_password }}"
PEERPORT: "{{ mediaserver_transmission_peer_port }}"
resources:
requests:
cpu: "{{ mediaserver_transmission_resources.requests.cpu }}"
memory: "{{ mediaserver_transmission_resources.requests.memory }}"
limits:
cpu: "{{ mediaserver_transmission_resources.limits.cpu }}"
memory: "{{ mediaserver_transmission_resources.limits.memory }}"
probes:
liveness:
enabled: true
custom: true
spec:
httpGet:
path: /transmission/web/
port: 9091
initialDelaySeconds: 30
periodSeconds: 30
failureThreshold: 5
service:
main:
controller: main
type: ClusterIP
ports:
http:
port: 9091
protocol: TCP
peer:
controller: main
type: ClusterIP
ports:
peer-tcp:
port: {{ mediaserver_transmission_peer_port }}
protocol: TCP
peer-udp:
port: {{ mediaserver_transmission_peer_port }}
protocol: UDP
ingress:
main:
enabled: {{ mediaserver_transmission_ingress_enabled | lower }}
{% if mediaserver_transmission_ingress_enabled %}
className: "{{ mediaserver_ingress_class }}"
{% if mediaserver_ingress_annotations %}
annotations:
{% for key, value in mediaserver_ingress_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
hosts:
- host: "{{ mediaserver_transmission_ingress_host }}"
paths:
- path: /
pathType: Prefix
service:
name: main
port: http
{% endif %}
persistence:
data:
type: persistentVolumeClaim
existingClaim: {{ mediaserver_pvc_name }}
advancedMounts:
main:
main:
- path: /config
subPath: config/transmission
- path: /data
subPath: data

@@ -0,0 +1,34 @@
---
controllers:
main:
type: deployment
replicas: 1
containers:
main:
image:
repository: containrrr/watchtower
tag: latest
pullPolicy: IfNotPresent
env:
TZ: "{{ mediaserver_timezone }}"
WATCHTOWER_SCHEDULE: "{{ mediaserver_watchtower_schedule }}"
WATCHTOWER_CLEANUP: "true"
WATCHTOWER_INCLUDE_RESTARTING: "true"
WATCHTOWER_NAMESPACE_LABEL: "mediaserver"
resources:
requests:
cpu: "10m"
memory: "32Mi"
limits:
cpu: "100m"
memory: "128Mi"
service:
main:
enabled: false
ingress:
main:
enabled: false
persistence: {}