docs: sync addon docs with explicit external/internal service modes

Documentation updated for the new addons (gitlab, redis, mongodb, kafka, kafka-ui, rabbitmq) and the new model of explicit dependency selection. Added and unified descriptions of the *_database_mode and *_redis_mode switches; updated the addon dependency table, configuration examples, and the list of vault secrets.
Sergey Antropoff
2026-04-29 23:21:04 +03:00
parent dde2fc8a8a
commit 38aaadbfb1
128 changed files with 2881 additions and 902 deletions

addons/ceph-rock/README.md (new file, 109 lines)

@@ -0,0 +1,109 @@
# Ceph-Rock / Rook-Ceph
Ceph-based distributed storage managed by the Rook operator. Provides:
- **Block storage** (RWO) — `rook-ceph-block` StorageClass
- **Filesystem storage** (RWX) — `rook-ceph-filesystem` StorageClass

Requires at least **3 nodes** with unused disks for the OSDs.

## Quick start
```yaml
# group_vars/all/addons.yml
addon_ceph_rock: true
```
```bash
make addon-ceph-rock
```
## Parameters
| Variable | Default | Description |
|---|---|---|
| `rook_ceph_mon_count` | `3` | Number of MONs |
| `rook_ceph_block_replica_count` | `3` | Block-storage replica count |
| `rook_ceph_devices` | `[]` | List of raw devices for OSDs |
| `rook_ceph_use_all_devices` | `false` | Automatically use all free disks |
| `rook_ceph_block_storage_class` | `rook-ceph-block` | StorageClass name (RWO) |
| `rook_ceph_filesystem_storage_class` | `rook-ceph-filesystem` | StorageClass name (RWX) |
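Putting several of these parameters together, a possible `group_vars/all/addons.yml` for a small three-node cluster with explicit disks and a published dashboard could look like this (the host name and device paths are assumptions, adjust to your environment):

```yaml
addon_ceph_rock: true
rook_ceph_devices:          # assumed device paths; verify on each node first
  - "/dev/sdb"
rook_ceph_dashboard_ingress_enabled: true
rook_ceph_dashboard_ingress_host: "ceph.example.com"   # assumed host
rook_ceph_dashboard_ingress_tls: true
```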
## Single-node configuration
```yaml
rook_ceph_mon_count: 1
rook_ceph_allow_multiple_mon_per_node: true
rook_ceph_block_replica_count: 1
```
## Using specific disks
```yaml
rook_ceph_devices:
  - "/dev/sdb"
  - "/dev/sdc"
```
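Before listing devices, it can help to confirm on each storage node that the disks are truly raw, i.e. carry no filesystem and are not mounted; a quick check, assuming standard util-linux tooling is installed:

```shell
# Show whole disks only (-d), without a header (-n): name, size, filesystem, mountpoint.
# A disk is a valid OSD candidate when both FSTYPE and MOUNTPOINT are empty.
lsblk -d -n -o NAME,SIZE,FSTYPE,MOUNTPOINT
```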
## Using the storage from PVCs
### Block storage (RWO)
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-db-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 20Gi
```
### Filesystem storage (RWX)
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: [ReadWriteMany]
  storageClassName: rook-ceph-filesystem
  resources:
    requests:
      storage: 50Gi
```
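An RWX claim like `shared-data` above can be mounted by several pods at once; a minimal illustrative consumer (the pod name and image choice are assumptions, not part of the addon):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-reader   # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "ls /data && sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-data
```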
## Dashboard
Enabled by default. Access:
```bash
kubectl -n rook-ceph port-forward svc/rook-ceph-mgr-dashboard 7000
# http://localhost:7000
# Login: admin; password:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
  -o jsonpath='{.data.password}' | base64 -d
```
Or via Ingress:
```yaml
rook_ceph_dashboard_ingress_enabled: true
rook_ceph_dashboard_ingress_host: "ceph.example.com"
```
## Diagnostics
```bash
kubectl -n rook-ceph get cephcluster
kubectl -n rook-ceph get pods
kubectl exec -n rook-ceph deployment/rook-ceph-tools -- ceph status
kubectl exec -n rook-ceph deployment/rook-ceph-tools -- ceph osd status
```
## Official resources
- Website: [https://rook.io/](https://rook.io/)
- Documentation: [https://rook.io/docs/rook/latest-release/Storage-Configuration/Ceph-CSI/ceph-csi-drivers/](https://rook.io/docs/rook/latest-release/Storage-Configuration/Ceph-CSI/ceph-csi-drivers/)
- Helm chart / software versions: [https://artifacthub.io/packages/helm/rook-release/rook-ceph](https://artifacthub.io/packages/helm/rook-release/rook-ceph)


@@ -0,0 +1,7 @@
---
- name: Install Ceph-Rock
  hosts: k3s_master[0]
  gather_facts: false
  become: true
  roles:
    - role: "{{ playbook_dir }}/role"


@@ -0,0 +1,66 @@
---
# Rook Helm chart version
rook_ceph_version: "1.14.0"
# Rook/Ceph namespace
rook_ceph_namespace: "rook-ceph"
# Rook Helm repository
rook_ceph_chart_repo: "https://charts.rook.io/release"
# Ceph daemon image
rook_ceph_image: "quay.io/ceph/ceph:v18.2.2"
# Monitors (MON); for single-node use count=1 and allowMultiplePerNode=true
rook_ceph_mon_count: 3
# Allow multiple MONs on one node (lab setups)
rook_ceph_allow_multiple_mon_per_node: false
# Host path for Ceph data (directory-store OSDs);
# to use raw block devices instead, leave this as is and set rook_ceph_devices
rook_ceph_data_dir: "/var/lib/rook"
# Raw block devices for OSDs (e.g. ["/dev/sdb", "/dev/sdc"]);
# when empty, Ceph uses the rook_ceph_data_dir directory
rook_ceph_devices: []
# Automatically use all available (unmounted) disks
rook_ceph_use_all_devices: false

# Dashboard
# Enable the Ceph Dashboard
rook_ceph_dashboard_enabled: true

# StorageClasses
# Block pool name
rook_ceph_block_pool_name: "replicapool"
# Number of replicas in the pool (reduce to 1 for single-node)
rook_ceph_block_replica_count: 3
# RBD StorageClass name
rook_ceph_block_storage_class: "rook-ceph-block"
# Make the RBD StorageClass the default
rook_ceph_block_storage_class_default: false
# CephFS name
rook_ceph_filesystem_name: "ceph-filesystem"
# CephFS StorageClass name
rook_ceph_filesystem_storage_class: "rook-ceph-filesystem"

# Ingress for the Ceph Dashboard
# Expose the Ceph Dashboard via Ingress
rook_ceph_dashboard_ingress_enabled: false
# Dashboard host
rook_ceph_dashboard_ingress_host: "ceph.local"
# IngressClass
rook_ceph_dashboard_ingress_class: "{{ ingress_nginx_class_name | default('nginx') }}"
# TLS
rook_ceph_dashboard_ingress_tls: false
# cert-manager issuer
rook_ceph_dashboard_ingress_cert_issuer: "{{ cert_manager_default_issuer_name | default('letsencrypt-prod') }}"

# Metrics
# Export Ceph metrics for Prometheus
rook_ceph_metrics_enabled: true
# The ServiceMonitor is created only when addon_prometheus_stack: true


@@ -0,0 +1,30 @@
---
- name: Converge — ceph-rock (rook-ceph) template tests
  hosts: all
  become: false
  gather_facts: false
  vars:
    rook_ceph_namespace: rook-ceph
    rook_ceph_image: "quay.io/ceph/ceph:v18.2.2"
    rook_ceph_mon_count: 3
    rook_ceph_allow_multiple_mon_per_node: false
    rook_ceph_data_dir: "/var/lib/rook"
    rook_ceph_devices: []
    rook_ceph_use_all_devices: false
    rook_ceph_dashboard_enabled: true
    rook_ceph_block_pool_name: "replicapool"
    rook_ceph_block_replica_count: 3
    rook_ceph_block_storage_class: "ceph-block"
    rook_ceph_block_storage_class_default: false
    rook_ceph_filesystem_name: "ceph-filesystem"
    rook_ceph_filesystem_storage_class: "ceph-filesystem"
    rook_ceph_metrics_enabled: false
    addon_prometheus_stack: false
  tasks:
    - name: Render ceph-cluster.yaml.j2
      ansible.builtin.template:
        src: "{{ playbook_dir }}/../../templates/ceph-cluster.yaml.j2"
        dest: /tmp/ceph-cluster.yaml
        mode: "0644"


@@ -0,0 +1,28 @@
---
driver:
  name: docker
platforms:
  - name: master01
    image: geerlingguy/docker-ubuntu2204-ansible:latest
    pre_build_image: true
    groups:
      - k3s_master
provisioner:
  name: ansible
  playbooks:
    converge: converge.yml
    verify: verify.yml
  config_options:
    defaults:
      interpreter_python: auto_silent
verifier:
  name: ansible
lint: |
  set -e
  yamllint .
  ansible-lint


@@ -0,0 +1,50 @@
---
- name: Verify — ceph-rock templates
  hosts: all
  become: false
  gather_facts: false
  tasks:
    - name: Read rendered ceph-cluster manifest
      ansible.builtin.slurp:
        src: /tmp/ceph-cluster.yaml
      register: manifest_raw

    - name: Set content fact
      ansible.builtin.set_fact:
        content: "{{ manifest_raw.content | b64decode }}"

    - name: Assert CephCluster kind is present
      ansible.builtin.assert:
        that: "'kind: CephCluster' in content"
        fail_msg: "kind: CephCluster not found"

    - name: Assert CephBlockPool kind is present
      ansible.builtin.assert:
        that: "'kind: CephBlockPool' in content"
        fail_msg: "kind: CephBlockPool not found"

    - name: Assert StorageClass is present (for RBD)
      ansible.builtin.assert:
        that: "'kind: StorageClass' in content"
        fail_msg: "kind: StorageClass not found"

    - name: Assert CephFilesystem kind is present
      ansible.builtin.assert:
        that: "'kind: CephFilesystem' in content"
        fail_msg: "kind: CephFilesystem not found"

    - name: Assert mon count is 3
      ansible.builtin.assert:
        that: "'count: 3' in content"
        fail_msg: "mon count: 3 not found"

    - name: Assert namespace is rook-ceph
      ansible.builtin.assert:
        that: "'namespace: rook-ceph' in content"
        fail_msg: "namespace: rook-ceph not found"

    - name: Assert ceph image is set
      ansible.builtin.assert:
        that: "'quay.io/ceph/ceph' in content"
        fail_msg: "ceph image not found"


@@ -0,0 +1,98 @@
---
- name: Add Rook Helm repo
  kubernetes.core.helm_repository:
    name: rook-release
    repo_url: "{{ rook_ceph_chart_repo }}"
  environment:
    KUBECONFIG: "{{ k3s_kubeconfig_path }}"

- name: Install Rook-Ceph operator via Helm
  kubernetes.core.helm:
    name: rook-ceph
    chart_ref: rook-release/rook-ceph
    chart_version: "{{ rook_ceph_version }}"
    release_namespace: "{{ rook_ceph_namespace }}"
    create_namespace: true
    wait: true
    timeout: "10m0s"
    values:
      tolerations:
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Exists"
          effect: "NoSchedule"
      discover:
        tolerations:
          - operator: "Exists"
      csi:
        provisionerTolerations:
          - key: "node-role.kubernetes.io/control-plane"
            operator: "Exists"
            effect: "NoSchedule"
        pluginTolerations:
          - operator: "Exists"
      monitoring:
        enabled: "{{ rook_ceph_metrics_enabled | bool and addon_prometheus_stack | default(false) | bool }}"
  environment:
    KUBECONFIG: "{{ k3s_kubeconfig_path }}"

- name: Wait for Rook operator to be ready
  ansible.builtin.command: >
    k3s kubectl -n {{ rook_ceph_namespace }}
    rollout status deployment/rook-ceph-operator --timeout=180s
  register: _rook_operator_rollout
  until: _rook_operator_rollout.rc == 0
  changed_when: false
  retries: 3
  delay: 10

- name: Template CephCluster + StorageClasses manifest
  ansible.builtin.template:
    src: ceph-cluster.yaml.j2
    dest: /tmp/ceph-cluster.yaml
    mode: '0644'

- name: Apply CephCluster + StorageClasses
  ansible.builtin.command: k3s kubectl apply -f /tmp/ceph-cluster.yaml
  changed_when: true

- name: Wait for the Ceph cluster to become Ready (may take several minutes)
  ansible.builtin.command: >
    k3s kubectl -n {{ rook_ceph_namespace }}
    wait cephcluster/rook-ceph
    --for=jsonpath='{.status.phase}'=Ready
    --timeout=600s
  changed_when: false
  retries: 5
  delay: 30
  failed_when: false

- name: Create Ceph Dashboard Ingress
  ansible.builtin.template:
    src: ceph-dashboard-ingress.yaml.j2
    dest: /tmp/ceph-dashboard-ingress.yaml
    mode: '0644'
  when: rook_ceph_dashboard_ingress_enabled | bool

- name: Apply Ceph Dashboard Ingress
  ansible.builtin.command: k3s kubectl apply -f /tmp/ceph-dashboard-ingress.yaml
  changed_when: true
  when: rook_ceph_dashboard_ingress_enabled | bool

- name: Get Ceph Dashboard admin password
  ansible.builtin.command: >
    k3s kubectl -n {{ rook_ceph_namespace }}
    get secret rook-ceph-dashboard-password
    -o jsonpath='{.data.password}'
  register: _ceph_dashboard_password
  changed_when: false
  failed_when: false

- name: Show Rook-Ceph access info
  ansible.builtin.debug:
    msg:
      - "Rook-Ceph installed in namespace: {{ rook_ceph_namespace }}"
      - "StorageClass (block RWO): {{ rook_ceph_block_storage_class }}"
      - "StorageClass (filesystem RWX): {{ rook_ceph_filesystem_storage_class }}"
      - "Replicas: {{ rook_ceph_block_replica_count }} (set rook_ceph_block_replica_count=1 for single-node)"
      - "{% if rook_ceph_dashboard_ingress_enabled %}Dashboard: http{{ 's' if rook_ceph_dashboard_ingress_tls else '' }}://{{ rook_ceph_dashboard_ingress_host }}{% else %}Dashboard: kubectl port-forward svc/rook-ceph-mgr-dashboard -n {{ rook_ceph_namespace }} 7000:7000{% endif %}"
      - "Dashboard login: admin / {{ _ceph_dashboard_password.stdout | b64decode if _ceph_dashboard_password.rc == 0 else '(not created yet)' }}"
      - "Cluster status: kubectl -n {{ rook_ceph_namespace }} get cephcluster"
      - "Toolbox: kubectl -n {{ rook_ceph_namespace }} exec -it deploy/rook-ceph-tools -- bash"


@@ -0,0 +1,127 @@
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: {{ rook_ceph_namespace }}
spec:
  cephVersion:
    image: "{{ rook_ceph_image }}"
    allowUnsupported: false
  dataDirHostPath: "{{ rook_ceph_data_dir }}"
  mon:
    count: {{ rook_ceph_mon_count }}
    allowMultiplePerNode: {{ rook_ceph_allow_multiple_mon_per_node | lower }}
  mgr:
    count: 1
    modules:
      - name: pg_autoscaler
        enabled: true
  dashboard:
    enabled: {{ rook_ceph_dashboard_enabled | lower }}
    ssl: false
  monitoring:
    enabled: {{ (rook_ceph_metrics_enabled | bool and addon_prometheus_stack | default(false) | bool) | lower }}
  network:
    connections:
      encryption:
        enabled: false
      compression:
        enabled: false
  storage:
    useAllNodes: true
    useAllDevices: {{ rook_ceph_use_all_devices | lower }}
{% if rook_ceph_devices %}
    devices:
{% for dev in rook_ceph_devices %}
      - name: "{{ dev }}"
{% endfor %}
{% else %}
    directories:
      - path: "{{ rook_ceph_data_dir }}"
{% endif %}
  placement:
    all:
      tolerations:
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Exists"
          effect: "NoSchedule"
  disruptionManagement:
    managePodBudgets: true
    osdMaintenanceTimeout: 30
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: {{ rook_ceph_block_pool_name }}
  namespace: {{ rook_ceph_namespace }}
spec:
  failureDomain: host
  replicated:
    size: {{ rook_ceph_block_replica_count }}
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ rook_ceph_block_storage_class }}
  annotations:
    storageclass.kubernetes.io/is-default-class: "{{ rook_ceph_block_storage_class_default | lower }}"
provisioner: {{ rook_ceph_namespace }}.rbd.csi.ceph.com
parameters:
  clusterID: {{ rook_ceph_namespace }}
  pool: {{ rook_ceph_block_pool_name }}
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: {{ rook_ceph_namespace }}
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: {{ rook_ceph_namespace }}
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: {{ rook_ceph_namespace }}
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: {{ rook_ceph_filesystem_name }}
  namespace: {{ rook_ceph_namespace }}
spec:
  metadataPool:
    replicated:
      size: {{ rook_ceph_block_replica_count }}
  dataPools:
    - name: data0
      replicated:
        size: {{ rook_ceph_block_replica_count }}
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ rook_ceph_filesystem_storage_class }}
provisioner: {{ rook_ceph_namespace }}.cephfs.csi.ceph.com
parameters:
  clusterID: {{ rook_ceph_namespace }}
  fsName: {{ rook_ceph_filesystem_name }}
  pool: {{ rook_ceph_filesystem_name }}-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: {{ rook_ceph_namespace }}
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: {{ rook_ceph_namespace }}
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: {{ rook_ceph_namespace }}
reclaimPolicy: Delete
allowVolumeExpansion: true


@@ -0,0 +1,29 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: {{ rook_ceph_namespace }}
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
{% if rook_ceph_dashboard_ingress_tls %}
    cert-manager.io/cluster-issuer: "{{ rook_ceph_dashboard_ingress_cert_issuer }}"
{% endif %}
spec:
  ingressClassName: "{{ rook_ceph_dashboard_ingress_class }}"
  rules:
    - host: "{{ rook_ceph_dashboard_ingress_host }}"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rook-ceph-mgr-dashboard
                port:
                  number: 7000
{% if rook_ceph_dashboard_ingress_tls %}
  tls:
    - secretName: ceph-dashboard-tls
      hosts:
        - "{{ rook_ceph_dashboard_ingress_host }}"
{% endif %}