# Ceph-Rock / Rook-Ceph

Distributed storage built on Ceph and managed by the Rook operator. It provides:

- **Block storage** (RWO) — `rook-ceph-block` StorageClass
- **Filesystem storage** (RWX) — `rook-ceph-filesystem` StorageClass

Requires at least **3 nodes** with unused disks for the OSDs.

## Quick start

```yaml
# group_vars/all/addons.yml
addon_ceph_rock: true
```

```bash
make addon-ceph-rock
```

## Parameters

| Variable | Default | Description |
|---|---|---|
| `rook_ceph_mon_count` | `3` | Number of MONs |
| `rook_ceph_block_replica_count` | `3` | Block storage replica count |
| `rook_ceph_devices` | `[]` | List of raw devices for OSDs |
| `rook_ceph_use_all_devices` | `false` | Automatically use all free disks |
| `rook_ceph_block_storage_class` | `rook-ceph-block` | StorageClass name (RWO) |
| `rook_ceph_filesystem_storage_class` | `rook-ceph-filesystem` | StorageClass name (RWX) |

## Single-node configuration

```yaml
rook_ceph_mon_count: 1
rook_ceph_allow_multiple_mon_per_node: true
rook_ceph_block_replica_count: 1
```

## Using specific disks

```yaml
rook_ceph_devices:
  - "/dev/sdb"
  - "/dev/sdc"
```

## Using the storage in a PVC

### Block storage (RWO)

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-db-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 20Gi
```

### Filesystem storage (RWX)

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: [ReadWriteMany]
  storageClassName: rook-ceph-filesystem
  resources:
    requests:
      storage: 50Gi
```

## Dashboard

Enabled by default.

Access:

```bash
kubectl -n rook-ceph port-forward svc/rook-ceph-mgr-dashboard 7000
# http://localhost:7000
# Login: admin
kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
  -o jsonpath='{.data.password}' | base64 -d
```

Or via Ingress:

```yaml
rook_ceph_dashboard_ingress_enabled: true
rook_ceph_dashboard_ingress_host: "ceph.example.com"
```

## Diagnostics

```bash
kubectl -n rook-ceph get cephcluster
kubectl -n rook-ceph get pods
kubectl exec -n rook-ceph deployment/rook-ceph-tools -- ceph status
kubectl exec -n rook-ceph deployment/rook-ceph-tools -- ceph osd status
```

## Official resources

- Official site: [https://rook.io/](https://rook.io/)
- Official documentation: [https://rook.io/docs/rook/latest-release/Storage-Configuration/Ceph-CSI/ceph-csi-drivers/](https://rook.io/docs/rook/latest-release/Storage-Configuration/Ceph-CSI/ceph-csi-drivers/)
- Helm chart / software versions: [https://artifacthub.io/packages/helm/rook-release/rook-ceph](https://artifacthub.io/packages/helm/rook-release/rook-ceph)
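## Example workload

To illustrate what the RWX StorageClass buys you, here is a minimal sketch of a Deployment whose replicas share the `shared-data` claim from the PVC example above. The Deployment name, label, container image, and mount path are illustrative placeholders, not part of the addon:

```yaml
# Hypothetical consumer of the RWX claim "shared-data" defined above.
# With ReadWriteMany, several pods (here: 2 replicas) can mount the
# same CephFS-backed volume concurrently, even on different nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-data-reader        # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: shared-data-reader
  template:
    metadata:
      labels:
        app: shared-data-reader
    spec:
      containers:
        - name: app
          image: nginx:stable     # placeholder image
          volumeMounts:
            - name: shared
              mountPath: /data    # placeholder path
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: shared-data   # PVC from the RWX example
```

A block-storage (RWO) claim such as `my-db-data` is mounted the same way, but only one pod at a time can use it, which is why RWO suits databases and RWX suits shared content.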