feat: add Jenkins, Gitea Actions, and NetBird VPN addons

Jenkins:
- Helm chart jenkins/jenkins, dynamic k8s Pod agents, JCasC configuration
- 16 preinstalled plugins (k8s, pipeline, git, blueocean, github/gitlab/gitea)
- Prometheus ServiceMonitor, Ingress с TLS

Gitea Actions:
- gitea_actions_enabled flag (default: false) in the gitea Helm values
- act_runner Deployment with a Docker-in-Docker sidecar (gitea_actions_runner_enabled)
- A Job automatically obtains a registration token via the Gitea API and stores it in a Secret
- Configurable labels, replicas, DinD on/off

NetBird VPN (self-hosted WireGuard mesh):
- Management server (Helm netbirdio/management): gRPC API + peer management
- Signal server (Helm netbirdio/signal): WebRTC peer discovery
- Coturn: STUN/TURN with hostNetwork (correct external IP detection)
- All components exposed via kube-vip LoadBalancer (automatic IP assignment from the pool)
- Subnet Router Deployment (hostNetwork + NET_ADMIN + ip_forward)
  gives VPN clients access to cluster subnets
- Exit Node Deployment (hostNetwork + MASQUERADE iptables)
  routes all VPN client internet traffic through a cluster node
- Static LB IPs via kube-vip annotation (optional)
Sergey Antropoff
2026-04-25 18:41:54 +03:00
parent 684fc25908
commit e57e676392
20 changed files with 1213 additions and 0 deletions

View File

@@ -44,3 +44,26 @@ gitea_resources:
limits:
cpu: 500m
memory: 512Mi
# ── Gitea Actions ─────────────────────────────────────────────────────────────
# GitHub Actions-compatible CI/CD built into Gitea.
# Requires gitea_actions_runner_enabled: true to install act_runner.
gitea_actions_enabled: false
# act_runner: the agent that executes workflows (analogous to a GitHub Actions runner)
gitea_actions_runner_enabled: false # install the act_runner Deployment
gitea_actions_runner_replicas: 2 # number of parallel runners
gitea_actions_runner_name: "k8s-runner"
# Labels determine which jobs this runner picks up, matched via runs-on (see the example workflow below)
gitea_actions_runner_labels: "ubuntu-latest:docker://node:20-bullseye,ubuntu-22.04:docker://ubuntu:22.04"
gitea_actions_runner_resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 1000m
memory: 1Gi
# Docker-in-Docker sidecar (needed for jobs that use docker build/run)
gitea_actions_runner_dind_enabled: true
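A minimal workflow that these labels would pick up, as a sketch only; the path and job content are illustrative assumptions, not part of this commit:

# .gitea/workflows/ci.yaml (hypothetical example)
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest # resolved via the "ubuntu-latest:docker://node:20-bullseye" label above
    steps:
      - uses: actions/checkout@v4
      - run: node --version # executes inside the node:20-bullseye job container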

View File

@@ -109,6 +109,77 @@
retries: 3
delay: 10
# ─── Gitea Actions act_runner ─────────────────────────────────────────────────
- name: Add RBAC for act_runner registration Job
kubernetes.core.k8s:
state: present
definition:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: gitea-runner-token-writer
namespace: "{{ gitea_namespace }}"
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "update", "patch", "apply"]
when: gitea_actions_runner_enabled | bool
- name: Add RoleBinding for act_runner registration Job
kubernetes.core.k8s:
state: present
definition:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: gitea-runner-token-writer
namespace: "{{ gitea_namespace }}"
subjects:
- kind: ServiceAccount
name: default
namespace: "{{ gitea_namespace }}"
roleRef:
kind: Role
name: gitea-runner-token-writer
apiGroup: rbac.authorization.k8s.io
when: gitea_actions_runner_enabled | bool
- name: Template act_runner manifests
ansible.builtin.template:
src: gitea-act-runner.yaml.j2
dest: /tmp/gitea-act-runner.yaml
mode: '0644'
when: gitea_actions_runner_enabled | bool
- name: Delete previous runner-register Job if exists
ansible.builtin.command: >
k3s kubectl delete job gitea-runner-register -n {{ gitea_namespace }} --ignore-not-found
changed_when: false
when: gitea_actions_runner_enabled | bool
- name: Apply act_runner (registration Job + Deployment)
ansible.builtin.command: k3s kubectl apply -f /tmp/gitea-act-runner.yaml
changed_when: true
when: gitea_actions_runner_enabled | bool
- name: Wait for runner registration Job to complete
ansible.builtin.command: >
k3s kubectl -n {{ gitea_namespace }}
wait job/gitea-runner-register --for=condition=complete --timeout=120s
changed_when: false
retries: 3
delay: 10
when: gitea_actions_runner_enabled | bool
- name: Wait for act_runner Deployment to be ready
ansible.builtin.command: >
k3s kubectl -n {{ gitea_namespace }}
rollout status deployment/gitea-act-runner --timeout=120s
changed_when: false
retries: 3
delay: 10
when: gitea_actions_runner_enabled | bool
- name: Show Gitea access info
ansible.builtin.debug:
msg:
@@ -118,4 +189,6 @@
- "Пароль: {{ gitea_admin_password }}"
- "БД: {{ 'PostgreSQL ' + postgresql_external_host if addon_postgresql | default(false) | bool else 'встроенная PostgreSQL' }}"
- "{% if gitea_ssh_enabled %}SSH клон: git clone ssh://git@{{ gitea_ingress_host }}:{{ gitea_ssh_node_port }}/user/repo.git{% else %}SSH отключён — клонирование только по HTTP{% endif %}"
- "Gitea Actions: {{ 'включены' if gitea_actions_enabled else 'отключены' }}"
- "{% if gitea_actions_runner_enabled %}act_runner: {{ gitea_actions_runner_replicas }} реплик (DinD: {{ gitea_actions_runner_dind_enabled }}){% else %}act_runner: не установлен{% endif %}"
- "Для обновления до новой версии: make addon-gitea (gitea_version='' → автопоиск)"

View File

@@ -0,0 +1,161 @@
---
# Secret holding the registration token (populated by the registration Job)
apiVersion: v1
kind: Secret
metadata:
name: gitea-runner-token
namespace: {{ gitea_namespace }}
type: Opaque
stringData:
token: "PLACEHOLDER" # заменяется Job-ом gitea-runner-register ниже
---
# Job: obtains a registration token via the Gitea API and stores it in the Secret
apiVersion: batch/v1
kind: Job
metadata:
name: gitea-runner-register
namespace: {{ gitea_namespace }}
spec:
backoffLimit: 5
ttlSecondsAfterFinished: 120
template:
spec:
restartPolicy: OnFailure
containers:
- name: register
image: alpine:3.19
command:
- /bin/sh
- -c
- |
apk add --no-cache curl jq kubectl >/dev/null 2>&1 # kubectl (Alpine community repo) is needed below to write the Secret
GITEA_URL="http://gitea-http.{{ gitea_namespace }}.svc.cluster.local:3000"
ADMIN_USER="{{ gitea_admin_username }}"
ADMIN_PASS="{{ gitea_admin_password }}"
echo "Waiting for Gitea API..."
until curl -sf "$GITEA_URL/api/v1/settings/api" >/dev/null 2>&1; do
sleep 5
done
echo "Getting runner registration token..."
TOKEN=$(curl -sf \
-X POST "$GITEA_URL/api/v1/admin/runners/registration-token" \
-H "Content-Type: application/json" \
-u "$ADMIN_USER:$ADMIN_PASS" | jq -r '.token')
if [ -z "$TOKEN" ] || [ "$TOKEN" = "null" ]; then
echo "ERROR: Failed to get registration token"
exit 1
fi
echo "Saving token to Secret..."
kubectl create secret generic gitea-runner-token \
--namespace={{ gitea_namespace }} \
--from-literal=token="$TOKEN" \
--dry-run=client -o yaml | kubectl apply -f -
echo "Registration token saved successfully."
---
# Deployment: act_runner, the executor of Gitea Actions workflows
apiVersion: apps/v1
kind: Deployment
metadata:
name: gitea-act-runner
namespace: {{ gitea_namespace }}
labels:
app: gitea-act-runner
spec:
replicas: {{ gitea_actions_runner_replicas }}
selector:
matchLabels:
app: gitea-act-runner
template:
metadata:
labels:
app: gitea-act-runner
spec:
initContainers:
# Wait until the registration Job has stored a real token
- name: wait-for-token
image: alpine:3.19
command:
- /bin/sh
- -c
- |
# An env var sourced from a Secret is resolved once, at container start:
# exiting non-zero makes kubelet restart this init container, which
# re-reads the Secret, until the Job has replaced the PLACEHOLDER value.
if [ -z "$GITEA_RUNNER_REGISTRATION_TOKEN" ] || [ "$GITEA_RUNNER_REGISTRATION_TOKEN" = "PLACEHOLDER" ]; then
echo "Registration token not ready yet; retrying..."
sleep 5
exit 1
fi
echo "Token ready."
env:
- name: GITEA_RUNNER_REGISTRATION_TOKEN
valueFrom:
secretKeyRef:
name: gitea-runner-token
key: token
optional: true
containers:
- name: runner
image: gitea/act_runner:latest
env:
- name: GITEA_INSTANCE_URL
value: "http://gitea-http.{{ gitea_namespace }}.svc.cluster.local:3000"
- name: GITEA_RUNNER_REGISTRATION_TOKEN
valueFrom:
secretKeyRef:
name: gitea-runner-token
key: token
- name: GITEA_RUNNER_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name # unique name = pod name
- name: GITEA_RUNNER_LABELS
value: "{{ gitea_actions_runner_labels }}"
{% if gitea_actions_runner_dind_enabled %}
- name: DOCKER_HOST
value: "tcp://localhost:2375"
{% endif %}
volumeMounts:
- name: runner-data
mountPath: /data
resources:
requests:
cpu: "{{ gitea_actions_runner_resources.requests.cpu }}"
memory: "{{ gitea_actions_runner_resources.requests.memory }}"
limits:
cpu: "{{ gitea_actions_runner_resources.limits.cpu }}"
memory: "{{ gitea_actions_runner_resources.limits.memory }}"
{% if gitea_actions_runner_dind_enabled %}
# Docker-in-Docker sidecar for workflows that use docker build/run
- name: dind
image: docker:dind
securityContext:
privileged: true
env:
- name: DOCKER_TLS_CERTDIR
value: "" # отключаем TLS для внутрипод-коммуникации
volumeMounts:
- name: docker-storage
mountPath: /var/lib/docker
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 1000m
memory: 2Gi
{% endif %}
volumes:
- name: runner-data
emptyDir: {}
{% if gitea_actions_runner_dind_enabled %}
- name: docker-storage
emptyDir: {}
{% endif %}

View File

@@ -31,6 +31,9 @@ gitea:
TOKEN: "{{ gitea_metrics_token }}"
{% endif %}
actions:
ENABLED: "{{ gitea_actions_enabled | lower }}"
cache:
ADAPTER: memory

View File

@@ -0,0 +1,7 @@
---
- name: Install Jenkins
hosts: k3s_master[0]
gather_facts: false
become: true
roles:
- role: "{{ playbook_dir }}/../addons/jenkins/role"

View File

@@ -0,0 +1,64 @@
---
jenkins_version: "" # "" = latest chart version automatically
jenkins_namespace: "jenkins"
jenkins_chart_repo: "https://charts.jenkins.io"
# Administrator account
jenkins_admin_user: "admin"
# The password is set in vault.yml: vault_jenkins_admin_password
jenkins_admin_password: "{{ vault_jenkins_admin_password | default('changeme-jenkins') }}"
# ── Plugins ───────────────────────────────────────────────────────────────────
jenkins_plugins:
- kubernetes # dynamic agents in k8s pods
- workflow-aggregator # Pipeline (Scripted + Declarative)
- git # Git SCM
- configuration-as-code # JCasC: configuration as code
- pipeline-stage-view # Pipeline stage visualization
- blueocean # modern UI (optional)
- credentials-binding # environment variables from Credentials
- ssh-agent # SSH keys in Pipelines
- docker-workflow # docker.build/push in Pipelines
- ansicolor # colored console output
- build-timeout # build timeouts
- timestamper # timestamps in logs
- github # GitHub integration and webhooks
- gitlab-plugin # GitLab integration (optional)
- gitea-plugin # Gitea integration (optional)
- matrix-auth # matrix-based permissions
# ── Ingress ───────────────────────────────────────────────────────────────────
jenkins_ingress_enabled: true
jenkins_ingress_host: "jenkins.example.com"
jenkins_ingress_class: "{{ ingress_nginx_class_name | default('nginx') }}"
jenkins_ingress_tls: true
jenkins_ingress_cert_issuer: "{{ cert_manager_default_issuer_name | default('letsencrypt-prod') }}"
# ── Storage (Jenkins Home) ────────────────────────────────────────────────────
jenkins_storage_size: "20Gi"
jenkins_storage_class: ""
# ── Kubernetes agents ─────────────────────────────────────────────────────────
# Dynamic Pod agents: Jenkins creates a Pod per build and removes it afterwards
jenkins_agent_enabled: true
jenkins_agent_default_image: "jenkins/inbound-agent:latest"
# ── Metrics ───────────────────────────────────────────────────────────────────
jenkins_metrics_enabled: true # Prometheus metrics plugin (port 8080/prometheus)
# ── Resources ─────────────────────────────────────────────────────────────────
jenkins_resources:
requests:
cpu: 200m
memory: 512Mi
limits:
cpu: 2000m
memory: 4Gi
jenkins_agent_resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 1000m
memory: 1Gi
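A minimal inventory override for these defaults, as a sketch; the hostname is an assumption, and local-path is the StorageClass that ships with k3s:

# group_vars/all.yml (hypothetical)
jenkins_ingress_host: "jenkins.lab.local"
jenkins_storage_class: "local-path"   # k3s built-in StorageClass
jenkins_admin_password: "{{ vault_jenkins_admin_password }}"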

View File

@@ -0,0 +1,82 @@
---
- name: Add Jenkins Helm repo
kubernetes.core.helm_repository:
name: jenkins
repo_url: "{{ jenkins_chart_repo }}"
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Get latest Jenkins chart version
ansible.builtin.shell: |
helm search repo jenkins/jenkins --output json | \
python3 -c "import sys,json; print(json.load(sys.stdin)[0]['version'])"
register: _jenkins_latest_version
changed_when: false
when: jenkins_version == ""
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Set Jenkins chart version
ansible.builtin.set_fact:
_jenkins_version: "{{ jenkins_version if jenkins_version != '' else _jenkins_latest_version.stdout | trim }}"
- name: Template Jenkins values
ansible.builtin.template:
src: jenkins-values.yaml.j2
dest: /tmp/jenkins-values.yaml
mode: '0600'
- name: Install Jenkins via Helm
kubernetes.core.helm:
name: jenkins
chart_ref: jenkins/jenkins
chart_version: "{{ _jenkins_version }}"
release_namespace: "{{ jenkins_namespace }}"
create_namespace: true
wait: true
timeout: "10m0s"
values_files:
- /tmp/jenkins-values.yaml
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Wait for Jenkins to be ready
ansible.builtin.command: >
k3s kubectl -n {{ jenkins_namespace }}
rollout status statefulset/jenkins --timeout=300s
changed_when: false
retries: 3
delay: 15
- name: Create Prometheus ServiceMonitor for Jenkins
kubernetes.core.k8s:
state: present
definition:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: jenkins
namespace: "{{ jenkins_namespace }}"
labels:
release: kube-prometheus-stack
spec:
selector:
matchLabels:
app.kubernetes.io/component: jenkins-controller
endpoints:
- port: http
path: /prometheus
interval: 30s
when: jenkins_metrics_enabled | bool and addon_prometheus_stack | default(false) | bool
- name: Show Jenkins access info
ansible.builtin.debug:
msg:
- "Jenkins установлен в namespace: {{ jenkins_namespace }}"
- "{% if jenkins_ingress_enabled %}URL: http{{ 's' if jenkins_ingress_tls else '' }}://{{ jenkins_ingress_host }}{% else %}Port-forward: kubectl port-forward -n {{ jenkins_namespace }} svc/jenkins 8080:8080{% endif %}"
- "Логин: {{ jenkins_admin_user }} / {{ jenkins_admin_password }}"
- "Kubernetes agents: {{ 'включены' if jenkins_agent_enabled else 'отключены' }}"
- "Plugins ({{ jenkins_plugins | length }}): {{ jenkins_plugins | join(', ') }}"
- ""
- "Первый запуск займёт 3-5 мин (установка plugins). Следи за логами:"
- " kubectl logs -n {{ jenkins_namespace }} jenkins-0 -c jenkins -f"

View File

@@ -0,0 +1,89 @@
controller:
# Administrator credentials
adminUser: "{{ jenkins_admin_user }}"
adminPassword: "{{ jenkins_admin_password }}"
# Plugins are installed on first start
installPlugins:
{% for plugin in jenkins_plugins %}
- {{ plugin }}
{% endfor %}
{% if jenkins_metrics_enabled %}
- prometheus
{% endif %}
# Resources
resources:
requests:
cpu: "{{ jenkins_resources.requests.cpu }}"
memory: "{{ jenkins_resources.requests.memory }}"
limits:
cpu: "{{ jenkins_resources.limits.cpu }}"
memory: "{{ jenkins_resources.limits.memory }}"
# Service type (ClusterIP; access goes through the Ingress)
serviceType: ClusterIP
# Ingress
ingress:
enabled: {{ jenkins_ingress_enabled | lower }}
{% if jenkins_ingress_enabled %}
ingressClassName: "{{ jenkins_ingress_class }}"
hostName: "{{ jenkins_ingress_host }}"
{% if jenkins_ingress_tls %}
tls:
- secretName: jenkins-tls
hosts:
- "{{ jenkins_ingress_host }}"
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
cert-manager.io/cluster-issuer: "{{ jenkins_ingress_cert_issuer }}"
{% endif %}
{% endif %}
# Prometheus metrics
prometheus:
enabled: {{ jenkins_metrics_enabled | lower }}
# JCasC: base configuration
JCasC:
defaultConfig: true
configScripts:
jenkins-config: |
jenkins:
systemMessage: "Managed by Ansible k3s-ansible"
numExecutors: 0
unclassified:
location:
url: "http{{ 's' if jenkins_ingress_tls else '' }}://{{ jenkins_ingress_host }}/"
# Jenkins Home storage
persistence:
enabled: true
size: "{{ jenkins_storage_size }}"
{% if jenkins_storage_class %}
storageClass: "{{ jenkins_storage_class }}"
{% endif %}
# Kubernetes Pod Agents
agent:
enabled: {{ jenkins_agent_enabled | lower }}
image:
repository: "jenkins/inbound-agent"
tag: "latest"
resources:
requests:
cpu: "{{ jenkins_agent_resources.requests.cpu }}"
memory: "{{ jenkins_agent_resources.requests.memory }}"
limits:
cpu: "{{ jenkins_agent_resources.limits.cpu }}"
memory: "{{ jenkins_agent_resources.limits.memory }}"
# Separate PVC for the Jenkins home (controller)
persistence:
enabled: true
size: "{{ jenkins_storage_size }}"
{% if jenkins_storage_class %}
storageClass: "{{ jenkins_storage_class }}"
{% endif %}
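Because JCasC scripts are plain keys under configScripts, the same template can carry additional configuration blocks; a sketch, assuming the timestamper plugin from the defaults list is installed:

controller:
  JCasC:
    configScripts:
      extra-config: |
        unclassified:
          timestamper:
            allPipelines: true # timestamp every Pipeline run, not only opted-in jobs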

View File

@@ -0,0 +1,7 @@
---
- name: Install NetBird VPN
hosts: k3s_master[0]
gather_facts: false
become: true
roles:
- role: "{{ playbook_dir }}/../addons/netbird/role"

View File

@@ -0,0 +1,81 @@
---
netbird_namespace: "netbird"
netbird_chart_repo: "https://netbirdio.github.io/helm-charts/"
netbird_management_version: "" # "" = latest version automatically
netbird_signal_version: ""
# ── Management server domain ──────────────────────────────────────────────────
# Used in TLS certificates and peer configuration
netbird_domain: "netbird.example.com"
# ── kube-vip LoadBalancer IPs ─────────────────────────────────────────────────
# Leave empty to let kube-vip assign an IP from the pool automatically;
# set a static IP if clients need a stable address
netbird_management_lb_ip: "" # gRPC endpoint for clients
netbird_signal_lb_ip: "" # WebRTC signaling
netbird_coturn_lb_ip: "" # STUN/TURN for NAT traversal
# ── STUN/TURN (Coturn) ────────────────────────────────────────────────────────
netbird_coturn_enabled: true
netbird_coturn_user: "netbird"
# The password is set in vault.yml: vault_netbird_coturn_password
netbird_coturn_password: "{{ vault_netbird_coturn_password | default('changeme-coturn') }}"
# ── Management Storage ────────────────────────────────────────────────────────
netbird_management_storage_size: "1Gi"
netbird_management_storage_class: ""
# ── Ingress for the Management UI / Dashboard ─────────────────────────────────
netbird_ingress_enabled: false
netbird_ingress_host: "netbird.example.com"
netbird_ingress_class: "{{ ingress_nginx_class_name | default('nginx') }}"
netbird_ingress_tls: true
netbird_ingress_cert_issuer: "{{ cert_manager_default_issuer_name | default('letsencrypt-prod') }}"
# ── Subnet Router ─────────────────────────────────────────────────────────────
# Lets VPN clients reach local subnets through this Pod.
# The Pod runs on the host network (hostNetwork: true) and registers
# with NetBird management as a subnet router.
netbird_subnet_router_enabled: false
# Subnets to route (configured in the management UI after registration)
# Example: ["192.168.1.0/24", "10.42.0.0/16"]
netbird_subnet_routes: []
# Setup key for registering the subnet router with NetBird management.
# Create it in the management UI → Setup Keys, then set vault.yml: vault_netbird_router_setup_key
netbird_router_setup_key: "{{ vault_netbird_router_setup_key | default('') }}"
# ── Exit Node ─────────────────────────────────────────────────────────────────
# Routes all VPN client internet traffic through this Pod.
# Requires ip_forward + MASQUERADE rules on the node.
netbird_exit_node_enabled: false
# Setup key for registering the exit node (may reuse the router setup key)
# vault.yml: vault_netbird_exit_node_setup_key
netbird_exit_node_setup_key: "{{ vault_netbird_exit_node_setup_key | default(netbird_router_setup_key) }}"
# ── Resources ─────────────────────────────────────────────────────────────────
netbird_management_resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
netbird_signal_resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 200m
memory: 256Mi
netbird_router_resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 200m
memory: 256Mi
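A sketch of enabling the subnet router from the inventory; the CIDRs shown are the k3s defaults for pods and services, so adjust them to the actual cluster:

# group_vars/all.yml (hypothetical)
netbird_subnet_router_enabled: true
netbird_subnet_routes:
  - "10.42.0.0/16" # k3s default pod CIDR
  - "10.43.0.0/16" # k3s default service CIDR
netbird_router_setup_key: "{{ vault_netbird_router_setup_key }}"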

View File

@@ -0,0 +1,211 @@
---
- name: Add NetBird Helm repo
kubernetes.core.helm_repository:
name: netbirdio
repo_url: "{{ netbird_chart_repo }}"
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Get latest NetBird management chart version
ansible.builtin.shell: |
helm search repo netbirdio/management --output json | \
python3 -c "import sys,json; d=json.load(sys.stdin); print(d[0]['version']) if d else print('')"
register: _netbird_mgmt_latest
changed_when: false
when: netbird_management_version == ""
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Get latest NetBird signal chart version
ansible.builtin.shell: |
helm search repo netbirdio/signal --output json | \
python3 -c "import sys,json; d=json.load(sys.stdin); print(d[0]['version']) if d else print('')"
register: _netbird_signal_latest
changed_when: false
when: netbird_signal_version == ""
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Set NetBird chart versions
ansible.builtin.set_fact:
_netbird_mgmt_version: "{{ netbird_management_version if netbird_management_version != '' else _netbird_mgmt_latest.stdout | trim }}"
_netbird_signal_version: "{{ netbird_signal_version if netbird_signal_version != '' else _netbird_signal_latest.stdout | trim }}"
# ─── Coturn ───────────────────────────────────────────────────────────────────
- name: Apply Coturn (STUN/TURN via kube-vip LoadBalancer)
ansible.builtin.template:
src: netbird-coturn.yaml.j2
dest: /tmp/netbird-coturn.yaml
mode: '0644'
when: netbird_coturn_enabled | bool
- name: Apply Coturn manifests
ansible.builtin.command: k3s kubectl apply -f /tmp/netbird-coturn.yaml
changed_when: true
when: netbird_coturn_enabled | bool
- name: Wait for Coturn LoadBalancer IP (kube-vip)
ansible.builtin.shell: |
k3s kubectl get svc coturn -n {{ netbird_namespace }} \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null
register: _coturn_lb_ip
until: _coturn_lb_ip.stdout != ""
retries: 20
delay: 10
changed_when: false
when: netbird_coturn_enabled | bool and netbird_coturn_lb_ip == ""
- name: Set Coturn LoadBalancer IP fact
ansible.builtin.set_fact:
netbird_coturn_lb_ip: "{{ _coturn_lb_ip.stdout | trim }}"
when: netbird_coturn_enabled | bool and netbird_coturn_lb_ip == "" and _coturn_lb_ip is defined
# ─── Management ───────────────────────────────────────────────────────────────
- name: Template Management values
ansible.builtin.template:
src: netbird-management-values.yaml.j2
dest: /tmp/netbird-management-values.yaml
mode: '0644'
- name: Install NetBird Management via Helm
kubernetes.core.helm:
name: netbird-management
chart_ref: netbirdio/management
chart_version: "{{ _netbird_mgmt_version if _netbird_mgmt_version else omit }}"
release_namespace: "{{ netbird_namespace }}"
create_namespace: true
wait: true
timeout: "5m0s"
values_files:
- /tmp/netbird-management-values.yaml
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Wait for Management LoadBalancer IP (kube-vip)
ansible.builtin.shell: |
k3s kubectl get svc -n {{ netbird_namespace }} \
-l "app.kubernetes.io/name=management" \
-o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}' 2>/dev/null
register: _mgmt_lb_ip
until: _mgmt_lb_ip.stdout != ""
retries: 20
delay: 10
changed_when: false
when: netbird_management_lb_ip == ""
- name: Set Management LoadBalancer IP fact
ansible.builtin.set_fact:
netbird_management_lb_ip: "{{ _mgmt_lb_ip.stdout | trim }}"
when: netbird_management_lb_ip == "" and _mgmt_lb_ip is defined
# ─── Signal ───────────────────────────────────────────────────────────────────
- name: Template Signal values
ansible.builtin.template:
src: netbird-signal-values.yaml.j2
dest: /tmp/netbird-signal-values.yaml
mode: '0644'
- name: Install NetBird Signal via Helm
kubernetes.core.helm:
name: netbird-signal
chart_ref: netbirdio/signal
chart_version: "{{ _netbird_signal_version if _netbird_signal_version else omit }}"
release_namespace: "{{ netbird_namespace }}"
create_namespace: true
wait: true
timeout: "3m0s"
values_files:
- /tmp/netbird-signal-values.yaml
environment:
KUBECONFIG: "{{ k3s_kubeconfig_path }}"
- name: Wait for Signal LoadBalancer IP (kube-vip)
ansible.builtin.shell: |
k3s kubectl get svc -n {{ netbird_namespace }} \
-l "app.kubernetes.io/name=signal" \
-o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}' 2>/dev/null
register: _signal_lb_ip
until: _signal_lb_ip.stdout != ""
retries: 20
delay: 10
changed_when: false
when: netbird_signal_lb_ip == ""
- name: Set Signal LoadBalancer IP fact
ansible.builtin.set_fact:
netbird_signal_lb_ip: "{{ _signal_lb_ip.stdout | trim }}"
when: netbird_signal_lb_ip == "" and _signal_lb_ip is defined
# ─── Subnet Router ────────────────────────────────────────────────────────────
- name: Apply Subnet Router
ansible.builtin.template:
src: netbird-subnet-router.yaml.j2
dest: /tmp/netbird-subnet-router.yaml
mode: '0644'
when: netbird_subnet_router_enabled | bool
- name: Apply Subnet Router manifests
ansible.builtin.command: k3s kubectl apply -f /tmp/netbird-subnet-router.yaml
changed_when: true
when: netbird_subnet_router_enabled | bool
- name: Wait for Subnet Router to be ready
ansible.builtin.command: >
k3s kubectl -n {{ netbird_namespace }}
rollout status deployment/netbird-subnet-router --timeout=120s
changed_when: false
retries: 3
delay: 10
when: netbird_subnet_router_enabled | bool
# ─── Exit Node ────────────────────────────────────────────────────────────────
- name: Apply Exit Node
ansible.builtin.template:
src: netbird-exit-node.yaml.j2
dest: /tmp/netbird-exit-node.yaml
mode: '0644'
when: netbird_exit_node_enabled | bool
- name: Apply Exit Node manifests
ansible.builtin.command: k3s kubectl apply -f /tmp/netbird-exit-node.yaml
changed_when: true
when: netbird_exit_node_enabled | bool
- name: Wait for Exit Node to be ready
ansible.builtin.command: >
k3s kubectl -n {{ netbird_namespace }}
rollout status deployment/netbird-exit-node --timeout=120s
changed_when: false
retries: 3
delay: 10
when: netbird_exit_node_enabled | bool
# ─── Summary ─────────────────────────────────────────────────────────────────
- name: Show NetBird access info
ansible.builtin.debug:
msg:
- "NetBird VPN установлен в namespace: {{ netbird_namespace }}"
- "Management gRPC: {{ netbird_management_lb_ip }}:443 (kube-vip LoadBalancer)"
- "Signal: {{ netbird_signal_lb_ip }}:10000 (kube-vip LoadBalancer)"
- "STUN/TURN: {{ netbird_coturn_lb_ip }}:3478 (kube-vip LoadBalancer, UDP+TCP)"
- ""
- "=== Подключение клиента ==="
- " netbird up --management-url https://{{ netbird_management_lb_ip }}:443"
- ""
- "=== ВАЖНО: Обнови management.json после получения IP ==="
- " Если LB IPs были неизвестны при установке — переустанови:"
- " make addon-netbird ARGS=\"-e netbird_management_lb_ip=<mgmt-ip> -e netbird_signal_lb_ip=<signal-ip> -e netbird_coturn_lb_ip=<coturn-ip>\""
- ""
- "=== Subnet Router ==="
- "{% if netbird_subnet_router_enabled %}Включён. После регистрации пира:"
- " Management UI → Routes → New Route:"
- " Network: 10.42.0.0/16 (pods), Peer: netbird-subnet-router-*, Masquerade: on"
- " Подсети для маршрутизации: {{ netbird_subnet_routes | join(', ') if netbird_subnet_routes else 'настрой в Management UI' }}"
- "{% else %}Отключён (включи: netbird_subnet_router_enabled: true)"
- "{% endif %}"
- "=== Exit Node ==="
- "{% if netbird_exit_node_enabled %}Включён. После регистрации:"
- " Management UI → Routes → Network: 0.0.0.0/0, Peer: netbird-exit-node-*, Masquerade: on"
- "{% else %}Отключён (включи: netbird_exit_node_enabled: true)"
- "{% endif %}"
- "Документация: https://docs.netbird.io/selfhosted/selfhosted-guide"

View File

@@ -0,0 +1,77 @@
---
# Coturn: STUN/TURN server for NAT traversal (UDP + TCP 3478)
# Needed when a direct P2P connection between peers is impossible (double NAT, etc.)
apiVersion: apps/v1
kind: Deployment
metadata:
name: coturn
namespace: {{ netbird_namespace }}
labels:
app: coturn
spec:
replicas: 1
selector:
matchLabels:
app: coturn
template:
metadata:
labels:
app: coturn
spec:
hostNetwork: true # required for STUN to work correctly (external IP detection)
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: coturn
image: coturn/coturn:latest
args:
- --log-file=stdout
- --min-port=49152
- --max-port=65535
- --lt-cred-mech
- --fingerprint
- --no-multicast-peers
- --no-cli
- --no-tlsv1
- --no-tlsv1_1
- --realm=netbird
- --user={{ netbird_coturn_user }}:{{ netbird_coturn_password }}
ports:
- containerPort: 3478
protocol: UDP
name: stun-udp
- containerPort: 3478
protocol: TCP
name: stun-tcp
resources:
requests:
cpu: 50m
memory: 32Mi
limits:
cpu: 200m
memory: 128Mi
---
# Service: kube-vip LoadBalancer for external access to STUN/TURN
apiVersion: v1
kind: Service
metadata:
name: coturn
namespace: {{ netbird_namespace }}
labels:
app: coturn
{% if netbird_coturn_lb_ip %}
annotations:
kube-vip.io/loadbalancerIPs: "{{ netbird_coturn_lb_ip }}"
{% endif %}
spec:
selector:
app: coturn
type: LoadBalancer
ports:
- name: stun-udp
port: 3478
targetPort: 3478
protocol: UDP
- name: stun-tcp
port: 3478
targetPort: 3478
protocol: TCP

View File

@@ -0,0 +1,92 @@
---
# NetBird Exit Node: routes all VPN client internet traffic through this Pod
# After registration: in the Management UI → Routes, add 0.0.0.0/0 with Masquerade: on
# This Pod must have outbound internet access
apiVersion: apps/v1
kind: Deployment
metadata:
name: netbird-exit-node
namespace: {{ netbird_namespace }}
labels:
app: netbird-exit-node
spec:
replicas: 1
selector:
matchLabels:
app: netbird-exit-node
template:
metadata:
labels:
app: netbird-exit-node
spec:
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
initContainers:
- name: ip-forward
image: alpine:3.19 # busybox ships without iptables; alpine can install it
command:
- /bin/sh
- -c
- |
apk add --no-cache iptables >/dev/null 2>&1
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
# MASQUERADE for outbound NAT (VPN client traffic leaves via the node's IP)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE 2>/dev/null || true
iptables -t nat -A POSTROUTING -o ens+ -j MASQUERADE 2>/dev/null || true
securityContext:
privileged: true
containers:
- name: netbird
image: netbirdio/netbird:latest
command:
- /bin/sh
- -c
- |
netbird up \
--setup-key "$SETUP_KEY" \
--management-url "$MANAGEMENT_URL" \
--hostname "netbird-exit-node-$(hostname)" \
--foreground
env:
- name: SETUP_KEY
valueFrom:
secretKeyRef:
name: netbird-exit-node-setup-key
key: setup_key
- name: MANAGEMENT_URL
value: "{{ 'https' if netbird_ingress_tls else 'http' }}://{{ netbird_domain }}:443"
- name: NB_LOG_LEVEL
value: "info"
securityContext:
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
- NET_RAW
volumeMounts:
- name: dev-tun
mountPath: /dev/net/tun
- name: netbird-data
mountPath: /etc/netbird
resources:
requests:
cpu: "{{ netbird_router_resources.requests.cpu }}"
memory: "{{ netbird_router_resources.requests.memory }}"
limits:
cpu: "{{ netbird_router_resources.limits.cpu }}"
memory: "{{ netbird_router_resources.limits.memory }}"
volumes:
- name: dev-tun
hostPath:
path: /dev/net/tun
- name: netbird-data
emptyDir: {}
---
apiVersion: v1
kind: Secret
metadata:
name: netbird-exit-node-setup-key
namespace: {{ netbird_namespace }}
type: Opaque
stringData:
setup_key: "{{ netbird_exit_node_setup_key }}"

View File

@@ -0,0 +1,70 @@
image:
tag: "latest"
replicaCount: 1
# Management gRPC + HTTP API
service:
type: LoadBalancer
{% if netbird_management_lb_ip %}
annotations:
kube-vip.io/loadbalancerIPs: "{{ netbird_management_lb_ip }}"
{% endif %}
# Persistent storage for the SQLite database and peer configs
persistence:
enabled: true
size: "{{ netbird_management_storage_size }}"
{% if netbird_management_storage_class %}
storageClass: "{{ netbird_management_storage_class }}"
{% endif %}
# management.json configuration
config:
# STUN/TURN via Coturn (LoadBalancer IP)
Stuns:
- Proto: udp
URI: "stun:{{ netbird_coturn_lb_ip if netbird_coturn_lb_ip else 'COTURN_LB_IP' }}:3478"
Username: ""
Password: null
TURNConfig:
Turns:
- Proto: udp
URI: "turn:{{ netbird_coturn_lb_ip if netbird_coturn_lb_ip else 'COTURN_LB_IP' }}:3478"
Username: "{{ netbird_coturn_user }}"
Password: "{{ netbird_coturn_password }}"
CredentialsTTL: "12h"
Secret: "{{ netbird_coturn_password }}"
# Signal server (LoadBalancer IP)
Signal:
Proto: https
URI: "{{ netbird_signal_lb_ip if netbird_signal_lb_ip else 'SIGNAL_LB_IP' }}:10000"
Username: ""
Password: null
Datadir: /var/lib/netbird/
# HTTP/Dashboard config
HttpConfig:
Address: "0.0.0.0:8080"
AuthAudience: ""
AuthIssuer: ""
AuthClientId: ""
AuthClientSecret: ""
AuthUserIDClaim: ""
AuthKeysLocation: ""
IdpSignKeyRefreshEnabled: false
ExtraAuthAudience: ""
LetsEncryptDomain: ""
CertFile: ""
CertKey: ""
resources:
requests:
cpu: "{{ netbird_management_resources.requests.cpu }}"
memory: "{{ netbird_management_resources.requests.memory }}"
limits:
cpu: "{{ netbird_management_resources.limits.cpu }}"
memory: "{{ netbird_management_resources.limits.memory }}"

View File

@@ -0,0 +1,21 @@
image:
tag: "latest"
replicaCount: 1
# Signal server (WebRTC peer discovery)
service:
type: LoadBalancer
port: 10000
{% if netbird_signal_lb_ip %}
annotations:
kube-vip.io/loadbalancerIPs: "{{ netbird_signal_lb_ip }}"
{% endif %}
resources:
requests:
cpu: "{{ netbird_signal_resources.requests.cpu }}"
memory: "{{ netbird_signal_resources.requests.memory }}"
limits:
cpu: "{{ netbird_signal_resources.limits.cpu }}"
memory: "{{ netbird_signal_resources.limits.memory }}"

View File

@@ -0,0 +1,88 @@
---
# NetBird Subnet Router: lets VPN clients reach the cluster's subnets
# After registration: configure routes in the NetBird Management UI → Routes
# Network: <pod subnet>, Peer: netbird-subnet-router, Metric: 9999, Masquerade: on
apiVersion: apps/v1
kind: Deployment
metadata:
name: netbird-subnet-router
namespace: {{ netbird_namespace }}
labels:
app: netbird-subnet-router
spec:
replicas: 1
selector:
matchLabels:
app: netbird-subnet-router
template:
metadata:
labels:
app: netbird-subnet-router
spec:
# hostNetwork is required for access to the host's subnets and correct routing
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
initContainers:
- name: ip-forward
image: busybox:latest
command: ["/bin/sh", "-c", "sysctl -w net.ipv4.ip_forward=1"]
securityContext:
privileged: true
containers:
- name: netbird
image: netbirdio/netbird:latest
command:
- /bin/sh
- -c
- |
# Start the netbird daemon
netbird up \
--setup-key "$SETUP_KEY" \
--management-url "$MANAGEMENT_URL" \
--hostname "netbird-subnet-router-$(hostname)" \
--foreground
env:
- name: SETUP_KEY
valueFrom:
secretKeyRef:
name: netbird-router-setup-key
key: setup_key
- name: MANAGEMENT_URL
value: "{{ 'https' if netbird_ingress_tls else 'http' }}://{{ netbird_domain }}:443"
- name: NB_LOG_LEVEL
value: "info"
securityContext:
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
- NET_RAW
privileged: false
volumeMounts:
- name: dev-tun
mountPath: /dev/net/tun
- name: netbird-data
mountPath: /etc/netbird
resources:
requests:
cpu: "{{ netbird_router_resources.requests.cpu }}"
memory: "{{ netbird_router_resources.requests.memory }}"
limits:
cpu: "{{ netbird_router_resources.limits.cpu }}"
memory: "{{ netbird_router_resources.limits.memory }}"
volumes:
- name: dev-tun
hostPath:
path: /dev/net/tun
- name: netbird-data
emptyDir: {}
---
# Secret with the setup key for registering the subnet router with management
apiVersion: v1
kind: Secret
metadata:
name: netbird-router-setup-key
namespace: {{ netbird_namespace }}
type: Opaque
stringData:
setup_key: "{{ netbird_router_setup_key }}"