Makefile
- Unique container names for every make invocation (ANSIBLE_RUN_ID); can be overridden via ANSIBLE_CONTAINER_NAME / MOLECULE_CONTAINER_NAME; a separate name for Molecule so that k3s-ansible and molecule runs do not conflict.
- The old molecule-prometheus and molecule-istio targets were migrated to molecule-addon (prometheus-stack, istio); explicit molecule-addon-prometheus-stack and molecule-addon-istio targets were added; molecule-addon-all now includes prometheus-stack and istio (the full addon set); target descriptions were adjusted.
- A phony dashboard target (no code is added under dashboard/ in this commit).
Molecule scenarios (converge/verify) for dozens of addons
- Variables and templates added/aligned with the current roles (harbor, hysteria2, ingress-*, jenkins, mediaserver, netbird, nextcloud, splitgw, vault, vaultwarden, and others).
- Helm/files on the host: delegate_to: localhost and run_once where appropriate (technitium-dns, yandex-dns-controller); verify runs on localhost for file-based checks.
- Refined checks: metrics-server, minio, promtail, pushgateway, velero (booleans derived from facts/strings), splitgw (JSON, searching the structure for ports/DNS rules).
- In role meta: prometheus_stack + namespace, istio + namespace; for istio, converge and verify were aligned (incl. metrics; strict asserts relaxed to match the Kiali templates).
- csi-nfs: a comment explaining volume_binding_mode (Immediate / WaitForFirstConsumer).
Infrastructure
- .gitignore: the dashboard/ directory (the local copy stays out of the repository).
- docker-compose: removed the fixed container_name to allow parallel runs; TZ defaults to Europe/Moscow.
- roles/k3s/tasks/prereqs.yml: retries for update_cache and apt install on transient mirror/network failures.
technitium-dns
Highly-available internal DNS based on Technitium DNS Server.
Deploys a Primary and an optional Secondary instance, each behind a kube-vip LoadBalancer Service with a static IP. A CronJob automatically syncs all Primary zones to the Secondary every 5 minutes via the Technitium REST API.
Architecture
Clients (Keenetic / DHCP)
│
├─ DNS 192.168.1.53 → technitium-dns-primary (Deployment, RWO PVC)
└─ DNS 192.168.1.54 → technitium-dns-secondary (Deployment, RWO PVC)
CronJob sync (*/5 min): primary REST API → list zones → create missing Secondary zones on secondary → forceSync
Web UI (Ingress):
http://dns.home.local → primary :5380
http://dns-secondary.home.local → secondary :5380
ExternalDNS (optional, disabled by default):
Watches Ingress/Service → RFC 2136 DDNS → primary → AXFR → secondary
Quick start
1. Set vault password
# group_vars/all/vault.yml (encrypted with ansible-vault)
technitium_dns_admin_password: "your-strong-password"
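If the vault file does not exist yet, it can be created and then edited with ansible-vault:
ansible-vault create group_vars/all/vault.yml   # first time
ansible-vault edit group_vars/all/vault.yml     # later edits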
2. Enable and configure
# group_vars/all/addons.yml
addon_technitium_dns: true
technitium_dns_primary_ip: "192.168.1.53" # kube-vip managed IP
technitium_dns_secondary_ip: "192.168.1.54"
technitium_dns_domain: "home.local"
technitium_dns_primary_host: "dns.home.local"
technitium_dns_secondary_host: "dns-secondary.home.local"
3. Deploy
make addon-technitium-dns
# or:
ansible-playbook playbooks/addons.yml --tags technitium-dns
4. Create the internal zone (first time only)
Open http://dns.home.local/ → log in as admin → Zones → Add Zone → Primary → enter home.local.
Then add A records for your services under home.local.
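Alternatively, the zone and records can be created from the command line. The sketch below is based on my reading of the Technitium HTTP API (endpoints, parameters, record name and IP are assumptions, not part of this role); verify against the upstream API docs before relying on it:
# Log in and grab an API token (token field assumed at the top level of the JSON reply)
TOKEN=$(curl -s "http://dns.home.local/api/user/login?user=admin&pass=$ADMIN_PASS" | jq -r .token)
# Create the primary zone
curl -s "http://dns.home.local/api/zones/create?token=$TOKEN&zone=home.local&type=Primary"
# Add an example A record (name and address are placeholders)
curl -s "http://dns.home.local/api/zones/records/add?token=$TOKEN&domain=grafana.home.local&type=A&ipAddress=192.168.1.60"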
Keenetic router — DNS configuration
In the Keenetic web interface: Internet → DNS servers
| Field | Value |
|---|---|
| Primary DNS | 192.168.1.53 |
| Secondary DNS | 192.168.1.54 |
Or via Keenetic CLI:
ip name-server 192.168.1.53
ip name-server 192.168.1.54
Zone sync (Primary → Secondary)
The technitium-dns-sync CronJob runs every 5 minutes. It:
- Logs in to both instances with the shared admin password.
- Lists all Primary and Forwarder zones on primary.
- Creates missing zones on secondary as Secondary type pointing to primary.ip.
- Calls forceSyncZone for every zone (a curl-level sketch of this flow follows below).
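The following is a hedged sketch of roughly what the sync does, assuming the standard Technitium HTTP API (/api/user/login, /api/zones/list, /api/zones/create, /api/zones/resync); the in-cluster service names, JSON shape, and endpoint paths are assumptions, and the role's actual CronJob script may differ:
PRIMARY="http://technitium-dns-primary.technitium-dns.svc:5380"
SECONDARY="http://technitium-dns-secondary.technitium-dns.svc:5380"

ptoken=$(curl -s "$PRIMARY/api/user/login?user=admin&pass=$ADMIN_PASS" | jq -r .token)
stoken=$(curl -s "$SECONDARY/api/user/login?user=admin&pass=$ADMIN_PASS" | jq -r .token)

# Primary and Forwarder zones present on primary (response JSON shape assumed)
zones=$(curl -s "$PRIMARY/api/zones/list?token=$ptoken" \
  | jq -r '.response.zones[] | select(.type == "Primary" or .type == "Forwarder") | .name')

for zone in $zones; do
  # Create the zone on secondary if missing (the call errors if it already exists; ignored here)
  curl -s "$SECONDARY/api/zones/create?token=$stoken&zone=$zone&type=Secondary&primaryNameServerAddresses=192.168.1.53" > /dev/null
  # Trigger an immediate zone transfer from primary
  curl -s "$SECONDARY/api/zones/resync?token=$stoken&zone=$zone" > /dev/null
done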
Manual trigger:
kubectl create job --from=cronjob/technitium-dns-sync sync-manual-1 \
-n technitium-dns
kubectl -n technitium-dns logs -l app.kubernetes.io/component=sync -f
ExternalDNS (optional)
Automatically creates DNS records on primary from Ingress and Service resources via RFC 2136 DDNS. Secondary picks up changes via the sync CronJob.
Enable
# group_vars/all/addons.yml
technitium_dns_externaldns_enabled: true
technitium_dns_externaldns_domain_filter:
- "home.local"
technitium_dns_externaldns_policy: "upsert-only" # or "sync" to also delete
Enable DDNS on zones in Technitium
For each zone that ExternalDNS should write to:
- Open Web UI → Zones → home.local → Zone Settings.
- Dynamic Updates → set to Allow (or Allow Signed for TSIG).
- Save (a quick nsupdate test follows below).
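To confirm that the zone accepts dynamic updates before pointing ExternalDNS at it, a manual RFC 2136 update can be sent with nsupdate (record name and IP are placeholders; add TSIG key options if the zone requires signed updates):
nsupdate <<'EOF'
server 192.168.1.53
zone home.local
update add ddns-test.home.local 300 A 192.168.1.99
send
EOF
dig @192.168.1.53 ddns-test.home.local +short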
Variables reference
| Variable | Default | Description |
|---|---|---|
| technitium_dns_primary_ip | 192.168.1.53 | kube-vip LB IP for primary |
| technitium_dns_secondary_enabled | true | Deploy secondary instance |
| technitium_dns_secondary_ip | 192.168.1.54 | kube-vip LB IP for secondary |
| technitium_dns_primary_node | "" | Pin primary to node hostname |
| technitium_dns_secondary_node | "" | Pin secondary to node hostname |
| technitium_dns_domain | home.local | Local DNS domain |
| technitium_dns_forwarders | [1.1.1.1, 8.8.8.8] | Upstream resolvers |
| technitium_dns_recursion | AllowOnlyForPrivateNetworks | Recursion mode |
| technitium_dns_admin_password | — | Admin password (set in vault.yml) |
| technitium_dns_storage_class | "" | StorageClass (empty = cluster default) |
| technitium_dns_storage_size | 1Gi | PVC size per instance |
| technitium_dns_ingress_enabled | true | Expose Web UI via Ingress |
| technitium_dns_primary_host | dns.home.local | Primary Web UI hostname |
| technitium_dns_secondary_host | dns-secondary.home.local | Secondary Web UI hostname |
| technitium_dns_sync_enabled | true | Enable zone sync CronJob |
| technitium_dns_sync_schedule | */5 * * * * | Sync schedule (cron format) |
| technitium_dns_externaldns_enabled | false | Deploy ExternalDNS |
| technitium_dns_externaldns_policy | upsert-only | ExternalDNS sync policy |
Troubleshooting
DNS not resolving after deploy
# Check pods are Running
kubectl -n technitium-dns get pods
# Test DNS resolution from a pod
kubectl run dnstest --rm -it --image=busybox -- nslookup kubernetes.default 192.168.1.53
Sync job failing
kubectl -n technitium-dns logs -l app.kubernetes.io/component=sync --tail=100
Common cause: secondary is not yet ready when the first sync runs. The job will retry on the next schedule.
Secondary shows stale records
Force a manual sync (see above). If secondary zone type is wrong, delete the zone on secondary and let sync recreate it.
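To check whether the secondary is actually behind, compare the zone's SOA serial on both instances (IPs taken from the example configuration above); the serials should match shortly after a sync:
dig @192.168.1.53 home.local SOA +short
dig @192.168.1.54 home.local SOA +short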
kube-vip IP not assigned
Ensure the IP is in the kube-vip address pool (check kube-vip ConfigMap or CiliumLoadBalancerIPPool) and not already in use.
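A quick way to see whether the addresses were assigned is to check the Services' external IPs; a Service stuck in pending usually means the IP is outside the pool or already taken:
kubectl -n technitium-dns get svc -o wide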