Compare commits

...

74 Commits

Author SHA1 Message Date
3732db698e klal (Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 0s) 2025-06-21 12:23:05 +02:00
42fdd319a3 klal (Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 0s) 2025-06-21 11:11:39 +02:00
47e730ef60 klal (Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 0s) 2025-06-20 01:53:36 +02:00
ddee60ab9c klal (Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 0s) 2025-06-19 20:24:53 +02:00
41bd0fbe73 klal 2025-06-19 20:09:33 +02:00
dac032de7c klal (Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 0s) 2025-06-08 22:58:39 +02:00
899d130325 klal (Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 0s) 2025-05-24 23:34:28 +02:00
8e7fb3bc42 klal (Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 0s) 2025-05-24 20:11:29 +02:00
46da0fa6e9 klal 2025-05-24 20:05:19 +02:00
2f3f58c965 klal 2025-05-24 19:57:19 +02:00
264f510541 klal 2025-05-24 19:56:14 +02:00
b01bdb59f1 klal 2025-05-24 19:51:55 +02:00
644d8b1a59 klal 2025-05-24 19:50:01 +02:00
230c665365 klal 2025-05-24 19:47:41 +02:00
6a0f33c73f klal 2025-05-24 19:46:03 +02:00
026925081e klal 2025-05-24 19:36:39 +02:00
5426f7ae3d klal (Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 1s) 2025-05-20 13:23:57 +02:00
d255ad37ad klal (Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 0s) 2025-05-07 18:00:24 +02:00
89030dec11 klal 2025-05-07 17:07:13 +02:00
6df5f17cfe klal 2025-05-07 17:03:31 +02:00
ff8ebb3940 klal 2025-05-07 17:01:40 +02:00
6a720e2e89 klal 2025-05-07 16:58:23 +02:00
02dc0134c4 klal 2025-05-07 16:56:57 +02:00
e7fb37545f klal 2025-05-07 16:51:45 +02:00
5927ad571e klal 2025-05-07 16:50:23 +02:00
9871b8cb29 klal (Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 0s) 2025-05-06 18:20:35 +02:00
6de27cd975 klal 2025-05-06 18:15:53 +02:00
d2c7c49d68 klal 2025-05-06 18:15:06 +02:00
9e3ce2e113 klal 2025-05-06 18:05:32 +02:00
bb248011ad klal 2025-05-06 18:04:03 +02:00
4a7838cd19 klal (Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 0s) 2025-05-05 22:20:28 +02:00
9a21d1c273 klal 2025-05-05 18:31:26 +02:00
2b4421d92d klal 2025-05-05 16:31:28 +02:00
8c88139223 renamed customer user group (Gitea Actions Demo / Explore-Gitea-Actions (push): successful in 1s) 2025-04-28 19:22:21 +02:00
a2825f31c3 renamed customer user group (Gitea Actions Demo / Explore-Gitea-Actions (push): successful in 1s) 2025-04-28 19:02:50 +02:00
024a3bb61f klal (Gitea Actions Demo / Explore-Gitea-Actions (push): cancelled) 2025-04-28 16:56:26 +02:00
9bc4c937de klal (Gitea Actions Demo / Explore-Gitea-Actions (push): cancelled) 2025-04-28 15:05:09 +02:00
f339cb755a klal 2025-04-26 19:37:01 +02:00
5ab8d6cf02 klal 2025-04-25 14:26:00 +02:00
153c3a5d1a renamed customer user group 2025-04-23 11:04:30 +02:00
f18365f184 renamed customer user group 2025-04-16 15:27:19 +02:00
e6ab9ac621 renamed customer user group 2025-04-16 12:36:30 +02:00
c2ba911536 renamed customer user group 2025-04-16 12:36:15 +02:00
9be37d9ad5 renamed customer user group 2025-04-16 11:38:58 +02:00
80ddec608a lala 2025-04-16 09:32:40 +02:00
24191afe3d alias 2025-04-16 09:31:09 +02:00
4c1e374f23 renamed customer user group 2025-04-16 09:19:25 +02:00
eaa4ecd07a renamed customer user group 2025-04-16 09:18:30 +02:00
d4ee9dc3eb renamed customer user group 2025-04-14 21:09:17 +02:00
16414f9bc4 renamed customer user group 2025-04-14 21:07:04 +02:00
04eb197989 renamed customer user group 2025-04-14 21:03:09 +02:00
5957435c36 renamed customer user group 2025-04-14 20:59:24 +02:00
b3de421b3a renamed customer user group 2025-04-14 20:55:20 +02:00
977c9ca44e renamed customer user group 2025-04-14 20:45:52 +02:00
e15b4ec2a8 renamed customer user group 2025-04-14 20:39:59 +02:00
365a045d4d renamed customer user group 2025-04-14 20:39:27 +02:00
0c730c0b65 Fixed taging for disabled host 2025-04-09 14:05:27 +02:00
83f37fc18a Fixed taging for disabled host 2025-04-09 13:52:48 +02:00
378d2ee456 Fixed taging for disabled host 2025-04-09 13:38:03 +02:00
7632faae6e Fixed taging for disabled host 2025-04-09 13:34:17 +02:00
2285c420ec Fixed taging for disabled host 2025-04-09 12:17:00 +02:00
187c422759 Fixed taging for disabled host 2025-04-09 12:15:02 +02:00
b2b98ef238 Fixed taging for disabled host 2025-04-09 12:08:48 +02:00
5fd82279f1 Fixed taging for disabled host 2025-04-09 12:08:02 +02:00
9eb9fb6190 alias 2025-04-06 03:14:44 +02:00
75a458dfb5 alias 2025-04-05 23:21:06 +02:00
792961fe55 lala 2025-04-04 00:34:53 +02:00
5ceb74a148 lala 2025-04-04 00:07:57 +02:00
a42ef3a30b lala 2025-04-04 00:06:47 +02:00
dbd2a549be lala 2025-04-04 00:04:07 +02:00
14d455d84d aaa 2025-03-18 18:20:37 +01:00
62584425b4 aaa 2025-03-18 18:04:58 +01:00
482244589f aaa 2025-03-18 17:56:00 +01:00
3f601f92a0 aaa 2025-03-18 17:24:20 +01:00
49 changed files with 1516 additions and 827 deletions

View File

@@ -0,0 +1,9 @@
name: Gitea Actions Demo
run-name: ${{ gitea.actor }} is testing out Gitea Actions 🚀
on: [push]
jobs:
Explore-Gitea-Actions:
runs-on: jaydee
steps:
- run: curl -X GET https://kestra.sectorq.eu/api/v1/executions/webhook/jaydee/ansible-all/f851511c32ca9450

all.yml (44 changed lines)
View File

@@ -1,31 +1,67 @@
---
- hosts: datacenter
name: Roles
gather_facts: false
roles:
- name: setup
role: setup
tags: setup
- name: common
tags: common
role: common
- name: hosts
role: hosts
tags: hosts
- name: ssh_config
role: ssh_config
tags: ssh_config
- name: sshd_config
role: sshd_config
tags: sshd_config
- name: ssh_keys
role: ssh_keys
tags: ssh_keys
- name: wake_on_lan
role: wake_on_lan
tags: wake_on_lan
- name: matter-server
role: matter-server
tags: matter-server
- name: docker
role: docker
tags: docker
- name: timeshift
role: timeshift
tags: timeshift
- name: monitoring
role: monitoring
tags: monitoring
- name: zabbix-agent
role: zabbix-agent
tags: zabbix-agent
- name: autofs_client
role: autofs_client
tags: autofs_client
- name: ldap_client
role: ldap_client
tags: ldap_client
- name: ssh_banner
role: ssh_banner
tags: ssh_banner
- name: omv_backup
role: omv_backup
tags: omv_backup
- name: wazuh-agent
role: wazuh-agent
tags: wazuh-agent
- role: mqtt-srv
- name: mqtt-srv
role: mqtt-srv
tags: mqtt-srv
- role: vnc_server
tags: vnc_server
- name: vnc_server
role: vnc_server
tags: vnc_server
- name: promtail
role: promtail
tags: promtail
- name: sudoers
role: sudoers
tags: sudoers

View File

@@ -1,3 +1,6 @@
---
# requirements.yml
# ansible-galaxy collection install -r collections/requirements.yml --force
collections:
- name: community.general
- name: community.general
source: https://galaxy.ansible.com
- name: community.docker

hosts (1 changed line)
View File

@@ -10,6 +10,7 @@
# Ex 1: Ungrouped hosts, specify before any group headers.
#green.example.com
#blue.example.com
#192.168.100.1

View File

@@ -73,7 +73,7 @@ datacenter:
ansible_winrm_kerberos_delegation: true
mqtt_srv:
children:
servers:
servers1:
hosts:
rpi5-1.home.lan:
rpi5.home.lan:
@@ -110,13 +110,42 @@ datacenter:
containers:
children:
docker_servers:
children:
router:
hosts:
router.home.lan:
vars:
ansible_python_interpreter: /usr/bin/python3
ansible_ssh_user: root
ansible_ssh_private_key_file: ssh_key.pem
srv:
hosts:
rpi5.home.lan:
m-server.home.lan:
vars:
ansible_python_interpreter: /usr/bin/python3
ansible_ssh_user: jd
ansible_become_password: l4c1j4yd33Du5lo
ansible_ssh_private_key_file: ssh_key.pem
identity_file: ssh_key.pem
ns:
hosts:
nas.home.lan:
vars:
ansible_ssh_user: admin
become_method: su
become_user: admin
ansible_ssh_private_key_file: ssh_key.pem
# ansible_user: admin
# ansible_pass: l4c1!j4yd33?Du5lo1
ansible_python_interpreter: /share/ZFS530_DATA/.qpkg/QPython312/bin/python3
servers:
hosts:
rpi5-1.home.lan:
rpi5.home.lan:
m-server.home.lan:
fog.home.lan:
omv.home.lan:
amd.home.lan:
rack.home.lan:
vars:
ansible_python_interpreter: /usr/bin/python3
ansible_ssh_user: jd

View File

@@ -1,17 +1,17 @@
$ANSIBLE_VAULT;1.1;AES256
37396163363830306632376461613061333432336166376338306632633139383336343536316463
3863643031313433613130613665373466383432323039350a333365363839616135353061653834
38396136343338366162366366326265346632656561636535633631346638333730613763373065
3732386136373565620a643661333137373738333332633631303535333836666465643862396634
62633466346463363363313162376464393533636335336533313536333531366139393134323733
64643535346530653865633034636466643635633430376539633061353037353236333531396531
64336133663630663438303266653662326463396565323664303764356264623661303465643038
36376531323365643363363465353064623630663662633238663661346630326464356232303564
30316265613438643731626463626564663963613036386235383766616561323235636566333438
31633933343138383237363765663735656362376132363336633631336462636531346664353435
33623935326532646136646436613662316431306336613632643639386534343532666237633433
63343031376462616262623965363139343961376162646133376232323365656663376361663539
62613637393630303830653232663563333436373663656434646632396162653030333034383961
62626334623833393536323035636135663530326138366332666535336130373733323835663232
36313035353436633962633435623232323362633265666330623761373162303235376264613339
37343139333730346362
34653034626436373537323430316462643663336164613763306336333038346562356565393036
3964393861323439333839383061303864326235306665620a346233313633393135366362326464
63643039363635646131323365313833643864373637346536663831613837353833343030623366
3038303063393565350a613439646161363330626566646264313939653339383439623532636638
38646433353765396136333236656535636235313639393565306636376438346362646438613835
62663031333832666262616365343831353530646263383932373666386631633430626363363966
61396336303365306135363039303032646137613330646434366638633738363064356132383439
36346432306531356333313963353463626232613563653331396334656539643531343136636635
31613762383664353930653165313461626133336161353639303662666234356138373539376161
30653837316266356136353132373663396365633434393166383230363263326139316362383766
64303738393663343636616437346535346566346536616663333866613966343563306265633064
66333331393861626637616330333463636135316466616532373663663464613034656337363437
62653333653838326632643238616638313935383532303233643132303637653963626363633662
33646161373931386133353338643462306635393866656662376234396533376431366134653536
36363835346434323338363465336166303161633732333232653861646136326334616261653462
66376139313433383665

View File

@@ -2,7 +2,7 @@
name: Sync rpi5
become: true
tasks:
- name: Apt exclude linux-dtb-current-meson64
- name: Get running containers
ansible.builtin.shell: "docker ps|awk '{print $NF}'"
register: containers
- debug:
@@ -13,4 +13,4 @@
when: item != "NAMES" and item != "watchtower-watchtower-1"
with_items: "{{ containers.stdout_lines }}"
- name: Sync data
ansible.builtin.shell: "/myapps/venv/bin/python3 /myapps/omv_backup.py -r all"
ansible.builtin.shell: "/myapps/venv/bin/python3 /myapps/omv_backup.py -r all"

View File

@@ -4,54 +4,87 @@
# vars:
# DOCKER_IMAGE: docker-tasmota
# FWS: tasmota
become: true
tasks:
- name: Pull tasmota
ansible.builtin.shell:
cmd: 'git config --global --add safe.directory /share/docker_data/docker-tasmota/Tasmota'
- name: Fetch tasmota
ansible.builtin.shell:
cmd: 'git fetch https://github.com/arendst/Tasmota.git {{ BRANCH }}'
chdir: /share/docker_data/docker-tasmota/Tasmota
- name: Change conf
community.general.git_config:
name: safe.directory
scope: global
value: /share/docker_data/docker-tasmota/Tasmota
- name: Checkout tasmota branch
ansible.builtin.shell:
cmd: 'git checkout --force {{ BRANCH }}'
chdir: /share/docker_data/docker-tasmota/Tasmota
# - name: Pull tasmota
# ansible.builtin.shell:
# cmd: 'git config --global --add safe.directory /share/docker_data/docker-tasmota/Tasmota'
- name: Pull tasmota
ansible.builtin.shell:
cmd: 'git pull'
chdir: /share/docker_data/docker-tasmota/Tasmota
- name: Check out the Tasmota repo at the requested branch
ansible.builtin.git:
repo: 'https://github.com/arendst/Tasmota.git'
dest: /share/docker_data/docker-tasmota/Tasmota
version: '{{ BRANCH }}'
# - name: Fetch tasmota
# ansible.builtin.shell:
# cmd: 'git fetch https://github.com/arendst/Tasmota.git {{ BRANCH }}'
# chdir: /share/docker_data/docker-tasmota/Tasmota
- name: Git checkout
ansible.builtin.git:
repo: 'https://github.com/arendst/Tasmota.git'
dest: /share/docker_data/docker-tasmota/Tasmota
version: '{{ BRANCH }}'
# - name: Checkout tasmota branch
# ansible.builtin.shell:
# cmd: 'git checkout --force {{ BRANCH }}'
# chdir: /share/docker_data/docker-tasmota/Tasmota
- name: Just get information about the repository whether or not it has already been cloned locally
ansible.builtin.git:
repo: https://github.com/arendst/Tasmota.git
dest: /share/docker_data/docker-tasmota/Tasmota
update: true
# - name: Pull tasmota
# ansible.builtin.shell:
# cmd: 'git pull'
# chdir: /share/docker_data/docker-tasmota/Tasmota
- name: Copy platformio_override
ansible.builtin.shell:
ansible.builtin.command:
cmd: 'cp platformio_override.ini Tasmota/platformio_override.ini'
chdir: /share/docker_data/docker-tasmota/
register: my_output
changed_when: my_output.rc != 0
- name: Copy user_config_override
ansible.builtin.shell:
ansible.builtin.command:
cmd: 'cp user_config_override.h Tasmota/tasmota/user_config_override.h'
chdir: /share/docker_data/docker-tasmota/
register: my_output
changed_when: my_output.rc != 0
- name: Build tasmota
ansible.builtin.shell:
cmd: 'docker run --rm -v /share/docker_data/docker-tasmota/Tasmota:/tasmota -u $UID:$GID {{ DOCKER_IMAGE }} -e {{ FWS }}'
ansible.builtin.command:
cmd: 'docker run --rm -v /share/docker_data/docker-tasmota/Tasmota:/tasmota -u 0:0 {{ DOCKER_IMAGE }} -e {{ FWS }}'
chdir: /share/docker_data/docker-tasmota/
when: FWS != "all"
register: my_output
changed_when: my_output.rc != 0
- name: Build tasmota
ansible.builtin.shell:
cmd: 'docker run --rm -v /share/docker_data/docker-tasmota/Tasmota:/tasmota -u $UID:$GID {{ DOCKER_IMAGE }}'
ansible.builtin.command:
cmd: 'docker run --rm -v /share/docker_data/docker-tasmota/Tasmota:/tasmota -u 0:0 {{ DOCKER_IMAGE }}'
chdir: /share/docker_data/docker-tasmota/
when: FWS == "all"
register: my_output
changed_when: my_output.rc != 0
- name: Create a directory if it does not exist
ansible.builtin.file:
path: /share/docker_data/webhub/fw/{{ BRANCH }}
state: directory
mode: '0755'
mode: '0755'
- name: Move firmware to webhub
ansible.builtin.shell:
cmd: 'mv /share/docker_data/docker-tasmota/Tasmota/build_output/firmware/* /share/docker_data/webhub/fw/{{ BRANCH }}'
ansible.builtin.shell:
cmd: 'mv /share/docker_data/docker-tasmota/Tasmota/build_output/firmware/* /share/docker_data/webhub/fw/{{ BRANCH }}/'

View File

@@ -1,10 +0,0 @@
- hosts: nas
name: Sync mailu
ignore_unreachable: false
tasks:
- name: Syncing all
ansible.builtin.shell: 'rsync -avh --delete root@192.168.77.189:/srv/dev-disk-by-uuid-02fbe97a-cd9a-4511-8bd5-21f8516353ee/docker_data/latest/{{ CONTAINERS }} /share/docker_data/ --exclude="home-assistant.log*" --exclude="gitlab/logs/*"'
#ansible.builtin.shell: 'rsync -avh --delete /share/docker_data/{mailu2,webhub,nginx,heimdall} root@192.168.77.238:/share/docker_data/ --exclude="home-assistant.log*" --exclude="gitlab/logs/*"'
#ansible.builtin.shell: 'ls -la'
when: inventory_hostname in groups['nas']
# loop: '{{ CONTAINERS }}'

View File

@@ -1,90 +1,173 @@
- hosts: containers
name: Switch mailu to second
- hosts: docker_servers
name: Switch server
ignore_unreachable: false
vars:
arch_name: docker_mailu2_data
containers:
- nginx-app-1
- heimdall
- mailu2-admin-1
- mailu2-antispam-1
- mailu2-antivirus-1
- mailu2-fetchmail-1
- mailu2-front-1
- mailu2-imap-1
- mailu2-oletools-1
- mailu2-redis-1
- mailu2-resolver-1
- mailu2-smtp-1
- mailu2-webdav-1
- mailu2-webmail-1
- HomeAssistant
- mosquitto-mosquitto-1
- gitlab
- watchtower-watchtower-1
- kestra-kestra-1
- kestra-postgres-1
- authentik-worker-1
- authentik-server-1
- authentik-redis-1
- authentik-postgresql-1
tasks:
- name: Start mailu containers
command: "docker start {{ containers | join(' ') }}"
become: true
- name: Set igmp_max_memberships in sysctl.conf
ansible.builtin.lineinfile:
path: /etc/sysctl.conf
regexp: "^net.ipv4.igmp_max_memberships =.*"
line: "net.ipv4.igmp_max_memberships = 1024"
create: true
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
when: inventory_hostname != "router.home.lan"
- name: Start containers
shell: docker start `docker ps -a |awk '{ print $NF }'|grep -v NAME |xargs`
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
ignore_errors: true
when: inventory_hostname in groups['raspberrypi5']
- name: Get ruleset
command: nvram get vts_rulelist
when: inventory_hostname in groups['router']
register: ruleset
- name: Print the current ruleset
ansible.builtin.debug:
msg: "var is {{ ruleset.stdout }}"
when: inventory_hostname in groups['router']
when: inventory_hostname == destination and inventory_hostname != "nas.home.lan"
- name: Start containers
shell: docker exec -it gitlab update-permissions
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
ignore_errors: true
when: inventory_hostname == destination and inventory_hostname != "nas.home.lan and inventory_hostname != "rpi5.home.lan"
- name: Print the destination host
ansible.builtin.debug:
msg: "var is {{ destination }}"
when: inventory_hostname in groups['router']
- name: Start containers
shell: /share/ZFS530_DATA/.qpkg/container-station/bin/docker exec -it gitlab update-permissions
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
ignore_errors: true
when: inventory_hostname == destination and inventory_hostname == "nas.home.lan"
- name: Initialize variables
set_fact:
regexp: "\\g<1>{{ destination }}\\3"
when: inventory_hostname in groups['router']
- name: Start containers
shell: /share/ZFS530_DATA/.qpkg/container-station/bin/docker start `/share/ZFS530_DATA/.qpkg/container-station/bin/docker ps -a |awk '{ print $NF }'|grep -v NAME |xargs`
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
ignore_errors: true
when: inventory_hostname == destination and inventory_hostname == "nas.home.lan"
- set_fact:
app_path: "{{ ruleset.stdout | regex_replace('(\\<MAIL_SERVER\\>[0-9,]{1,}\\>)([0-9.]{1,})(\\>[0-9a-zA-Z\\s-]{0,}\\>TCP\\>)', regexp) | regex_replace('(\\<WEB_SERVER\\>[0-9,]{1,}\\>)([0-9.]{1,})(\\>[0-9a-zA-Z\\s-]{0,}\\>TCP\\>)', regexp) }}"
when: inventory_hostname in groups['router']
- name: Get authentication token
ansible.builtin.uri:
url: http://localhost:9380/api/auth
method: POST
body_format: json
body: {"password":"l4c1j4yd33Du5lo"}
register: login
when: inventory_hostname != "router.home.lan"
# - debug:
# msg: "{{ login.json.session }}"
- name: Get Config
ansible.builtin.uri:
url: http://localhost:9380/api/config
method: GET
headers:
X-FTL-SID: "{{ login.json.session.sid }}"
register: old_config
when: inventory_hostname != "router.home.lan"
# - debug:
# msg: "{{ old_config.json.config.dns.cnameRecords }}"
- name: Parse config
ansible.builtin.set_fact:
jsondata: "{{ old_config }}"
- name: New records for nas
ansible.builtin.set_fact:
new_data: ["mqtt.home.lan,nas.home.lan","media.home.lan,nas.home.lan","ldap.home.lan,nas.home.lan","webhub.home.lan,nas.home.lan","semaphore.home.lan,nas.home.lan","active.home.lan,nas.home.lan"]
when: destination == 'nas.home.lan'
- name: New records for m-server
ansible.builtin.set_fact:
new_data: ["mqtt.home.lan,m-server.home.lan","media.home.lan,m-server.home.lan","ldap.home.lan,m-server.home.lan","webhub.home.lan,m-server.home.lan","semaphore.home.lan,m-server.home.lan","active.home.lan,m-server.home.lan"]
when: destination == 'm-server.home.lan'
- name: New records for rpi5
ansible.builtin.set_fact:
new_data: ["mqtt.home.lan,rpi5.home.lan","media.home.lan,rpi5.home.lan","ldap.home.lan,rpi5.home.lan","webhub.home.lan,rpi5.home.lan","semaphore.home.lan,rpi5.home.lan","active.home.lan,rpi5.home.lan"]
when: destination == 'rpi5.home.lan'
- name: Print the updated ruleset
ansible.builtin.debug:
msg: "var is {{ app_path }}"
when: inventory_hostname in groups['router']
# - debug:
# msg: "{{ new_data }}"
- name: Set new values
ansible.utils.update_fact:
updates:
- path: jsondata.json.config.dns.cnameRecords
value: "{{ new_data }}"
register: new_config
when: inventory_hostname != "router.home.lan"
- name: Pause for 60 seconds
ansible.builtin.pause:
seconds: 60
- name: Set new ruleset
command: nvram set vts_rulelist="{{ app_path }}"
when: inventory_hostname in groups['router']
- name: Nvram commit
command: nvram commit
when: inventory_hostname in groups['router']
- name: Restart firewall
command: service restart_firewall
when: inventory_hostname in groups['router']
- name: Patch config
ansible.builtin.uri:
url: http://localhost:9380/api/config
method: PATCH
body: "{{ new_config.jsondata.json |to_json}}"
headers:
X-FTL-SID: "{{ login.json.session.sid }}"
Content-Type: application/json
register: _result
until: _result.status == 200
retries: 3 # 3 retries, 5 seconds apart
delay: 5
when: inventory_hostname != "router.home.lan"
- name: Sleep for 10 seconds and continue with play
ansible.builtin.wait_for:
timeout: 10
- name: Logout
ansible.builtin.uri:
url: http://localhost:9380/api/auth
method: DELETE
status_code: 204
headers:
X-FTL-SID: "{{ login.json.session.sid }}"
when: inventory_hostname != "router.home.lan"
ignore_errors: true
- name: Setting up resolv.conf
ansible.builtin.copy:
dest: "/etc/resolv.conf"
content: |
nameserver 192.168.77.101
nameserver 192.168.77.106
nameserver 192.168.77.238
options rotate
options timeout:1
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
# until: _result.status == 204
# retries: 3 # 720 * 5 seconds = 1hour (60*60/5)
# delay: 5 # Every 5 seconds
- name: Sleep for 60 seconds and continue with play
ansible.builtin.wait_for:
timeout: 60
- name: Reconfigure router containers
shell: python3 /root/unifi-api/unifi.py -s -d "{{ destination.split('.')[0] }}"
when: inventory_hostname == "router.home.lan"
- name: Stop containers
shell: docker stop `docker ps -a |awk '{ print $NF }'|egrep -v "NAME|^pihole$|watchtower|portainer" |xargs`
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
ignore_errors: true
when: inventory_hostname != destination and inventory_hostname != "nas.home.lan" and inventory_hostname != "router.home.lan"
- name: Restart containers
shell: docker restart nginx-app-1
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
when: inventory_hostname == destination
- name: Stop containers
shell: /share/ZFS530_DATA/.qpkg/container-station/bin/docker stop `/share/ZFS530_DATA/.qpkg/container-station/bin/docker ps -a |awk '{ print $NF }'|egrep -v "NAME|pihole|watchtower" |xargs`
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
ignore_errors: true
when: inventory_hostname != destination and inventory_hostname == "nas.home.lan" and inventory_hostname != "router.home.lan"
- name: Sleep for 120 seconds and continue with play
ansible.builtin.wait_for:
timeout: 120
# - name: Restart containers
# shell: docker restart nginx-app-1
# become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
# when: inventory_hostname == destination

View File

@@ -0,0 +1,15 @@
- hosts: datacenter
name: Switch server
ignore_unreachable: false
tasks:
- name: Unifi Modify
ansible.builtin.uri:
url: http://192.168.77.101:8123/api/webhook/-WcEse1k5QxIBlQu5B0u-5Esb?server=nas
method: POST
when: inventory_hostname == destination and destination == "nas.home.lan"
- name: Unifi Modify
ansible.builtin.uri:
url: http://192.168.77.101:8123/api/webhook/-WcEse1k5QxIBlQu5B0u-5Esb?server=m-server
method: POST
when: inventory_hostname == destination and destination == "m-server.home.lan"

View File

@@ -1,83 +1,109 @@
- block:
- name: include vault
ansible.builtin.include_vars:
file: jaydee.yml
- name: Install autofs
ansible.builtin.apt:
name:
- autofs
- cifs-utils
state: present
- name: Setup autofs
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Include vault
ansible.builtin.include_vars:
file: jaydee.yml
- name: Install autofs
ansible.builtin.apt:
name:
- autofs
- cifs-utils
state: present
- name: Creating a file with content
copy:
dest: "/etc/auto.auth"
content: |
username={{ samba_user }}
password={{ samba_password }}
- name: Creating a file with content
copy:
dest: "/etc/auto.nas-movies"
content: |
movies -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0777,file_mode=0777,uid=jd,rw ://nas.home.lan/movies
- name: Creating a file with content
copy:
dest: "/etc/auto.nas-music"
content: |
music -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0777,file_mode=0777,uid=jd,rw ://nas.home.lan/music
- name: Creating a file with content
copy:
dest: "/etc/auto.nas-shows"
content: |
shows -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0777,file_mode=0777,uid=jd,rw ://nas.home.lan/shows
- name: Creating a file with content
copy:
dest: "/etc/auto.nas"
content: |
nas-data -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/Data
nas-docker-data -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/docker_data
nas-photo -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/Photo
nas-public -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/Public
nas-install -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/install
nas-downloads -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/downloads
nas-games -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/qda_2
# - name: Reconfigure autofs Server
# ansible.builtin.lineinfile:
# path: /etc/auto.master
# regexp: "^/media/nas.*"
# insertafter: '^/media/nas'
# line: "/media/nas /etc/auto.nas --timeout 360 --ghost"
- name: Creating a file with content
ansible.builtin.copy:
dest: "/etc/auto.auth"
content: |
username={{ samba_user }}
password={{ samba_password }}
mode: '0600'
owner: root
group: root
- name: Creating a file with content
ansible.builtin.copy:
dest: "/etc/auto.m-server"
content: |
docker_data -fstype=nfs m-server.home.lan:/share/docker_data
downloads -fstype=nfs m-server.home.lan:/media/data/downloads
mode: '0600'
owner: root
group: root
- name: Creating a file with content
ansible.builtin.copy:
dest: "/etc/auto.nas-movies"
content: |
movies -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/movies
mode: '0600'
owner: root
group: root
- name: Reconfigure autofs Server
ansible.builtin.lineinfile:
path: /etc/auto.master
regexp: "^/media/data/music/nas.*"
line: /media/data/music/nas /etc/auto.nas-music --timeout 360 --ghost
- name: Reconfigure autofs Server
ansible.builtin.lineinfile:
path: /etc/auto.master
regexp: "^/media/data/movies/nas.*"
line: /media/data/movies/nas /etc/auto.nas-movies --timeout 360 --ghost
- name: Reconfigure autofs Server
ansible.builtin.lineinfile:
path: /etc/auto.master
regexp: "^/media/data/shows/nas.*"
line: /media/data/shows/nas /etc/auto.nas-shows --timeout 360 --ghost
- name: Reconfigure autofs Server
ansible.builtin.lineinfile:
path: /etc/auto.master
line: /media/nas /etc/auto.nas --timeout 360 --ghost
- name: Creating a file with content
ansible.builtin.copy:
dest: "/etc/auto.nas-music"
content: |
music -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/music
mode: '0600'
owner: root
group: root
- name: Creating a file with content
ansible.builtin.copy:
dest: "/etc/auto.nas-shows"
content: |
shows -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/shows
mode: '0600'
owner: root
group: root
- name: Creating a file with content
ansible.builtin.copy:
dest: "/etc/auto.nas"
content: |
nas-data -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/Data
nas-docker-data -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/docker_data
nas-photo -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/Photo
nas-public -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/Public
nas-install -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/install
nas-downloads -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/downloads
nas-games -fstype=cifs,credentials=/etc/auto.auth,dir_mode=0755,file_mode=0755,uid=jd,rw ://nas.home.lan/qda_2
mode: '0600'
owner: root
group: root
# - name: Reconfigure autofs Server
# ansible.builtin.lineinfile:
# path: /etc/auto.master
# regexp: "^/media/nas.*"
# insertafter: '^/media/nas'
# line: "/media/nas /etc/auto.nas --timeout 360 --ghost"
- name: Reconfigure autofs Server
ansible.builtin.lineinfile:
path: /etc/auto.master
regexp: "^/media/data/music/nas.*"
line: /media/data/music/nas /etc/auto.nas-music --timeout 360 --ghost
- name: Reconfigure autofs Server
ansible.builtin.lineinfile:
path: /etc/auto.master
regexp: "^/media/data/movies/nas.*"
line: /media/data/movies/nas /etc/auto.nas-movies --timeout 360 --ghost
- name: Reconfigure autofs Server
ansible.builtin.lineinfile:
path: /etc/auto.master
regexp: "^/media/data/shows/nas.*"
line: /media/data/shows/nas /etc/auto.nas-shows --timeout 360 --ghost
- name: Reconfigure autofs Server
ansible.builtin.lineinfile:
path: /etc/auto.master
line: /media/nas /etc/auto.nas --timeout 360 --ghost
- name: Reconfigure autofs Server
ansible.builtin.lineinfile:
path: /etc/auto.master
line: /media/m-server /etc/auto.m-server --timeout 360 --ghost
- name: Restart autofs service
ansible.builtin.service:
name: autofs
state: restarted
become: true
- name: Restart autofs service
ansible.builtin.service:
name: autofs
state: restarted

View File

@@ -1,8 +1,10 @@
- name: Upgrade the full OS
ansible.builtin.apt:
upgrade: full
become: true
- name: Upgrade flatpak
ansible.builtin.command: flatpak update -y
become: true
when: inventory_hostname == 'morefine.home.lan'
- name: Upgrade
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Upgrade the full OS
ansible.builtin.apt:
update_cache: true
upgrade: full
- name: Upgrade flatpak
ansible.builtin.command: flatpak update -y
when: inventory_hostname == 'morefine.home.lan'

roles/docker/files/ca.pem (new executable file, 33 lines)
View File

@@ -0,0 +1,33 @@
-----BEGIN CERTIFICATE-----
MIIFqTCCA5GgAwIBAgIUJ3kgn/onrwoKs+MqhsHo7RmF/20wDQYJKoZIhvcNAQEL
BQAwZDELMAkGA1UEBhMCU0sxETAPBgNVBAgMCFNsb3Zha2lhMQswCQYDVQQHDAJT
SzETMBEGA1UECgwKc2VjdG9ycS5ldTELMAkGA1UECwwCSVQxEzARBgNVBAMMCnNl
Y3RvcnEuZXUwHhcNMjUwMzExMTc1MDA5WhcNMjYwMzExMTc1MDA5WjBkMQswCQYD
VQQGEwJTSzERMA8GA1UECAwIU2xvdmFraWExCzAJBgNVBAcMAlNLMRMwEQYDVQQK
DApzZWN0b3JxLmV1MQswCQYDVQQLDAJJVDETMBEGA1UEAwwKc2VjdG9ycS5ldTCC
AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAJsXcxwOjZ3jBO3j7gps12vo
zXmSNEoka5RiUvZlfopifwKVxFMzAJd/yoeaxiUBYKIlHgZ/OYu/+WkrwgpX2HO3
2ZuB83Ym7P3TkTBhRp1S/HqBIb6aORGKhiuhZt6PNiCgqFszmb4Wl0Ox2cYxWYi5
1DeHXNa5vRob2rSfsJwtamiksJkAsXclQu5dyfMv+cvc4Pob1o/DT76+xDpqT4lr
pzXhpfXyT/xwtOEWku/53fccU0SBSSHPp6HzZUWHoodmHPigYYFEz1drYk1nDr3u
gZq+nEQAVpcn1JrH7DuUaX/CrgBZNRdQ8d+mQ9EEDAQXNfzlH10ebfTjm2ol40cu
9mwVJQ5Ru+h2xvfAlbcqnDTinXFgABuquSNzEz/1eJMIhm+myVOqF1WGeA/LnXGp
OaNny7oQW8/9OLmpAZKIFzcD7KxvdBAu9IkO/KduqJohD8BBPqVAksan85bmEs8R
Iu46XAJ7nmlX1DLchBtwvYv5MRdna73M52rTpNlmidWuiUeysZs8Nx7dGh1bd5I6
9JnHcMl01UorQn0uitnO9zrOTEg0KkEmUZab1A2CbqeoYYLXi72Sva959faviXb0
0HaPDtWuih9jQORu7fH7H6ghLFdfgUOp9am1hQpX1P7uXmUOB4iztMrh3bM8m2ZE
HEvr+VfNkcq9KaAfXPhHAgMBAAGjUzBRMB0GA1UdDgQWBBTG6a566m85pq5bLi0O
nC5y0pg6sjAfBgNVHSMEGDAWgBTG6a566m85pq5bLi0OnC5y0pg6sjAPBgNVHRMB
Af8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4ICAQA5g9OxfcoAbYvi2T89E5205QkC
ZxwWgrHIVGICeOF1K2nIypnddoJQUvMT/GYIK4QjZSWLGB2+YZMtXS+U/C9uxKOm
d7bbzp437wUZwUJRtA4JZayxIitVTtzLYYLimb13GrsPs2KwGaZALe0K7dYzDwP1
74gqOPvP7snDD98c6HV6vVXnTN+0T7djQyv/TqcyQ/IZjVY6JpsqgMg1rHqkYhDM
Na7XBgwOt0Y4QmgS6EYEVv1+QsVB0U1tdH1oa+zwiyj5xDwVNmU5bLocEq3kYIRU
tQUarNNKY4fMq529Heq7Ki63DLYTP8tJGh0Yijm9SFPqKYaZy6iL5xbdRFNCIFR/
FnBZmRVxvPealAoIg9vutHkQrdqebBfX11PwWtLn+fkGTXq+5fBwjYllK04/MBk0
SNjt6qwnOGZOc4gmEjthF4oVcVKoE7sVSCdgu/2jtLeJ48s0MwGhWZCk21ZgJbZY
5gMahOiSndmudTo1ubFrqLb71MBTpqjiHTF2VLdxZEsrFCqeQAbsG+KmMuj+UhzV
yuO3ycAGSDxsgbyHHYzjo2O5BvY35J7w1lZe1CExgoeeYFWlJ6t5PySf6OJupFit
7FNwYgVXqC3+vwEWmbXz0WHwPh4aCvfSuNAHoiwX2UyzceYOWB5F4TmA2Chj23Ih
isOdaq7ol1Q0iF9tjQ==
-----END CERTIFICATE-----


@@ -0,0 +1,32 @@
-----BEGIN CERTIFICATE-----
MIIFkDCCA3igAwIBAgIUUYzivwquTJnP+9/Q/zb/0Ew+eVowDQYJKoZIhvcNAQEL
BQAwZDELMAkGA1UEBhMCU0sxETAPBgNVBAgMCFNsb3Zha2lhMQswCQYDVQQHDAJT
SzETMBEGA1UECgwKc2VjdG9ycS5ldTELMAkGA1UECwwCSVQxEzARBgNVBAMMCnNl
Y3RvcnEuZXUwHhcNMjUwMzExMTc1MDEzWhcNMjYwMzExMTc1MDEzWjAcMRowGAYD
VQQDDBFtLXNlcnZlci5ob21lLmxhbjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCC
AgoCggIBALcgqTwwWnKeiHt1ZZQjoyZw/c/DbPwQnBuQVhNGF6RX7apXP/eY4Sf8
/l2y6awZd6vM4JyFonPENbll/dEVgFEPgwwiqiaBC9PuZIbC60LLYwpDUmaHXNAd
xgohSWOEc7uT1lcW2yn5n1A93JpoOScb/dAmjWPUYV3BqnKTtcqVs3a5SzWxnIqO
szWt97SZpRY3GWIAiOmFqcKE5gL7FkSaMyS81E/Qfct/37o5OHWpiBhzLZUyop1e
z9f7RrgDRzEoNlJisWFY/wF0xvmowkslL8QsYBTkfgofP7dEm8MOn0hJOFzuUY75
TAp+h6wiL0bhTab4XDOrFjFy5ivehICdDSal+IlNEmI9Zsziy/1gW7WXCMMgOXKn
xX7se2OFbHGCaf9NCn+0ODHev9ZeDni5SQsgyD3Zjyh3kc7AZ97M8jNJlCGb2QaJ
f/BF2Q9EzbQYHjor97r/+tMdvYkYNo9+FYoJH3yP+T378Tn+DFe8KthvbqCSF01t
aDdfcRu0p+qNalVkD2rctohJgiEuhzVIIpfqe3P9yMyzBYgwoXMUIthug4wOo8gE
Xwr7cgTTK8pxPQGlo1JL0WuBxodtdHP9/VQmf3Qkgj3W0UTAP3rphnvg/5S5tqIT
P7W+HVjEzTEh2z2FGxz4lvEbo82FrhxnCrW+Gk/jhbY99Lr3SeetAgMBAAGjgYEw
fzAoBgNVHREEITAfghFtLXNlcnZlci5ob21lLmxhbocEwKhN7ocEfwAAATATBgNV
HSUEDDAKBggrBgEFBQcDATAdBgNVHQ4EFgQUOIy9QvfKWPuMGEp4C2yvjNO2uYsw
HwYDVR0jBBgwFoAUxumueupvOaauWy4tDpwuctKYOrIwDQYJKoZIhvcNAQELBQAD
ggIBAIJBsaPUjAApSDplyUGru6XnLL1UHjG+g49A12QIfgG9x2frRRhvAbx21121
sCJ5/dvHJS/a8xppcNd4cMFrvLrOkZn6s+gfeXc20sMscdyjnjIbxdmDiUwnhoFT
+9OKg5BYokg11PmEOhMEK7L9qEXaf5L+9TdcxBl/qvciqSpZ9FsOGDYCgB0EMsQ/
48/Tj/0ABF+c/+WVXzWL51Gdj6waM0qqXjGArbjAUA7ft8gy18n/6DyM3KWlZXCb
+mAwUGnOvHFNbb8jgxSDvFeIos0P6Edq0PDcK5k1uYEeATp0CC6/F3z1Eai2vKy+
c1BbJZtDJmlKTL+7vykHMSVqAuN/Vq4uvtxv1pOCR1UJk1mW0mr6Ovm9sVVk5HFD
3j6nOF81PiabdWA6GbbSCQdlpL2v0KipAR/sNheMwXAe+5NGJAiE5uaBgQSTVZS+
7b4DDKFxfkHR9ISOGURgf9wRxqF6jNS4qqQp9+sOdK6y++ZVGRTTpQbCHEg9V79r
TTGs4lbvaFCmF/Y9/NPSrRo//l+XhJrpjoeyx04iy6QipErCCFK2dHH5hYfS3ISt
kbaw2ARNqbcktQkWwA+W+rb83en/w3WG1v2vByKGCr1s4jHAhWtSLZhXx+PIYeT+
ml/kv+Y3W1T/lOcsytJrXug8t+g4nh9wYTnRl5YwruaKQjWF
-----END CERTIFICATE-----


@@ -0,0 +1,52 @@
-----BEGIN PRIVATE KEY-----
MIIJQgIBADANBgkqhkiG9w0BAQEFAASCCSwwggkoAgEAAoICAQC3IKk8MFpynoh7
dWWUI6MmcP3Pw2z8EJwbkFYTRhekV+2qVz/3mOEn/P5dsumsGXerzOCchaJzxDW5
Zf3RFYBRD4MMIqomgQvT7mSGwutCy2MKQ1Jmh1zQHcYKIUljhHO7k9ZXFtsp+Z9Q
PdyaaDknG/3QJo1j1GFdwapyk7XKlbN2uUs1sZyKjrM1rfe0maUWNxliAIjphanC
hOYC+xZEmjMkvNRP0H3Lf9+6OTh1qYgYcy2VMqKdXs/X+0a4A0cxKDZSYrFhWP8B
dMb5qMJLJS/ELGAU5H4KHz+3RJvDDp9ISThc7lGO+UwKfoesIi9G4U2m+FwzqxYx
cuYr3oSAnQ0mpfiJTRJiPWbM4sv9YFu1lwjDIDlyp8V+7HtjhWxxgmn/TQp/tDgx
3r/WXg54uUkLIMg92Y8od5HOwGfezPIzSZQhm9kGiX/wRdkPRM20GB46K/e6//rT
Hb2JGDaPfhWKCR98j/k9+/E5/gxXvCrYb26gkhdNbWg3X3EbtKfqjWpVZA9q3LaI
SYIhLoc1SCKX6ntz/cjMswWIMKFzFCLYboOMDqPIBF8K+3IE0yvKcT0BpaNSS9Fr
gcaHbXRz/f1UJn90JII91tFEwD966YZ74P+UubaiEz+1vh1YxM0xIds9hRsc+Jbx
G6PNha4cZwq1vhpP44W2PfS690nnrQIDAQABAoICAACEElRh8wKkg6xWkQULDMdi
wWen/H85frbufBhkyQH3NWjErCMmwzJsMWi9EUkKGs7VWKgLv7uadY4q03XHhgmc
GrAEwS6UaFmNgd5fmk3j1rHhUSIUyq8JNkbtIPr9bC+a6C/OuRYpE4o2V1zzPK1D
HokafrNqxHGne/g8ASfgGcApH9C1MwR9bnyi6txmhRcDM7SiZ5JCDCGdgg11eirz
45PvsAysg3ZfA4DAQOWn4defEj8NtO9kisbRKWBKosrrJmSWZ4fnd6F8TzSX/dO8
MEEXUW7RJ7G0vviTnSeQNnjsZB+wQk84y3lRGDzvCVxR7cqLdaKjMD38zQdr1HiM
IysiYw7aUQ8ukz+4I4izPmn/iDdTxNzTHSvaxCjKRqsaj9R3kEFqtVuOoInfwKD9
iSoEI35IkEIJwhvnt/xfZY03HwI7JBvSgA23zM5L2dvuM0nwGVcn+/WkLcYRum2y
hXRbpQ69dVTiFCxQG71bdcuK8z2lxXDPsyBjkcBta/WwQe8sHHdrszyc1Zf5DIDx
341bQ0cJEZQJD5BmKNij6Ow0N9g/0vySAScKF1zM9J0fE/XBihNYIH9JCXPRrFqw
BmUGmNjjyJSbnYMxjyVDz8g9026N+w23VtLv0UlA4hF3Hexupqol7XM+MhqNSFIO
A+F8Ho9U38LZfA3yt8JpAoIBAQD00RQmllHGtRR2zsIA0LPMVUyV3DOshJ4XYj8a
sN2rSU9rgNRB0rnpgWoGMAysOerPphvoY6bf1wrI3dFt5pzQMuKJLz6VFl135k5R
11kxZfCmZC/pIp3WLkIHDthAXkU5IKnWw/4vQgmIwTZ5I7rNjPaJYuoH8z5Buuwi
qUnEJj3czq4iNW2DHAFd657NQImrIbvN4T9SHLGrFBG3Bqf43xc/TMNqOnD7FcYe
+DIkBFXBFqx6pwMjP7hUwo88Oxzp7I/MaDXw9LnSPt2YQqdyNaaFiyk8JWc87LMq
DFaXFh+aON9XFxvKfCQA5uNCwyaWMi8zNWLpFTPKuZPPaWR5AoIBAQC/fi5ReLUL
HEpGgKw9UstgexmdnQLVisVfRH9eaQn/U6Yoo8XD0gpdjtqdA9dStV3jw9zKAoeP
twg819A/nl+kavDP1bGxaxEou9BUFvxyqw0OrA1bKznNlcpCNpqShSiFVO/6CqaU
awaDRuAsf4gs8/vKzw3q5bPErC+/a8x8USicOMc1tPrUxmTSwoXCfgtb+l7+7K48
QeA27zPxaOCotAhef1T6KW1mYC7vP0ertZwiG+Lqoh9fzrun5TUYielqqrAJWPFC
o12r6jqhr9a6dPZ0/ZBCK3JyvdYGt321P6yffA78sz0hvSqT9JMmNnZJSc6oOiuB
qqutqzl/KgfVAoIBAQDoZWD/kEpompSmg3beVz+WhJKC39mdtvZrtDO7HpIOezUN
E+pp4aPh6Zu/6/TbuM8R9tkfLRnH+tad/xNDhFrvuJ4bI+IAnI51twY54nck0WQ0
T367jMTQAHFlSc42rEaCCGOxH7Q3IDT0wJT5QdWeMmYF3QPUMC+1Lb/i11jS/opT
BU9/4b/nabpSccz5gn4tGYSx11TImbx+bjqyx3rEYOIskK4gNQHzF6RO2cSfNA5D
kUaB1/C+kUpmC5r0zhiQZqPKolIyPd33mv23/+38GLnOo1+tXMQ3rWoWTEgWfEXb
nIlGnwUeneF/ia3KPn5urYzoy5DtOddEZg3OInnhAoIBAGrVZ9v2PvMi5mFtGirg
TSzXoNPpLBKc6D6dRX4TlgtHzNSxgf0c6sGFmHuvD+tJ2kbfGAfv31eTotnnAXzs
y6k8LHuXWhqEhD84gSLY7CDBQ3ijDpSFiisjXYMRWa1S8udoGrZiSMtW5nxJB3pr
8Do8KIbee4JIgsG/2qet6ZiV4tU9bA6PmL0qrkdTVTLMBWRcS7FntFFT41Zin5UY
kPYt8tldqrgicrGCCc1afY7TtHbnHfMPXfeiq9kgrD2ze3ESJ0IfyAIIiJMIC4v3
QRInfPSKHnh8Ks7PEGAQ8OY0zwbvPKFJElsHYYDIG2xfSCDdN5ltUqZ15G/wrhQ/
C70CggEAHKhqoWElJNa3Ba4UscXKWL28cXRkMLdZGRngU5W9GLUQhDVYHdy+x5jU
5V4OnhCFo4Vq8uc2HsKnknhu/KGJ2gf3g8ASkILCG6aqB+0xZ+N6/dW0Yfft7vV4
az9azn2nEK6Pqiokm0ggc+UhZ4C6EKWY3Vefs0scxKBIx48aGDP0I/XwFrZpwdWC
Z/jlCjTZlJ+5G7VenkqWtIlJmXZ6zrRFkPKlmxSTKIrDTJaD0dcNmDrwe+au0x+y
YHMSo0gMN9W5pFN6LDc/JYXOkb995mkKXyzeRTFy+v2yFig6rSwBStwcSTsuNWAe
FOWrzZPSFGNqLJEHjZdIBAaDR6ER7A==
-----END PRIVATE KEY-----


@@ -1,57 +1,165 @@
- block:
- name: Install docker
ansible.builtin.apt:
name:
- ca-certificates
- curl
- telnet
- net-tools
- python3-pip
- python3-dev
state: present
update_cache: true
- name: Create apt keyrings directory
ansible.builtin.shell:
install -m 0755 -d /etc/apt/keyrings
- name: Setup docker
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Facts
ansible.builtin.setup:
- name: Download Docker apt key
ansible.builtin.shell:
curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
- name: Print arch
ansible.builtin.debug:
msg: "{{ ansible_architecture }}"
- name: Install docker dependencies
ansible.builtin.apt:
name:
- ca-certificates
- curl
- telnet
- net-tools
- python3-pip
- python3-dev
state: present
update_cache: true
- name: Create apt keyrings directory
ansible.builtin.command:
install -m 0755 -d /etc/apt/keyrings
- name: Make Docker apt key world-readable
ansible.builtin.shell:
chmod a+r /etc/apt/keyrings/docker.asc
- name: Add Docker apt repository
ansible.builtin.shell: echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# - name: Add an Apt signing key to a specific keyring file
# ansible.builtin.apt_key:
# url: https://download.docker.com/linux/debian/gpg
# keyring: /etc/apt/keyrings/docker.asc
# when:
# - ansible_distribution == "Debian" and ansible_distribution_major_version == "12"
- name: Install docker
ansible.builtin.apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-buildx-plugin
- docker-compose-plugin
update_cache: true
# - name: Get keys for raspotify
# ansible.builtin.shell:
# curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
# when:
# - ansible_distribution == "Debian" and ansible_distribution_major_version == "12"
- name: Create a directory docker.service.d
ansible.builtin.file:
path: /etc/systemd/system/docker.service.d/
state: directory
mode: '0755'
- name: Download Docker apt key (Raspbian)
ansible.builtin.shell:
curl -fsSL https://download.docker.com/linux/raspbian/gpg -o /etc/apt/keyrings/docker.asc
when:
- ansible_distribution == "Debian" and ansible_distribution_major_version == "12"
- name: Creating a file with content
copy:
dest: "/etc/systemd/system/docker.service.d/override.conf"
content: |
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375
notify: restart_docker
- name: Add an Apt signing key to a specific keyring file
ansible.builtin.apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
keyring: /etc/apt/keyrings/docker.asc
when:
- ansible_distribution == "Ubuntu"
- name: Just force systemd to reread configs
ansible.builtin.systemd:
daemon_reload: true
# - name: Get keys for raspotify
# ansible.builtin.shell:
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
# when:
# - ansible_distribution == "Ubuntu"
- name: Change file ownership, group and permissions
ansible.builtin.file:
path: /etc/apt/keyrings/docker.asc
owner: root
group: root
mode: '0644'
become: true
# - name: Get keys for raspotify
# ansible.builtin.shell:
# chmod a+r /etc/apt/keyrings/docker.asc
- name: Add Docker apt repository (Debian)
ansible.builtin.shell: echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
when:
- ansible_distribution == "Debian" and ansible_distribution_major_version == "12"
- name: Add Docker apt repository (Ubuntu)
ansible.builtin.shell: echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
when:
- ansible_distribution == "Ubuntu"
- name: Install docker
ansible.builtin.apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-buildx-plugin
- docker-compose-plugin
update_cache: true
- name: Create a directory docker.service.d
ansible.builtin.file:
path: /etc/systemd/system/docker.service.d/
state: directory
mode: '0755'
- name: Create a directory for certs
ansible.builtin.file:
path: /etc/docker/certs
state: directory
mode: '0700'
owner: root
group: root
- name: Copy files
ansible.builtin.copy:
src: server-key.pem
dest: /etc/docker/certs/
mode: '0600'
owner: root
group: root
- name: Copy files
ansible.builtin.copy:
src: ca.pem
dest: /etc/docker/certs/
mode: '0600'
owner: root
group: root
- name: Copy files
ansible.builtin.copy:
src: server-cert.pem
dest: /etc/docker/certs/
mode: '0600'
owner: root
group: root
- name: Creating a file with content
ansible.builtin.copy:
dest: "/etc/systemd/system/docker.service.d/override.conf"
content: |
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --tlsverify --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/server-cert.pem --tlskey=/etc/docker/certs/server-key.pem -H=0.0.0.0:2376
mode: '0600'
owner: root
group: root
notify: restart_docker
when: mode == "cert"
# - name: Creating a file with content
# ansible.builtin.copy:
# dest: "/etc/systemd/system/docker.service.d/override.conf"
# content: |
# [Service]
# ExecStart=
# ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --tlsverify \
# --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/server-cert.pem \
# --tlskey=/etc/docker/certs/server-key.pem -H=0.0.0.0:2376
# mode: '0600'
# owner: root
# group: root
# notify: restart_docker
# when: mode != "nocert"
- name: Just force systemd to reread configs
ansible.builtin.systemd:
daemon_reload: true
- name: Restart docker service
ansible.builtin.service:
name: docker
state: restarted
# - name: Get keys for raspotify
# ansible.builtin.shell: docker plugin install grafana/loki-docker-driver:3.3.2-{{ ansible_architecture }} --alias loki --grant-all-permissions
- name: Install a plugin
community.docker.docker_plugin:
plugin_name: grafana/loki-docker-driver:3.3.2
alias: loki
state: present

roles/fail2ban/files/action.d/banan.conf Normal file → Executable file

roles/fail2ban/files/filter.d/bad-auth.conf Normal file → Executable file

roles/fail2ban/files/filter.d/nextcloud.conf Normal file → Executable file

roles/fail2ban/files/filter.d/sshd.conf Normal file → Executable file

roles/fail2ban/files/jail.d/bad-auth.conf Normal file → Executable file

roles/fail2ban/files/jail.d/nextcloud.conf Normal file → Executable file

roles/fail2ban/files/jail.d/sshd.conf Normal file → Executable file


@@ -1,41 +1,51 @@
- block:
- name: Install fail2ban packages
ansible.builtin.apt:
name:
- fail2ban
- sendmail
#add line to /etc/hosts
#127.0.0.1 m-server localhost....
- name: Copy files
copy:
src: "{{ item }}"
dest: /etc/fail2ban/jail.d/
with_fileglob:
- "jail.d/*.conf"
- name: Setup Fail2ban
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Install fail2ban packages
ansible.builtin.apt:
name:
- fail2ban
- sendmail
# add line to /etc/hosts
# 127.0.0.1 m-server localhost....
- name: Copy files
ansible.builtin.copy:
src: "{{ item }}"
dest: /etc/fail2ban/jail.d/
mode: '0700'
owner: root
group: root
with_fileglob:
- "jail.d/*.conf"
- name: Copy files
copy:
src: "{{ item }}"
dest: /etc/fail2ban/filter.d/
with_fileglob:
- "filter.d/*.conf"
- name: Copy files
ansible.builtin.copy:
src: "{{ item }}"
dest: /etc/fail2ban/filter.d/
mode: '0700'
owner: root
group: root
with_fileglob:
- "filter.d/*.conf"
- name: Copy files
copy:
src: "{{ item }}"
dest: /etc/fail2ban/action.d/
with_fileglob:
- "action.d/*.conf"
- name: Copy files
ansible.builtin.copy:
src: "{{ item }}"
dest: /etc/fail2ban/action.d/
mode: '0700'
owner: root
group: root
with_fileglob:
- "action.d/*.conf"
- name: disable sendmail service
ansible.builtin.service:
name: sendmail.service
state: stopped
enabled: false
- name: Disable sendmail service
ansible.builtin.service:
name: sendmail.service
state: stopped
enabled: false
- name: Restart fail2ban service
ansible.builtin.service:
name: fail2ban.service
state: restarted
enabled: true
become: true
- name: Restart fail2ban service
ansible.builtin.service:
name: fail2ban.service
state: restarted
enabled: true

roles/hosts/files/hosts Normal file

roles/hosts/tasks/main.yml Executable file

@@ -0,0 +1,28 @@
- name: Hosts
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Reconfigure hosts file
ansible.builtin.lineinfile:
path: "/etc/hosts"
regexp: "^192.168.77.101 .*"
line: "192.168.77.101 m-server m-server.home.lan"
- name: Reconfigure hosts file
ansible.builtin.lineinfile:
path: "/etc/hosts"
regexp: "^192.168.77.106 .*"
line: "192.168.77.106 nas nas.home.lan"
- name: Reconfigure hosts file
ansible.builtin.lineinfile:
path: "/etc/hosts"
regexp: "^192.168.77.238 .*"
line: "192.168.77.238 rpi5 rpi5.home.lan"
- name: Reconfigure hosts file
ansible.builtin.lineinfile:
path: "/etc/hosts"
regexp: "^192.168.77.4 .*"
line: "192.168.77.4 amd amd.home.lan"
- name: Reconfigure hosts file
ansible.builtin.lineinfile:
path: "/etc/hosts"
regexp: "^192.168.77.55 .*"
line: "192.168.77.55 rack rack.home.lan"


@@ -1,10 +1,12 @@
- block:
- name: Reconfigure config
ansible.builtin.lineinfile:
path: /etc/sysctl.conf
regexp: "^Unet.ipv4.igmp_max_memberships.*"
line: "net.ipv4.igmp_max_memberships = 80"
- name: Apply igmp_max_memberships at runtime
ansible.builtin.shell: echo 80 > /proc/sys/net/ipv4/igmp_max_memberships
notify: restart_matter_server
become: true
- name: Setup matter server
become: "{{ 'no' if inventory_hostname == 'nas.home.lan' else 'yes' }}"
block:
- name: Reconfigure config
ansible.builtin.lineinfile:
path: /etc/sysctl.conf
regexp: "^net.ipv4.igmp_max_memberships.*"
line: "net.ipv4.igmp_max_memberships = 80"
- name: Apply igmp_max_memberships at runtime
ansible.builtin.shell: echo 80 > /proc/sys/net/ipv4/igmp_max_memberships
notify: restart_matter_server
register: my_output
changed_when: my_output.rc != 0


@@ -2,7 +2,7 @@
ansible.builtin.set_fact:
zabbix_agent_cfg: "/etc/zabbix/zabbix_agent2.conf"
when: inventory_hostname != 'nas.home.lan'
- name: Get config for nas
ansible.builtin.set_fact:
zabbix_agent_cfg: "/opt/ZabbixAgent/etc/zabbix_agentd.conf"
@@ -29,42 +29,43 @@
become: true
- name: Install Zabbix release package (Raspbian)
ansible.builtin.apt:
#deb: https://repo.zabbix.com/zabbix/6.4/raspbian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian11_all.deb
# deb: https://repo.zabbix.com/zabbix/6.4/raspbian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian11_all.deb
deb: https://repo.zabbix.com/zabbix/7.0/raspbian/pool/main/z/zabbix-release/zabbix-release_7.0-1+debian11_all.deb
retries: 5
delay: 5
when:
- ansible_facts.architecture == "armv7l" or ansible_facts.architecture == "aarch64"
become: true
ignore_errors: true
failed_when: my_output.rc != 0
- name: Install Zabbix release package (Debian 11)
ansible.builtin.apt:
deb: https://repo.zabbix.com/zabbix/6.4/debian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian11_all.deb
become: true
become: true
when:
- ansible_facts.architecture != "armv7l" and ansible_distribution == "Debian" and ansible_distribution_major_version == "11"
- name: Install Zabbix release package (Debian 12)
ansible.builtin.apt:
#deb: https://repo.zabbix.com/zabbix/6.4/debian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian12_all.deb
# deb: https://repo.zabbix.com/zabbix/6.4/debian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian12_all.deb
deb: https://repo.zabbix.com/zabbix/7.2/debian/pool/main/z/zabbix-release/zabbix-release_7.2-1+debian12_all.deb
when:
- ansible_facts.architecture != "armv7l" and ansible_facts.architecture != "aarch64" and ansible_distribution == "Debian" and ansible_distribution_major_version == "12"
ignore_errors: true
- ansible_facts.architecture != "armv7l"
- ansible_facts.architecture != "aarch64"
- ansible_distribution == "Debian"
- ansible_distribution_major_version == "12"
failed_when: my_output.rc != 0
become: true
# - name: Install a .deb package localy
# ansible.builtin.apt:
# deb: /tmp/zabbix-release_6.4-1+ubuntu22.04_all.deb
- name: Install zabbix packages
ansible.builtin.apt:
name:
name:
- zabbix-agent2
- zabbix-agent2-plugin-mongodb
- zabbix-agent2-plugin-postgresql
update_cache: yes
update_cache: false
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
ignore_errors: true
failed_when: my_output.rc != 0
when: inventory_hostname != 'nas.home.lan'
- name: Reconfigure zabbix agent Server
@@ -99,14 +100,14 @@
regexp: "^Hostname=.*"
line: "Hostname={{ inventory_hostname }}"
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
- name: Reconfigure zabbix-agent2 config
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
insertafter: '^# UserParameter='
regexp: "^UserParameter=system.certs.*"
line: "UserParameter=system.certs,python3 /share/ZFS530_DATA/.qpkg/ZabbixAgent/cert_check2.py"
when: inventory_hostname == 'nas.home.lan'
when: inventory_hostname == 'nas.home.lan'
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
- name: Reconfigure zabbix-agent2 config
@@ -115,7 +116,7 @@
insertafter: '^# UserParameter='
regexp: "^UserParameter=system.certs.*"
line: "UserParameter=system.certs,python3 /usr/bin/cert_check2.py"
when: inventory_hostname == 'm-server.home.lan'
when: inventory_hostname == 'm-server.home.lan'
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
- name: Reconfigure zabbix-agent2 config
@@ -140,14 +141,14 @@
regexp: "^HostMetadata=.*"
insertafter: '^# HostMetadata='
line: "HostMetadata=server;jaydee"
when: inventory_hostname == 'nas.home.lan' or inventory_hostname == 'm-server.home.lan'
when: inventory_hostname == 'nas.home.lan' or inventory_hostname == 'm-server.home.lan'
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
- name: Add the user to group video
ansible.builtin.user:
name: zabbix
groups: video
append: yes
append: true
when: inventory_hostname != 'nas.home.lan'
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
@@ -160,6 +161,8 @@
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
- name: Restart agent
ansible.builtin.shell: /etc/init.d/ZabbixAgent.sh restart
ansible.builtin.command: /etc/init.d/ZabbixAgent.sh restart
when: inventory_hostname == 'nas.home.lan'
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
changed_when: my_output.rc != 0
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"


@@ -1,116 +1,146 @@
- block:
- name: include vault
ansible.builtin.include_vars:
file: jaydee.yml
- name: Delete content & directory
ansible.builtin.file:
state: absent
path: "{{ dest_folder }}"
- name: GIT pull
tags:
- git_pull
git:
repo: "https://{{ git_user | urlencode }}:{{ git_password_mqtt | urlencode }}@gitlab.sectorq.eu/jaydee/mqtt_srv.git"
dest: "{{ dest_folder }}"
update: yes
clone: yes
version: main
- debug:
msg: "{{ inventory_hostname }}"
- name: Setup mqtt_srv
become: "{{ 'no' if inventory_hostname == 'nas.home.lan' else 'yes' }}"
block:
- name: Include vault
ansible.builtin.include_vars:
file: jaydee.yml
- name: Delete content & directory
ansible.builtin.file:
state: absent
path: "{{ dest_folder }}"
- name: GIT pull
tags:
- git_pull
ansible.builtin.git:
repo: "https://{{ git_user | urlencode }}:{{ git_password_mqtt | urlencode }}@gitlab.sectorq.eu/jaydee/mqtt_srv.git"
dest: "{{ dest_folder }}"
update: true
clone: true
version: main
- name: Print message
ansible.builtin.debug:
msg: "{{ inventory_hostname }}"
- name: Create dir
ansible.builtin.file:
path: /etc/mqtt_srv/
state: directory
mode: '0755'
owner: root
group: root
- name: Create dir
ansible.builtin.file:
path: /myapps/mqtt_srv/
recurse: true
state: directory
mode: '0755'
owner: root
group: root
- name: Upload service config
ansible.builtin.copy:
src: "{{ dest_folder }}/mqtt_srv.service"
dest: /etc/systemd/system/mqtt_srv.service
remote_src: true
mode: '0755'
owner: root
group: root
when: inventory_hostname != 'nas.home.lan'
- name: Upload service config
ansible.builtin.copy:
src: "{{ dest_folder }}/mqtt_srv.service"
dest: /etc/systemd/system/mqtt_srv.service
remote_src: true
when: inventory_hostname != 'nas.home.lan'
- name: Upload service script
ansible.builtin.copy:
src: "{{ dest_folder }}/mqtt_srv.py"
dest: /usr/bin/mqtt_srv.py
mode: '755'
owner: root
remote_src: true
when: inventory_hostname != 'nas.home.lan'
- name: Upload service script config
ansible.builtin.copy:
src: "{{ dest_folder }}/mqtt_srv.cfg"
dest: /etc/mqtt_srv/mqtt_srv.cfg
mode: '755'
owner: root
remote_src: true
when: inventory_hostname != 'nas.home.lan'
# - name: Upload service script1
# ansible.builtin.copy:
# src: scripts/mqtt_srv.sh
# dest: /jffs/scripts/mqtt_srv/
# mode: '755'
# owner: admin
# when: inventory_hostname in groups['router']
# become: false
- name: Upload service script
ansible.builtin.copy:
src: "{{ dest_folder }}/mqtt_srv.py"
dest: /myapps/mqtt_srv/mqtt_srv.py
mode: '0755'
owner: root
group: root
remote_src: true
when: inventory_hostname != 'nas.home.lan'
- name: Upload service req
ansible.builtin.copy:
src: "{{ dest_folder }}/requirements.txt"
dest: /myapps/mqtt_srv/requirements.txt
mode: '0755'
owner: root
group: root
remote_src: true
when: inventory_hostname != 'nas.home.lan'
# - name: Upload service script
# ansible.builtin.copy:
# src: scripts/mqtt_srv.py
# dest: /jffs/scripts/mqtt_srv/
# mode: '755'
# owner: admin
# when: inventory_hostname in groups['router']
# become: false
- name: Upload service script config
ansible.builtin.copy:
src: "{{ dest_folder }}/mqtt_srv.cfg"
dest: /etc/mqtt_srv/mqtt_srv.cfg
mode: '755'
owner: root
remote_src: true
when: inventory_hostname != 'nas.home.lan'
- name: Upload service script1
ansible.builtin.copy:
src: "{{ dest_folder }}/mqtt_srv.sh"
dest: /etc/init.d/
mode: '755'
owner: admin
remote_src: true
when: inventory_hostname == 'nas.home.lan'
- debug:
msg: "{{ dest_folder }}"
- name: Upload service script2
ansible.builtin.copy:
src: "{{ dest_folder }}/mqtt_srv.py"
dest: /usr/bin/mqtt_srv.py
mode: '755'
owner: admin
remote_src: true
when: inventory_hostname == 'nas.home.lan'
- name: Install python packages via pip
ansible.builtin.shell: pip install {{ item }} --break-system-packages
loop:
- paho-mqtt
- getmac
- ping3
- psutil
- autorandr
when: inventory_hostname != 'nas.home.lan'
- name: Just force systemd to reread configs (2.4 and above)
ansible.builtin.systemd:
daemon_reload: true
when: inventory_hostname != 'nas.home.lan'
# - name: Upload service script1
# ansible.builtin.copy:
# src: scripts/mqtt_srv.sh
# dest: /jffs/scripts/mqtt_srv/
# mode: '755'
# owner: admin
# when: inventory_hostname in groups['router']
# become: false
# - name: Upload service script
# ansible.builtin.copy:
# src: scripts/mqtt_srv.py
# dest: /jffs/scripts/mqtt_srv/
# mode: '755'
# owner: admin
# when: inventory_hostname in groups['router']
# become: false
- name: Restart mqtt_srv service
ansible.builtin.service:
name: mqtt_srv.service
state: restarted
enabled: true
when: inventory_hostname != 'nas.home.lan'
# - name: Upload service script1
# ansible.builtin.copy:
# src: "{{ dest_folder }}/mqtt_srv.sh"
# dest: /etc/init.d/
# mode: '755'
# owner: admin
# remote_src: true
# when: inventory_hostname == 'nas.home.lan'
- name: Restart mqtt service
ansible.builtin.shell: "(/etc/init.d/mqtt_srv.sh restart >/dev/null 2>&1 &)"
async: 10
poll: 0
when: inventory_hostname == 'nas.home.lan'
- name: Print message
ansible.builtin.debug:
msg: "{{ dest_folder }}"
become: "{{ 'no' if inventory_hostname == 'nas.home.lan' else 'yes' }}"
- name: Upload service script2
ansible.builtin.copy:
src: "{{ dest_folder }}/mqtt_srv.py"
dest: /myapps/mqtt_srv/mqtt_srv.py
mode: '755'
owner: admin
remote_src: true
when: inventory_hostname == 'nas.home.lan'
- name: Install venv
ansible.builtin.apt:
name:
- python3-virtualenv
- name: Install python requirements into the virtualenv
ansible.builtin.pip:
requirements: /myapps/mqtt_srv/requirements.txt
virtualenv: /myapps/mqtt_srv/venv
when: inventory_hostname != 'nas.home.lan'
- name: Just force systemd to reread configs (2.4 and above)
ansible.builtin.systemd:
daemon_reload: true
when: inventory_hostname != 'nas.home.lan'
- name: Restart mqtt_srv service
ansible.builtin.service:
name: mqtt_srv.service
state: restarted
enabled: true
when: inventory_hostname != 'nas.home.lan'
- name: Restart mqtt service
ansible.builtin.shell: "(/etc/init.d/mqtt_srv.sh restart >/dev/null 2>&1 &)"
async: 10
poll: 0
when: inventory_hostname == 'nas.home.lan'
changed_when: true


@@ -0,0 +1,8 @@
[Unit]
Description=Enable OMV backup
[Service]
ExecStart=/myapps/omv_backup.py -b
[Install]
WantedBy=basic.target


@@ -1,106 +1,91 @@
- block:
- name: include vault
ansible.builtin.include_vars:
file: jaydee.yml
- name: Delete content & directory
ansible.builtin.file:
state: absent
path: "{{ dest_folder }}"
- name: GIT pull
tags:
- git_pull
git:
repo: "https://{{ git_user | urlencode }}:{{ git_password_mqtt | urlencode }}@gitlab.sectorq.eu/jaydee/omv_backup.git"
dest: "{{ dest_folder }}"
update: yes
clone: yes
version: main
- debug:
msg: "{{ inventory_hostname }}"
- name: Create a directory if it does not exist
ansible.builtin.file:
path: /myapps
state: directory
mode: '0755'
owner: root
group: root
- name: Omv Setup
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Include vault
ansible.builtin.include_vars:
file: jaydee.yml
name: mysecrets
when: inventory_hostname != 'nas.home.lan'
- name: Delete content & directory
ansible.builtin.file:
state: absent
path: "{{ dest_folder }}"
- name: Pull repo
tags:
- git_pull
ansible.builtin.git:
repo: "https://{{ mysecrets['git_user'] | urlencode }}:{{ mysecrets['git_password_mqtt'] | urlencode }}@gitlab.sectorq.eu/jaydee/omv_backup.git"
dest: "{{ dest_folder }}"
update: true
clone: true
version: main
when: inventory_hostname != 'nas.home.lan'
- name: Print
ansible.builtin.debug:
msg: "{{ inventory_hostname }}"
- name: Create a directory if it does not exist
ansible.builtin.file:
path: /myapps
state: directory
mode: '0755'
owner: root
group: root
when: inventory_hostname != 'nas.home.lan'
- name: Upload script
ansible.builtin.copy:
src: "{{ dest_folder }}/omv_backup.py"
dest: /myapps/omv_backup.py
remote_src: true
mode: '0755'
owner: root
group: root
when: inventory_hostname != 'nas.home.lan'
- name: Upload script
ansible.builtin.copy:
src: "{{ dest_folder }}/omv_backup_v2.py"
dest: /myapps/omv_backup_v2.py
remote_src: true
mode: '0755'
owner: root
group: root
when: inventory_hostname != 'nas.home.lan'
- name: Upload script
ansible.builtin.copy:
src: "{{ dest_folder }}/docker_backups.py"
dest: /myapps/docker_backups.py
remote_src: true
mode: '0755'
owner: root
group: root
when: inventory_hostname != 'nas.home.lan'
- name: Upload requirements
ansible.builtin.copy:
src: "{{ dest_folder }}/requirements.txt"
dest: /myapps/requirements.txt
remote_src: true
when: inventory_hostname != 'nas.home.lan'
- name: Upload script
ansible.builtin.copy:
src: "{{ dest_folder }}/omv_backup.py"
dest: /myapps/omv_backup.py
remote_src: true
mode: '0755'
owner: root
group: root
when: inventory_hostname != 'nas.home.lan'
- name: Install venv
ansible.builtin.apt:
name:
- python3-virtualenv
- name: Install python requirements into the virtualenv
ansible.builtin.pip:
requirements: /myapps/requirements.txt
virtualenv: /myapps/venv
- name: Upload requirements
ansible.builtin.copy:
src: "{{ dest_folder }}/requirements.txt"
dest: /myapps/requirements.txt
remote_src: true
mode: '0755'
owner: root
group: root
when: inventory_hostname != 'nas.home.lan'
- name: 'Ensure an old job is no longer present. Removes any job that is prefixed by "#Ansible: an old job" from the crontab'
ansible.builtin.cron:
name: "omv_backup"
state: absent
- name: Install venv
ansible.builtin.apt:
name:
- python3-virtualenv
# - name: Ensure a job that runs at 2 and 5 exists. Creates an entry like "0 5,2 * * ls -alh > /dev/null"
# ansible.builtin.cron:
# name: "omv_backup"
# minute: "0"
# hour: "8"
# job: "/myapps/venv/bin/python3 /myapps/omv_backup.py -b > /dev/null 2>&1 &"
- name: Creating config
ansible.builtin.copy:
dest: "/etc/systemd/system/omv_backup.service"
content: |
[Unit]
Description=Enable OMV backup
[Service]
ExecStart=/myapps/venv/bin/python3 /myapps/omv_backup_v2.py -b
[Install]
WantedBy=basic.target
owner: root
mode: '0744'
- name: Install python requirements into the virtualenv
ansible.builtin.pip:
requirements: /myapps/requirements.txt
virtualenv: /myapps/venv
- name: Restart service omv_backup, in all cases
ansible.builtin.service:
name: omv_backup
state: restarted
enabled: true
# async:
# poll: 0
# ignore_errors: true
become: true
- name: 'Ensure an old job is no longer present. Removes any job that is prefixed by "#Ansible: an old job" from the crontab'
ansible.builtin.cron:
name: "omv_backup"
state: absent
- name: Upload service config
ansible.builtin.copy:
src: omv_backup.service
dest: /etc/systemd/system/omv_backup.service
mode: '0755'
owner: root
group: root
when: inventory_hostname == 'amd.home.lan'
- name: Restart omv service
ansible.builtin.service:
name: omv_backup
state: restarted
daemon_reload: true
enabled: true
when: inventory_hostname == 'amd.home.lan'
# - name: Ensure a job that runs at 2 and 5 exists. Creates an entry like "0 5,2 * * ls -alh > /dev/null"
# ansible.builtin.cron:
# name: "omv_backup"
# minute: "0"
# hour: "8"
# job: "sudo /myapps/omv_backup.py -b > /dev/null 2>&1 &"
# state: present

@@ -1,12 +1,15 @@
- block:
- name: Creating a file with content
copy:
dest: "/etc/polkit-1/rules.d/50_disable_pol.rules"
content: |
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.NetworkManager.wifi.scan") {
return polkit.Result.YES;
}
});
- name: Setup policies
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Creating a file with content
ansible.builtin.copy:
dest: "/etc/polkit-1/rules.d/50_disable_pol.rules"
content: |
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.NetworkManager.wifi.scan") {
return polkit.Result.YES;
}
});
mode: '0644'
owner: root
group: root

76
roles/promtail/tasks/main.yml Executable file
@@ -0,0 +1,76 @@
---
- name: Promtail
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Create dir
ansible.builtin.file:
path: /etc/apt/keyrings/
state: directory
owner: root
group: root
- name: Add Grafana GPG key
ansible.builtin.shell: wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor > /etc/apt/keyrings/grafana.gpg
register: my_output
changed_when: my_output.rc != 0
# - name: Fetch file that requires authentication.
# username/password only available since 2.8, in older versions you need to use url_username/url_password
# ansible.builtin.get_url:
# url: https://apt.grafana.com/gpg.key
# dest: /etc/foo.conf
# username: bar
# password: '{{ mysecret }}'
# changed_when: my_output.rc != 0
- name: Add Grafana apt repository
ansible.builtin.shell: echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | tee /etc/apt/sources.list.d/grafana.list
register: my_output
changed_when: my_output.rc != 0
- name: Install packages
ansible.builtin.apt:
name:
- promtail
update_cache: true
- name: Creating a file with content
ansible.builtin.copy:
dest: "/etc/promtail/config.yml"
owner: root
group: root
mode: '0644'
content: |
# This minimal config scrapes only a single log file.
# Primarily used in rpm/deb packaging, where the promtail service can be started during the system init process,
# and too much scraping during init can overload the whole system.
# https://github.com/grafana/loki/issues/11398
server:
http_listen_port: 9080
grpc_listen_port: 0
positions:
filename: /tmp/positions.yaml
clients:
- url: http://192.168.77.101:3100/loki/api/v1/push
external_labels:
nodename: {{ inventory_hostname }}
scrape_configs:
- job_name: system
static_configs:
- targets:
- localhost
labels:
job: varlogs1
#NOTE: Need to be modified to scrape any additional logs of the system.
__path__: /var/log/zabbix/*.log
- targets:
- localhost
labels:
job: omv_backup
__path__: /myapps/omv_backup.log
- name: Restart promtail
ansible.builtin.service:
name: promtail
state: restarted

@@ -1,5 +1,5 @@
---
# requirements.yml
collections:
- name: community.general
source: https://galaxy.ansible.com
- name: community.docker

@@ -1,5 +0,0 @@
- name: restart_docker
ansible.builtin.service:
name: docker.service
state: restarted
become: true

@@ -1,57 +0,0 @@
- block:
- name: Install docker
ansible.builtin.apt:
name:
- ca-certificates
- curl
- telnet
- net-tools
- python3-pip
- python3-dev
state: present
update_cache: true
- name: Get keys for raspotify
ansible.builtin.shell:
install -m 0755 -d /etc/apt/keyrings
- name: Get keys for raspotify
ansible.builtin.shell:
curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
- name: Get keys for raspotify
ansible.builtin.shell:
chmod a+r /etc/apt/keyrings/docker.asc
- name: Get keys for raspotify
ansible.builtin.shell: echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
- name: Install docker
ansible.builtin.apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-buildx-plugin
- docker-compose-plugin
update_cache: true
- name: Create a directory docker.service.d
ansible.builtin.file:
path: /etc/systemd/system/docker.service.d/
state: directory
mode: '0755'
- name: Creating a file with content
copy:
dest: "/etc/systemd/system/docker.service.d/override.conf"
content: |
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375
notify: restart_docker
- name: Just force systemd to reread configs
ansible.builtin.systemd:
daemon_reload: true
become: true

5
roles/setup/tasks/main.yml Executable file
@@ -0,0 +1,5 @@
- name: Setup
become: "{{ 'no' if inventory_hostname == 'nas.home.lan' else 'yes' }}"
block:
- name: Gather facts
ansible.builtin.setup:

@@ -1,38 +1,40 @@
- block:
- name: Install packages
ansible.builtin.apt:
name:
- figlet
- toilet
- name: Set banner
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Install packages
ansible.builtin.apt:
name:
- figlet
- toilet
- name: Create Banner
ansible.builtin.command: |
figlet -c {{ (inventory_hostname|split('.'))[0] }} -f slant
register: logo
- name: Create Banner
ansible.builtin.command: |
figlet -c {{ (inventory_hostname | split('.'))[0] }} -f slant
register: logo
changed_when: "logo.rc == 0"
- name: Creating a file with content
copy:
dest: "/etc/banner"
content: |
{{ logo.stdout }}
- name: Creating a file with content
ansible.builtin.copy:
dest: "/etc/motd"
content: |
{{ logo.stdout }}
owner: 0
group: 0
mode: "0777"
- name: Reconfigure sshd
ansible.builtin.lineinfile:
path: /etc/ssh/sshd_config
regexp: "^Banner.* "
line: "Banner /etc/banner"
- name: Reconfigure sshd
ansible.builtin.lineinfile:
path: /etc/ssh/sshd_config
regexp: "^Banner.* "
line: "#Banner /etc/banner"
- name: Reconfigure sshd
ansible.builtin.lineinfile:
path: /etc/ssh/sshd_config
regexp: "^#PrintLastLog.* "
line: "PrintLastLog no"
- name: Reconfigure sshd
ansible.builtin.lineinfile:
path: /etc/ssh/sshd_config
regexp: "^#PrintLastLog.* "
line: "PrintLastLog no"
- name: sshd
ansible.builtin.service:
name: ssh.service
state: restarted
become: true
- name: Sshd
ansible.builtin.service:
name: ssh.service
state: restarted

24
roles/ssh_config/files/config Executable file
@@ -0,0 +1,24 @@
Host m-server
HostName m-server.home.lan
Host rpi5
HostName rpi5.home.lan
Host rack
HostName rack.home.lan
Host amd
HostName amd.home.lan
Host nas
HostName nas.home.lan
User admin
Host router
HostName router.home.lan
User root
Host *
User jd
IdentityFile ~/.ssh/id_rsa
StrictHostKeyChecking no

19
roles/ssh_config/tasks/main.yml Executable file
@@ -0,0 +1,19 @@
- name: SSH config Setup
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Upload config
ansible.builtin.copy:
src: config
dest: /home/jd/.ssh/config
mode: '0600'
owner: jd
group: jd
when: inventory_hostname != 'nas.home.lan'
- name: Upload config
ansible.builtin.copy:
src: config
dest: /root/.ssh/config
mode: '0600'
owner: root
group: root
when: inventory_hostname != 'nas.home.lan'

1
roles/ssh_config/vars/main.yml Executable file
@@ -0,0 +1 @@
dest_folder: "/tmp/ans_repo"

24
roles/ssh_keys/tasks/main.yml Executable file
@@ -0,0 +1,24 @@
- name: SSH keys deploy
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Upload key
ansible.builtin.copy:
src: id_rsa
dest: /home/jd/.ssh/id_rsa
mode: '0600'
owner: jd
group: jd
when: inventory_hostname != 'nas.home.lan'
- name: Upload key
ansible.builtin.copy:
src: id_rsa
dest: /home/jd/.ssh/id_rsa.pub
mode: '0600'
owner: jd
group: jd
when: inventory_hostname != 'nas.home.lan'
- name: Set authorized key taken from file
ansible.posix.authorized_key:
user: jd
state: present
key: "{{ lookup('file', '/home/jd/.ssh/id_rsa.pub') }}"

@@ -0,0 +1,14 @@
- name: SSHD config Setup
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Reconfigure sshd
ansible.builtin.replace:
path: /etc/ssh/sshd_config
regexp: "^PermitRootLogin"
replace: "#PermitRootLogin"
- name: Restart ssh service
ansible.builtin.service:
name: ssh
state: restarted
daemon_reload: true
enabled: true

@@ -0,0 +1 @@
dest_folder: "/tmp/ans_repo"

12
roles/sudoers/tasks/main.yml Executable file
@@ -0,0 +1,12 @@
- name: Set sudoers
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Allow the backup jd to sudo /myapps/omv_backup.py
community.general.sudoers:
name: allow-backup
state: present
user: jd
commands:
- /myapps/omv_backup.py *
- /usr/sbin/poweroff
- /usr/sbin/shutdown *

@@ -1,21 +1,29 @@
- block:
- name: Get keys
ansible.builtin.shell: curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
- name: Add repo
ansible.builtin.shell: echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
- name: Update cache
ansible.builtin.apt:
update_cache: true
- name: Install wazuh
ansible.builtin.apt:
name: wazuh-agent
environment:
WAZUH_MANAGER: 'm-server.home.lan'
WAZUH_AGENT_NAME: "{{ inventory_hostname }}"
- name: Restart wazuh service
ansible.builtin.service:
name: wazuh-agent
state: restarted
enabled: true
- name: Setup wazuh agent
become: "{{ 'no' if inventory_hostname == 'nas.home.lan' else 'yes' }}"
block:
- name: Get keys
ansible.builtin.shell: |
curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
register: my_output
changed_when: my_output.rc != 0
become: true
- name: Add repo
ansible.builtin.shell: |
echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
register: my_output
changed_when: my_output.rc != 0
- name: Update cache
ansible.builtin.apt:
update_cache: true
- name: Install wazuh
ansible.builtin.apt:
name: wazuh-agent
environment:
WAZUH_MANAGER: 'm-server.home.lan'
WAZUH_AGENT_NAME: "{{ inventory_hostname }}"
- name: Restart wazuh service
ansible.builtin.service:
name: wazuh-agent
state: restarted
enabled: true

@@ -1,175 +1,181 @@
- block:
- name: Get config for not nas
ansible.builtin.set_fact:
zabbix_agent_cfg: "/etc/zabbix/zabbix_agent2.conf"
when: inventory_hostname != 'nas.home.lan'
- name: Get config for nas
ansible.builtin.set_fact:
zabbix_agent_cfg: "/opt/ZabbixAgent/etc/zabbix_agentd.conf"
when: inventory_hostname == 'nas.home.lan'
- name: Install zabbix agent
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
block:
- name: Get config for not nas
ansible.builtin.set_fact:
zabbix_agent_cfg: "/etc/zabbix/zabbix_agent2.conf"
when: inventory_hostname != 'nas.home.lan'
- name: Print all available facts
ansible.builtin.debug:
msg: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
- name: Get config for nas
ansible.builtin.set_fact:
zabbix_agent_cfg: "/opt/ZabbixAgent/etc/zabbix_agentd.conf"
when: inventory_hostname == 'nas.home.lan'
- name: Print all available facts
ansible.builtin.debug:
var: ansible_facts.architecture
- name: Print all available facts
ansible.builtin.debug:
var: ansible_distribution
- name: Print all available facts
ansible.builtin.debug:
var: ansible_distribution_major_version
# - name: Upload zabbix package
# ansible.builtin.copy:
# src: packages/zabbix-release_6.4-1+ubuntu22.04_all.deb
# dest: /tmp/
- name: Install a .deb package from the internet1
ansible.builtin.apt:
deb: https://repo.zabbix.com/zabbix/6.4/ubuntu-arm64/pool/main/z/zabbix-release/zabbix-release_6.4-1+ubuntu22.04_all.deb
when:
- ansible_facts.architecture != "armv7l" and ( ansible_distribution == "Ubuntu" or ansible_distribution == "Linux Mint" )
- name: Print all available facts
ansible.builtin.debug:
msg: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
- name: Install a .deb package from the internet2
ansible.builtin.apt:
#deb: https://repo.zabbix.com/zabbix/6.4/raspbian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian11_all.deb
deb: https://repo.zabbix.com/zabbix/7.0/raspbian/pool/main/z/zabbix-release/zabbix-release_7.0-1+debian11_all.deb
retries: 5
delay: 5
when:
- ansible_facts.architecture == "armv7l" or ansible_facts.architecture == "aarch64"
- name: Print all available facts
ansible.builtin.debug:
var: ansible_facts.architecture
- name: Print all available facts
ansible.builtin.debug:
var: ansible_distribution
- name: Print all available facts
ansible.builtin.debug:
var: ansible_distribution_major_version
# - name: Upload zabbix package
# ansible.builtin.copy:
# src: packages/zabbix-release_6.4-1+ubuntu22.04_all.deb
# dest: /tmp/
- name: Install a .deb package from the internet1
ansible.builtin.apt:
deb: https://repo.zabbix.com/zabbix/7.2/release/ubuntu/pool/main/z/zabbix-release/zabbix-release_latest_7.2+ubuntu24.04_all.deb
when:
- ansible_facts.architecture != "armv7l" and ( ansible_distribution == "Ubuntu1" or ansible_distribution == "Linux Mint" )
ignore_errors: true
- name: Install a .deb package from the internet2
ansible.builtin.apt:
# deb: https://repo.zabbix.com/zabbix/6.4/raspbian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian11_all.deb
deb: https://repo.zabbix.com/zabbix/7.2/release/raspbian/pool/main/z/zabbix-release/zabbix-release_latest_7.2+debian12_all.deb
retries: 5
delay: 5
when:
- ansible_facts.architecture == "armv7l" or ansible_facts.architecture == "aarch64"
register: command_result
failed_when: "'FAILED' in command_result.stderr"
- name: Install a .deb package from the internet3
ansible.builtin.apt:
deb: https://repo.zabbix.com/zabbix/6.4/debian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian11_all.deb
when:
- ansible_facts.architecture != "armv7l" and ansible_distribution == "Debian" and ansible_distribution_major_version == "11"
- name: Install a .deb package from the internet4
ansible.builtin.apt:
#deb: https://repo.zabbix.com/zabbix/6.4/debian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian12_all.deb
deb: https://repo.zabbix.com/zabbix/7.0/debian/pool/main/z/zabbix-release/zabbix-release_7.0-1+debian12_all.deb
when:
- ansible_facts.architecture != "armv7l" and ansible_facts.architecture != "aarch64" and ansible_distribution == "Debian" and ansible_distribution_major_version == "12"
ignore_errors: true
- name: Install a .deb package from the internet3
ansible.builtin.apt:
deb: https://repo.zabbix.com/zabbix/6.4/debian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian11_all.deb
when:
- ansible_facts.architecture != "armv7l" and ansible_distribution == "Debian" and ansible_distribution_major_version == "11"
# - name: Install a .deb package locally
# ansible.builtin.apt:
# deb: /tmp/zabbix-release_6.4-1+ubuntu22.04_all.deb
- name: Install zabbix packages
ansible.builtin.apt:
name:
- zabbix-agent2
- zabbix-agent2-plugin-mongodb
- zabbix-agent2-plugin-postgresql
# - zabbix-agent2-plugin-mysql
update_cache: yes
ignore_errors: true
when: inventory_hostname != 'nas.home.lan'
- name: Install a .deb package from the internet4
ansible.builtin.apt:
# deb: https://repo.zabbix.com/zabbix/6.4/debian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian12_all.deb
deb: https://repo.zabbix.com/zabbix/7.0/debian/pool/main/z/zabbix-release/zabbix-release_7.0-1+debian12_all.deb
when:
- ansible_facts.architecture != "armv7l"
- ansible_facts.architecture != "aarch64"
- ansible_distribution == "Debian"
- ansible_distribution_major_version == "12"
register: command_result
failed_when: "'FAILED' in command_result.stderr"
# - name: Install a .deb package locally
# ansible.builtin.apt:
# deb: /tmp/zabbix-release_6.4-1+ubuntu22.04_all.deb
- name: Install zabbix packages
ansible.builtin.apt:
name:
- zabbix-agent2
- zabbix-agent2-plugin-mongodb
- zabbix-agent2-plugin-postgresql
# - zabbix-agent2-plugin-mysql
update_cache: true
when: inventory_hostname != 'nas.home.lan'
- name: Install zabbix packages
ansible.builtin.apt:
name:
- zabbix-agent2
- zabbix-agent2-plugin-mongodb
- zabbix-agent2-plugin-postgresql
# - zabbix-agent2-plugin-mysql
only_upgrade: true
when: inventory_hostname != 'nas.home.lan'
- name: Reconfigure zabbix agent Server
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^Server=.*"
insertafter: '^# Server='
line: "Server=192.168.77.0/24,172.30.0.0/24"
- name: Reconfigure zabbix agent Server
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^Server=.*"
insertafter: '^# Server='
line: "Server=192.168.77.0/24,192.168.89.0/28"
- name: Reconfigure zabbix agent ServerActive
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^ServerActive=.*"
line: "ServerActive={{ ZABBIX_SERVER }}"
- name: Reconfigure zabbix agent ServerActive
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^ServerActive=.*"
line: "ServerActive={{ ZABBIX_SERVER }}"
- name: Reconfigure zabbix agent ListenPort
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^ListenPort=.*"
line: "ListenPort=10050"
# - name: Reconfigure zabbix agent ListenIP
# ansible.builtin.lineinfile:
# path: /"{{ zabbix_agent_cfg }}"
# regexp: "^ListenIP=.*"
# line: "ListenIP=0.0.0.0"
- name: Reconfigure zabbix agent ListenPort
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^ListenPort=.*"
line: "ListenPort=10050"
# - name: Reconfigure zabbix agent ListenIP
# ansible.builtin.lineinfile:
# path: /"{{ zabbix_agent_cfg }}"
# regexp: "^ListenIP=.*"
# line: "ListenIP=0.0.0.0"
- name: Reconfigure zabbix-agent2 hostname
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^Hostname=.*"
line: "Hostname={{ inventory_hostname }}"
- name: Reconfigure zabbix-agent2 hostname
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^Hostname=.*"
line: "Hostname={{ inventory_hostname }}"
- name: Reconfigure zabbix-agent2 config
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
insertafter: '^# UserParameter='
regexp: "^UserParameter=system.certs.*"
line: "UserParameter=system.certs,python3 /share/ZFS530_DATA/.qpkg/ZabbixAgent/cert_check2.py"
when: inventory_hostname == 'nas.home.lan'
- name: Reconfigure zabbix-agent2 config
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
insertafter: '^# UserParameter='
regexp: "^UserParameter=system.certs.*"
line: "UserParameter=system.certs,python3 /share/ZFS530_DATA/.qpkg/ZabbixAgent/cert_check2.py"
when: inventory_hostname == 'nas.home.lan'
- name: Reconfigure zabbix-agent2 config
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
insertafter: '^# UserParameter='
regexp: "^UserParameter=system.certs.*"
line: "UserParameter=system.certs,python3 /usr/bin/cert_check2.py"
when: inventory_hostname == 'm-server.home.lan'
- name: Reconfigure zabbix-agent2 config
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
insertafter: '^# UserParameter='
regexp: "^UserParameter=system.certs.*"
line: "UserParameter=system.certs,python3 /usr/bin/cert_check2.py"
when: inventory_hostname == 'm-server.home.lan'
- name: Reconfigure zabbix-agent2 config
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
insertafter: '^# UserParameter='
line: "UserParameter=rpi.hw.temp,/usr/bin/vcgencmd measure_temp"
when: inventory_hostname == 'rpi5.home.lan'
- name: Reconfigure zabbix-agent2 config
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
insertafter: '^# UserParameter='
line: "UserParameter=rpi.hw.temp,/usr/bin/vcgencmd measure_temp"
when: inventory_hostname == 'rpi5.home.lan'
- name: Reconfigure zabbix-agent2 hostname
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^HostMetadata=.*"
insertafter: '^# HostMetadata='
line: "HostMetadata=linux;jaydee"
- name: Reconfigure zabbix-agent2 hostname
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^HostMetadata=.*"
insertafter: '^# HostMetadata='
line: "HostMetadata=linux;jaydee"
- name: Reconfigure zabbix-agent2 hostname
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^HostMetadata=.*"
insertafter: '^# HostMetadata='
line: "HostMetadata=server;jaydee"
when: inventory_hostname == 'nas.home.lan' or inventory_hostname == 'm-server.home.lan'
- name: Add the user zabbix to group video
ansible.builtin.user:
name: zabbix
groups: video
append: true
when: inventory_hostname != 'nas.home.lan'
- name: Reconfigure zabbix-agent2 hostname
ansible.builtin.lineinfile:
path: "{{ zabbix_agent_cfg }}"
regexp: "^HostMetadata=.*"
insertafter: '^# HostMetadata='
line: "HostMetadata=server;jaydee"
when: inventory_hostname == 'nas.home.lan' or inventory_hostname == 'm-server.home.lan'
- name: Restart zabbix-agent2 service
ansible.builtin.service:
name: zabbix-agent2.service
state: restarted
enabled: true
when: inventory_hostname != 'nas.home.lan'
- name: Add the user zabbix to group video
ansible.builtin.user:
name: zabbix
groups: video
append: yes
when: inventory_hostname != 'nas.home.lan'
- name: Restart zabbix-agent2 service
ansible.builtin.service:
name: zabbix-agent2.service
state: restarted
enabled: true
when: inventory_hostname != 'nas.home.lan'
- name: Restart agent
ansible.builtin.shell: /etc/init.d/ZabbixAgent.sh restart
when: inventory_hostname == 'nas.home.lan'
become: "{{ false if inventory_hostname == 'nas.home.lan' else true }}"
- name: Restart agent
ansible.builtin.command: /etc/init.d/ZabbixAgent.sh restart
when: inventory_hostname == 'nas.home.lan'
changed_when: false

38
ssh_key.pem Normal file
@@ -0,0 +1,38 @@
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEAoz4u+IAB09hgWyllpplK8864SkDd5p89w01p9NioW4FrOjES5U65
9ny6geIZKFRDmrdvcADidsbmOGhIxnzup5f95Gt6KrJcMVvRqhQkV1R5xd2/lpcvo5J97W
4bfoIxSuMdJ6dVmW6UyP50e4BJVmu++Cwh8uYuH5uSEqAh80TLCnd3VReGDrXukAvVLuLk
EI6WVpLGEYwDbVwJORAxDhEc/g4fEQ5F2xmFtr8dYoTUBoed893Olum2oxqt6sf/hvdOTL
UYLrWx1jjmEhPKNqt72g5AjQOhY3dz+oB0z3EYxRH6B3PFcadL40fXuqJfF+w/hhC+oOkD
eueu+9maN/0JGtKyf8zJ094GdkuItAs5qg6HnG8wwl+8sMLsIL5ZiRjrMIMMDij/xNP0Z1
outwVBVf31PaOZ6/WV4JIkWvWQs7mx2YccjemexIXDlZzG731dVuRZ/724/OGLnQi5s65O
ar6UPqlbDF4tRjyBMhcMoPwH7uSU6TROfuFYhc0zAAAFiO2G2RLthtkSAAAAB3NzaC1yc2
EAAAGBAKM+LviAAdPYYFspZaaZSvPOuEpA3eafPcNNafTYqFuBazoxEuVOufZ8uoHiGShU
Q5q3b3AA4nbG5jhoSMZ87qeX/eRreiqyXDFb0aoUJFdUecXdv5aXL6OSfe1uG36CMUrjHS
enVZlulMj+dHuASVZrvvgsIfLmLh+bkhKgIfNEywp3d1UXhg617pAL1S7i5BCOllaSxhGM
A21cCTkQMQ4RHP4OHxEORdsZhba/HWKE1AaHnfPdzpbptqMarerH/4b3Tky1GC61sdY45h
ITyjare9oOQI0DoWN3c/qAdM9xGMUR+gdzxXGnS+NH17qiXxfsP4YQvqDpA3rnrvvZmjf9
CRrSsn/MydPeBnZLiLQLOaoOh5xvMMJfvLDC7CC+WYkY6zCDDA4o/8TT9GdaLrcFQVX99T
2jmev1leCSJFr1kLO5sdmHHI3pnsSFw5Wcxu99XVbkWf+9uPzhi50IubOuTmq+lD6pWwxe
LUY8gTIXDKD8B+7klOk0Tn7hWIXNMwAAAAMBAAEAAAGABj1wcTT/cjghXXVkeoJTQE+aHC
iE8vW1AtwDiAWnezFOxn+CupDlqzuTAOJXgNhRFJRyJjoPw2eQpv5W2H4vvJNNnrzRri55
jLTMURqKVcDtE2MJxE63gPgEEp6KCYNzhpUjk6pMq4aebwfJrxY1IiBl6+RP+zzQ7YoLrY
+Wd09IDaM0b2Rso5pRFLYxv3mSgI7axf4VToK8zMzfA2HlkM/sUkp65d2Bo8GYKrynzxDH
GWV3ZGTe//DOIkejofJkIpm8l3xHAhkOQuEu6HubbLNCrIbwtTeGQuVQW6EqdGIF55/wHu
vrwrjkaGT4rdsCD4Ue3aCDfc5PFgRkDSmGUwcQmSRiA3vDliYe33m61SXqngbE763EIv12
vQBzU+vRR/IjQQAfX7bEJNnZ//TbiiA0vQzXnjrxNtwJVgPAQduqJI7F9D6r6TvpLeB7TS
NjJErQivVj3Uwvjz2xHum9+Z9hOtA0hwuvqsRqvxarlIGpV12tDK/Qe2r/sUMHvEbBAAAA
wQCTkbTjZ+KyOGp54SlzFWXrxYZcswzr7UKms+WlmT8NFtqpwnLg6/KzyLURWVns3k2eTz
ZRC3lJhGVZyTyuVR9ek0wbI+f/JmnTsAPaas76KHcXszpxftwf6blZc1wZUEJLmRDTy5K2
tEDoll4lki2L2Wyv6KPWyG6Gai34YigW95vS7veaslLVIy/nNRkwBEBpwRtOV/YCy1SC3U
PRMaMUyHNzcdwbeT1uymU/UeGc+h8pTGriG/5EhbvCHhLqVV8AAADBAOMRDYvh6wT4f31K
JitrfTruY+Bdg9O9aqyQXzEkCtzJWoHTLQY/brMo0mvUxR9wSxKBwR+Nu9EXqkBpA3g1Hv
UgXv3NWrpAiAVVs6NHay0aDa1uIY/Ol8k6DbLL5R0k7tQc2nD44Ey9zme88/0sdHA2VIQ2
z/tlcPSQ8VKKhDmRY7BTE5YgeE6615nqvCq8e6CRSjFa0HKbfdFsmSnOHgil6Oc2MKT2pL
i/dKSxeAr2BP++xF9xYe0bVMdy4mEF8wAAAMEAuAsxC2bs0FnzoInge3W1ra55+qZF3LFp
efTVb26yciNYYeJLeks+nOjOQQd+kcISILdIhYnUGIumWQR/CiY2KL89AXLoTB6EIt5DGC
udKAjIzSeHtXTliZ1okgjGSmJeE6dhr0UwdLYyaFkdscbF17iTndLc74yCrKIoUnHfxBEc
/34AT8UdwkrJXj7e2nJyV9WGYz/Ga2xd3s56hVSTUckjxQaBnkZWFilw7ZAlHhPhSglTmd
pkgWv+Gd4evavBAAAAC2pkQG1vcmVmaW5lAQIDBAUGBw==
-----END OPENSSH PRIVATE KEY-----