docs(smart-home): fully document heating/burner; all access credentials in homelab.conf

- smart-home/HEIZUNG.md: complete documentation of the oil-burner detection (brennerstarts.py), thresholds, reconstruction, dashboard panels, troubleshooting
- smart-home/STATE.md: clear table with all service URLs (public + internal) and logins — Grafana/ioBroker/InfluxDB ALL run in CT 143 on pve-mu-3
- homelab.conf: corrected the CT_143_MU3 description (was "Raspi-Broker"); new variables GRAFANA_URL_*/IOBROKER_URL_*/INFLUX_URL_INTERN plus user/password (= PW_DEFAULT), so these questions don't come up again
- smart-home/scripts/: all relevant scripts added to the repo:
  - grafana_shot.js (Puppeteer login with admin/astral66)
  - add_month_panel.py (idempotent monthly tiles in the heating dashboard)
  - brenner_rekonstruktion.py + cleanup_reconstruct.py + check_april.py
  - patch_brenner.sh (threshold adjustment after the heating-curve change)
- MASTER_INDEX.md: link to HEIZUNG.md

Made-with: Cursor
Parent: 9889b2df76 · Commit: b61ac66367
10 changed files with 767 additions and 19 deletions
**MASTER_INDEX.md**

```diff
@@ -8,7 +8,8 @@
 |---|---|---|
 | **Arakava News** | arakava-news/STATE.md | WordPress + RSS manager + AI articles |
 | **Edelmetall Dashboard** | edelmetall/STATE.md | gold/silver price bot |
-| **Smart Home** | smart-home/STATE.md | ioBroker, Grafana, MQTT, sensors |
+| **Smart Home** | smart-home/STATE.md | ioBroker + InfluxDB + Grafana (all in CT 143 on pve-mu-3) |
+| **Heizung & Ölverbrauch** | smart-home/HEIZUNG.md | burner detection, dashboard, access, scripts |
 | **ESP32 Projekte** | esp32/PLAN.md | heating control, sensors |
 | **FünfVorAcht** | fuenfvoracht/STATE.md | Telegram AI poster (daily at 19:55) |
 | **Redakteur** | redax-wp/STATE.md | WordPress AI author + DeutschlandBlog |
```
**homelab.conf** (33 lines changed)
```diff
@@ -171,7 +171,7 @@ CT_502_MU2="Test-Shop-2|—|Test Shop 2"
 CT_139_MU3="Syncthing-Muldenstein|—|Syncthing"
 CT_141_MU3="syncthing|—|Syncthing"
 CT_142_MU3="WG-easy|—|WireGuard VPN"
-CT_143_MU3="Raspi-Broker|—|ioBroker MQTT Broker"
+CT_143_MU3="smart-home|100.66.78.56|ioBroker (MQTT) + InfluxDB 1.x + Grafana 12.3.1 + brennerstarts.py — ersetzt alten Raspi, siehe smart-home/HEIZUNG.md"
 CT_145_MU3="flugscanner-mu|100.75.182.15|Flugpreisscanner Node DE"
 CT_504_MU3="projektscan-template|—|Projektscan Template"
 CT_600_MU3="wp-mirror|100.92.205.101|WordPress Mirror (Redundanz CT 101)"
```
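Each `CT_*` value packs three `|`-separated fields: container name, Tailscale IP (or `—` for none), and a free-text description. A hypothetical helper, not part of the repo, just to illustrate the format:

```python
# Split a homelab.conf CT_* value into its name|ip|description fields.
def parse_ct(value: str) -> dict:
    name, ip, desc = value.split('|', 2)
    # '—' means the container has no Tailscale IP assigned
    return {'name': name, 'ip': None if ip == '—' else ip, 'desc': desc}

print(parse_ct('smart-home|100.66.78.56|ioBroker (MQTT) + InfluxDB 1.x + Grafana 12.3.1'))
```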
```diff
@@ -231,6 +231,37 @@ MAIL_IMAP_PORT="993"
 MAIL_USER="info@orbitalo.info"
 MAIL_PASS="Astral-66"
 
+# --- SMART HOME / HEIZUNG (CT 143 on pve-mu-3) ---
+# All three services run in the SAME container, CT 143.
+# Docs: smart-home/HEIZUNG.md, smart-home/STATE.md
+SMARTHOME_CT="143"
+SMARTHOME_HOST="pve-mu-3"
+SMARTHOME_TS="100.66.78.56"        # Tailscale IP of CT 143
+SMARTHOME_LAN="192.168.178.36"     # LAN IP (Muldenstein)
+
+# Grafana (dashboards + alerts)
+GRAFANA_URL_PUBLIC="https://grafana.orbitalo.net"   # Cloudflare Tunnel
+GRAFANA_URL_INTERN="http://100.66.78.56:3000"       # Tailscale
+GRAFANA_USER="admin"
+GRAFANA_PASS="astral66"            # = PW_DEFAULT
+GRAFANA_DASHBOARD_HEIZUNG="heizung"   # UID → /d/heizung/
+
+# ioBroker (MQTT + smart-home logic + JS scripts)
+IOBROKER_URL_INTERN="http://100.66.78.56:8081"
+IOBROKER_URL_LAN="http://192.168.178.36:8081"
+IOBROKER_USER="admin"
+IOBROKER_PASS="astral66"           # = PW_DEFAULT
+
+# InfluxDB 1.x (time series, database "iobroker", no auth internally)
+INFLUX_URL_INTERN="http://100.66.78.56:8086"
+INFLUX_DB="iobroker"
+
+# SSH access to the container:
+#   ssh pve-mu-3            → Proxmox host (via ~/.ssh/config with SOCKS5 ProxyCommand)
+#   pct exec 143 -- <cmd>   → run a command in the container
+# Grafana dashboard screenshot:
+#   node /tmp/grafana_shot.js <url> <output.png>   (Puppeteer, logs in with admin/astral66)
+
 # --- LOKI ---
 LOKI_URL="http://100.109.206.43:3100"
 LOKI_CT="110"
```
**smart-home/HEIZUNG.md** (new file, 165 lines)
# Heizung Muldenstein — burner detection & oil consumption

> **Where does this run?** Everything in **CT 143 on pve-mu-3** (Tailscale `100.66.78.56`, LAN `192.168.178.36`).
> **What does that mean?** There is NO Raspberry Pi anymore — the old `raspi-broker` was replaced by this LXC container.

---

## Access — DON'T ASK AGAIN, IT'S ALL HERE

All passwords are `PW_DEFAULT` from `homelab.conf`, i.e. **`astral66`**.

| Service | URL internal (Tailscale) | URL public / LAN | Login |
|---|---|---|---|
| Grafana | http://100.66.78.56:3000 | https://grafana.orbitalo.net (Cloudflare) | `admin` / `astral66` |
| ioBroker | http://100.66.78.56:8081 | http://192.168.178.36:8081 | `admin` / `astral66` |
| InfluxDB | http://100.66.78.56:8086 | — (internal only) | no auth, DB `iobroker` |

**SSH access** (works from the `monitoring-bot` CT 116, where Cursor runs):

```bash
ssh pve-mu-3                                   # Proxmox host (ProxyCommand via SOCKS5 Tailscale)
ssh pve-mu-3 'pct exec 143 -- <command>'       # run a command directly in the container
# example:
ssh pve-mu-3 'pct exec 143 -- systemctl status brennerstarts.service'
```

The SSH config for this lives in `~/.ssh/config` on CT 116 and uses the Tailscale SOCKS5 proxy on `127.0.0.1:1055`.
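The stanza itself is not in the repo; as a hedged illustration, an entry of this shape would match the description above (the Tailscale IP of `pve-mu-3` is taken from STATE.md; the login user and `nc` flags are assumptions):

```
# ~/.ssh/config on CT 116 — hypothetical sketch, the real entry may differ
Host pve-mu-3
    HostName 100.109.101.12                    # Tailscale IP of the Proxmox host
    User root                                  # assumed login user
    ProxyCommand nc -x 127.0.0.1:1055 %h %p    # route via the Tailscale SOCKS5 proxy
```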
**Grafana screenshots** (Puppeteer, logs in automatically):

```bash
node /tmp/grafana_shot.js "https://grafana.orbitalo.net/d/heizung/f09f94a5-heizung-and-puffer?kiosk" /tmp/out.png
```

The login is hard-coded in the script as `admin` / `astral66` — the script lives at `scripts/grafana_shot.js` in the repo.
---

## Hardware context

- **Oil boiler** with no digital status output of its own. "Burner on/off" is therefore detected via
  the flow temperature `mqtt.0.Oelkessel.Oelkessel_VL.Vorlauf`.
- Burner rate: **1.89 L/h** of heating oil (measured).
- **The heating curve was lowered in April 2026** → max flow temperature is now ~40 °C instead of the previous ~60 °C.
  That broke the old detection logic (thresholds too high).

---
## Detection logic: `/root/brennerstarts.py` (in CT 143)

Python daemon, runs as `brennerstarts.service` (systemd).
Polls the flow temperature from InfluxDB every minute and writes back `brennerstatus`, `brennerstarts`,
`brennerlaufzeit` and `brenner_heute`.

**Current parameters (after the heating-curve adjustment, as of 2026-04-20):**

```python
STEIGUNG_AN = 0.3        # °C over 3 min → "burner igniting"
STEIGUNG_1MIN = 0.1      # °C over 1 min → immediate rise (in addition to AN)
STEIGUNG_AUS = -0.15     # °C over 3 min → "burner off"
MIN_TEMP_BRENNER = 30    # below 30 °C no burner detection at all
COOLDOWN_MINUTEN = 10    # minimum pause between two START events
BRENNER_RATE_LH = 1.89   # litres per hour
```

**Old values** (before the heating-curve adjustment, in case a rollback is needed): `55 / 1.5 / 0.3 / -0.3`.
A backup of the original file lives in CT 143 at `/root/brennerstarts.py.bak-20260420-2142`.

**InfluxDB query timeouts** were raised from 10 s to 30 s (the log showed frequent `timed out` errors
around 04:00, while the backup was running).
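The thresholds above form a small two-state machine: ignition on a steep 3-minute plus 1-minute rise above 30 °C, shutdown on a falling 3-minute slope. A self-contained sketch on synthetic one-minute samples — not the daemon itself, and the cooldown is omitted for brevity:

```python
# Minimal sketch of the detection rules from brennerstarts.py (cooldown omitted).
STEIGUNG_AN, STEIGUNG_1MIN, STEIGUNG_AUS, MIN_TEMP = 0.3, 0.1, -0.15, 30

def detect(temps):
    """temps: one flow-temperature sample (°C) per minute. Returns [(minute, 'AN'|'AUS')]."""
    events, on = [], False
    for i in range(3, len(temps)):
        rise_3m = temps[i] - temps[i - 3]   # rise over the trailing 3 minutes
        rise_1m = temps[i] - temps[i - 1]   # rise over the last minute
        if not on and temps[i] > MIN_TEMP and rise_3m >= STEIGUNG_AN and rise_1m >= STEIGUNG_1MIN:
            on = True
            events.append((i, 'AN'))
        elif on and rise_3m <= STEIGUNG_AUS:
            on = False
            events.append((i, 'AUS'))
    return events

# burner ignites (fast rise), then shuts off (falling flow temperature)
temps = [31, 31, 31, 31, 32, 33, 34, 34, 33.5, 33, 32.5]
print(detect(temps))  # → [(4, 'AN'), (9, 'AUS')]
```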
### Service commands

```bash
ssh pve-mu-3 'pct exec 143 -- systemctl status brennerstarts.service'
ssh pve-mu-3 'pct exec 143 -- systemctl restart brennerstarts.service'
ssh pve-mu-3 'pct exec 143 -- journalctl -u brennerstarts.service -n 200 --no-pager'
```

---
## InfluxDB measurements (DB `iobroker`)

| Measurement | Meaning | Source |
|---|---|---|
| `mqtt.0.Oelkessel.Oelkessel_VL.Vorlauf` | flow temperature °C | MQTT → ioBroker |
| `mqtt.0.Holzvergaser_Sensoren_6.Aussenfühler.temperature` | outside temperature °C | MQTT |
| `mqtt.0.Wohnstube_Temperatur_1.Wohnstube.Wohnstube_Temperatur` | room temperature °C | MQTT |
| `brennerstatus` | 0/1 live | `brennerstarts.py` |
| `brennerstarts` | one event per burner start (value=1) | `brennerstarts.py` |
| `brennerlaufzeit` | cumulative seconds per interval | `brennerstarts.py` |
| `brenner_heute` | seconds since 00:00 | `brennerstarts.py` |

Heating-oil litres = `sum(brennerlaufzeit) / 3600 * 1.89`.
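The litres formula can be evaluated client-side or pushed into the query. A hedged sketch (the helper name is made up; the InfluxQL string mirrors the per-month queries used elsewhere in this repo):

```python
# Convert summed brennerlaufzeit seconds into litres of heating oil.
BRENNER_RATE_LH = 1.89  # litres per hour (from brennerstarts.py)

def laufzeit_to_litres(laufzeit_seconds: float) -> float:
    # litres = (seconds / 3600) hours * 1.89 L/h
    return laufzeit_seconds / 3600 * BRENNER_RATE_LH

# The same computation server-side (DB "iobroker"), here for April 2026:
QUERY = ('SELECT sum("value") / 3600 * 1.89 FROM "brennerlaufzeit" '
         "WHERE time >= '2026-04-01T00:00:00Z' AND time < '2026-05-01T00:00:00Z'")

print(f'{laufzeit_to_litres(18.2 * 3600):.1f} L')  # 18.2 h runtime → ~34.4 L, the April figure
```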
---

## Grafana dashboard `Heizung & Puffer` (UID: `heizung`)

URL: https://grafana.orbitalo.net/d/heizung/

Current layout (as of 2026-04-20):

| Row | Panels |
|---|---|
| Top | Puffer top/middle/bottom · fill level · outside · oil boiler flow · **burner ON/OFF** · return flow |
| Row "🛢️ Ölkessel Statistik (1,89 L/h)" | today · last 7 days · last 30 days · total |
| Middle | temperature history (time series) |
| Bottom | 📅 daily consumption (last 7 days) as a bar chart |
| Very bottom | **oil consumption per heating month (litres)** — coloured tile row, one tile per calendar month |

### Maintaining / extending the monthly tiles

When a new month comes along, **simply run the script again** —
it removes the old panel and regenerates the tiles for every month from Jan 2026
up to and including the current month:

```bash
python3 scripts/add_month_panel.py
```

Idempotent. The script lives at `scripts/add_month_panel.py` in this repo.
---

## Historical data reconstruction (one-off, 2026-04-20)

Because the detection wrote no burner events between **2026-04-06 and 2026-04-20** due to the
too-high thresholds, that period was reconstructed after the fact:

1. **Delete** old/duplicate events in the window
   (`2026-04-06T02:00:00Z` to `2026-04-20T19:45:00Z`) via `DELETE` on `brennerstarts`,
   `brennerstatus`, `brennerlaufzeit`.
2. **Recompute** with the current thresholds from the still-available raw data
   (`Oelkessel_VL.Vorlauf`), dry run first, then `--commit`.
3. **Result** (April total): 52 starts, 18.2 h runtime, 34.4 L of oil.

Scripts:

- `scripts/brenner_rekonstruktion.py` — reconstruction (dry run by default, `--commit` writes)
- `scripts/cleanup_reconstruct.py` — deletes events in the reconstruction window
- `scripts/check_april.py` — sanity check of the monthly numbers

**Do not run this again** unless the logic changes once more and a period has to be recomputed.
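Step 1 above is plain InfluxQL. A hedged sketch of the statements involved (the actual `cleanup_reconstruct.py` may differ; InfluxDB 1.x accepts such statements via `POST /query?db=iobroker`):

```python
# Build the DELETE statements for the reconstruction window (step 1 above).
WINDOW_START = '2026-04-06T02:00:00Z'
WINDOW_END = '2026-04-20T19:45:00Z'

def delete_stmt(measurement: str) -> str:
    # InfluxQL DELETE with a time predicate removes the points but keeps the series.
    return (f'DELETE FROM "{measurement}" '
            f"WHERE time >= '{WINDOW_START}' AND time <= '{WINDOW_END}'")

stmts = [delete_stmt(m) for m in ('brennerstarts', 'brennerstatus', 'brennerlaufzeit')]
for s in stmts:
    print(s)
```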
---

## FAQ / troubleshooting

**"The burner doesn't seem to be detected."**
→ Look at the flow temperature `Oelkessel_VL.Vorlauf` (Grafana panel "Temperaturverlauf"). If the
amplitude stays below 30 °C, `MIN_TEMP_BRENNER` is too high → adjust it in `brennerstarts.py`.

**"The numbers in 'Letzte 30 Tage' don't match the calendar month."**
→ Intentional: that panel shows a rolling 30-day window (Grafana `now()-30d`).
For calendar months, use the tile row at the bottom.

**"Where do I see whether the burner is running right now?"**
→ Top right of the dashboard: the `Brenner` panel. Red/ON when `brennerstatus == 1`, green/OFF otherwise.
Data source: the `brennerstatus` measurement, updated every minute by `brennerstarts.py`.

**"Grafana returns no screenshots via the render API."**
→ The native `grafana-image-renderer` is broken on this installation (plugin-signature issue).
**Always** use the Puppeteer wrapper `scripts/grafana_shot.js`.
**smart-home/STATE.md**

```diff
@@ -1,24 +1,57 @@
 # Smart Home Muldenstein — Live State
-> Auto-generated: 2026-04-17 22:00
+
+> **Everything runs in CT 143 on pve-mu-3.** There is no Raspberry Pi anymore.
+> For heating & burner detection see **[HEIZUNG.md](HEIZUNG.md)**.
+
+## Container CT 143 — smart-home
+
+| Attribute | Value |
+|---|---|
+| Host | `pve-mu-3` (Tailscale `100.109.101.12`) |
+| Tailscale IP | `100.66.78.56` |
+| LAN IP | `192.168.178.36` |
+| OS | Debian LXC |
+
+### Services in the container (one container, three services)
+
+| Service | Port | URL public | URL internal | Login |
+|---|---|---|---|---|
+| **Grafana** | 3000 | https://grafana.orbitalo.net (Cloudflare Tunnel) | http://100.66.78.56:3000 | `admin` / `astral66` |
+| **ioBroker** | 8081 | — | http://100.66.78.56:8081 · http://192.168.178.36:8081 | `admin` / `astral66` |
+| **InfluxDB 1.x** | 8086 | — | http://100.66.78.56:8086 | no auth, DB `iobroker` |
+
+**All passwords = `PW_DEFAULT` = `astral66`** (see `homelab.conf`).
+
+## Access
+
+```bash
+# from the monitoring-bot (CT 116 on pve-mu-2, where Cursor runs):
+ssh pve-mu-3                            # host
+ssh pve-mu-3 'pct exec 143 -- <cmd>'    # in the container
+ssh pve-mu-3 'pct exec 143 -- bash'     # interactive shell
+
+# Grafana dashboard as PNG (Puppeteer login with admin/astral66):
+node /tmp/grafana_shot.js "https://grafana.orbitalo.net/d/heizung/f09f94a5-heizung-and-puffer?kiosk" /tmp/out.png
+```
+
+## Important Python services in CT 143
+
+| Service | File | Purpose |
+|---|---|---|
+| `brennerstarts.service` | `/root/brennerstarts.py` | oil-burner detection via flow temperature → InfluxDB |
+
+Backups of the original scripts before any changes: `/root/*.bak-YYYYMMDD-HHMM` directly in the container.
+
 ## Backup-Status
-- Last backup: 696MB, 2026-04-17 04:43
+- Last backup: 696 MB, 2026-04-17 04:43
 - Total backups: 34
-- Target: /home/backup-muldenstein/backups/ (CT 144)
+- Target: `/home/backup-muldenstein/backups/` on CT 144 (muldenstein-backup)
+- Cron job: daily at 04:00 → `/root/backup-to-hetzner.sh` (on `pve-mu-3`)
+- Retention: 30 d daily, 90 d weekly, monthly unlimited
 
-## Services (CT 143)
-| Dienst | URL |
-|---|---|
-| Grafana | https://grafana.orbitalo.net |
-| ioBroker | http://192.168.178.36:8081 |
-| InfluxDB | http://192.168.178.36:8086 |
-
-## Grafana Alerts → Telegram 674951792
+## Grafana Alerts → Telegram (Chat `674951792`)
 - Promtail DOWN (> 5 min without data)
-- CPU > 70%
-- Memory > 80%
-- Disk > 90%
-
-## Backup-Zeitplan
-- daily 04:00 → /root/backup-to-hetzner.sh (on pve3)
-- Retention: 30d daily, 90d weekly, unlimited monthly
+- CPU > 70 %
+- Memory > 80 %
+- Disk > 90 %
```
**smart-home/scripts/add_month_panel.py** (new file, 114 lines)
```python
#!/usr/bin/env python3
"""Adds the 'Ölverbrauch je Heizmonat' panel at the bottom of the dashboard (idempotent)."""
import json, subprocess, string
from datetime import date

BASE = 'http://100.66.78.56:3000'
PANEL_TITLE = 'Ölverbrauch je Heizmonat (Liter)'
PANEL_ID = 900

def curl(path, method='GET', body=None):
    cmd = ['curl', '-s', '--socks5-hostname', '127.0.0.1:1055', '-u', 'admin:astral66',
           '-X', method, f'{BASE}{path}']
    if body is not None:
        cmd += ['-H', 'Content-Type: application/json', '-d', json.dumps(body)]
    r = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
    return json.loads(r.stdout) if r.stdout.startswith(('[', '{')) else r.stdout

# Which months? Every calendar month from the first month with data (Jan 2026)
# up to and including the current month — so there is no manual upkeep.
START_Y, START_M = 2026, 1
today = date.today()
months = []
y, m = START_Y, START_M
while (y, m) <= (today.year, today.month):
    months.append((y, m))
    y, m = (y + 1, 1) if m == 12 else (y, m + 1)

print('Months:', months)

def alphabet(i):
    # refIds: A, B, ..., Z, AA, AB, ...
    if i < 26:
        return string.ascii_uppercase[i]
    return string.ascii_uppercase[i // 26 - 1] + string.ascii_uppercase[i % 26]

MON_DE = ['Jan', 'Feb', 'Mär', 'Apr', 'Mai', 'Jun', 'Jul', 'Aug', 'Sep', 'Okt', 'Nov', 'Dez']

targets = []
overrides = []
for i, (y, m) in enumerate(months):
    ref = alphabet(i)
    ny, nm = (y + 1, 1) if m == 12 else (y, m + 1)
    q = (f"SELECT sum(\"value\") / 3600 * 1.89 FROM \"brennerlaufzeit\" "
         f"WHERE time >= '{y}-{m:02d}-01T00:00:00Z' AND time < '{ny}-{nm:02d}-01T00:00:00Z'")
    targets.append({'query': q, 'rawQuery': True, 'refId': ref})
    overrides.append({
        'matcher': {'id': 'byFrameRefID', 'options': ref},
        'properties': [
            {'id': 'displayName', 'value': f'{MON_DE[m - 1]} {y}'},
        ],
    })

# Fetch the current dashboard
d = curl('/api/dashboards/uid/heizung')
dash = d['dashboard']

# Lowest occupied edge (max y + h), so the new panel lands below everything else
max_y = 0
for p in dash['panels']:
    gp = p.get('gridPos', {})
    max_y = max(max_y, gp.get('y', 0) + gp.get('h', 0))

# Remove any existing panel with the same title or id (idempotent)
dash['panels'] = [p for p in dash['panels']
                  if p.get('title') not in (PANEL_TITLE, 'Ölverbrauch je Heizmonat')
                  and p.get('id') != PANEL_ID]

new_panel = {
    'id': PANEL_ID,
    'type': 'stat',
    'title': PANEL_TITLE,
    'datasource': 'InfluxDB',
    'gridPos': {'x': 0, 'y': max_y, 'w': 24, 'h': 5},
    'fieldConfig': {
        'defaults': {
            'decimals': 1,
            'unit': 'none',
            'color': {'mode': 'thresholds'},
            'thresholds': {'mode': 'absolute', 'steps': [
                {'value': None, 'color': 'green'},
                {'value': 100, 'color': 'orange'},
                {'value': 250, 'color': 'red'},
            ]},
        },
        'overrides': overrides,
    },
    'options': {
        'colorMode': 'background_solid',
        'graphMode': 'none',
        'justifyMode': 'center',
        'reduceOptions': {
            'calcs': ['lastNotNull'],
            'fields': '',
            'values': False,
        },
        'textMode': 'value_and_name',
        'orientation': 'vertical',
        'text': {
            'titleSize': 14,
            'valueSize': 32,
        },
        'wideLayout': True,
        'percentChangeColorMode': 'standard',
    },
    'targets': targets,
}
dash['panels'].append(new_panel)

resp = curl('/api/dashboards/db', 'POST', {
    'dashboard': dash,
    'overwrite': True,
    'message': f'add monthly oil consumption tiles ({len(months)} months)',
})
print(resp)
```
**smart-home/scripts/brenner_rekonstruktion.py** (new file, 215 lines)
```python
#!/usr/bin/env python3
"""
Reconstructs brennerstarts/brennerstatus/brennerlaufzeit from raw flow-temperature (VL)
data for the period in which the live detection no longer caught anything because the
threshold was too high (55 °C).

Uses the CURRENT thresholds (MIN_TEMP=30, slope 0.3/3 min, 0.1/1 min, -0.15/3 min OFF,
cooldown 10 min). Writes with historical timestamps.

Dry run by default; --commit actually writes.
"""
import argparse
import json
from datetime import datetime, timedelta, timezone
from urllib.parse import quote
from urllib.request import Request, urlopen

INFLUX = 'http://localhost:8086'
DB = 'iobroker'
VL_MEASUREMENT = 'mqtt.0.Oelkessel.Oelkessel_VL.Vorlauf'

MIN_TEMP_BRENNER = 30
STEIGUNG_AN = 0.3
STEIGUNG_1MIN = 0.1
STEIGUNG_AUS = -0.15
COOLDOWN_SEC = 10 * 60
BRENNER_RATE_LH = 1.89

# The live service started on 2026-04-20 21:45 CEST; before that, detection was dead
# from midday 2026-04-06 (the last OFF event was 2026-04-06 03:24 UTC = 05:24 CEST).
START_UTC = datetime(2026, 4, 6, 4, 0, tzinfo=timezone.utc)    # 06:00 CEST
END_UTC = datetime(2026, 4, 20, 19, 44, tzinfo=timezone.utc)   # 21:44 CEST


def influx_query(q):
    url = f'{INFLUX}/query?db={DB}&epoch=ns&q={quote(q)}'
    with urlopen(url, timeout=60) as r:
        return json.loads(r.read().decode())


def fetch_vl(start_utc, end_utc):
    q = (
        f'SELECT value FROM "{VL_MEASUREMENT}" '
        f"WHERE time >= '{start_utc.strftime('%Y-%m-%dT%H:%M:%SZ')}' "
        f"AND time <= '{end_utc.strftime('%Y-%m-%dT%H:%M:%SZ')}' "
        f'ORDER BY time ASC'
    )
    data = influx_query(q)
    series = data['results'][0].get('series', [])
    if not series:
        return []
    return [(int(t), float(v)) for t, v in series[0]['values'] if v is not None]


def reconstruct(samples):
    """Returns an event list [(ts_ns, 'an'|'aus', laufzeit_s_at_aus), ...]."""
    events = []
    brenner_laeuft = False
    start_ts_ns = None
    last_start_ns = None

    if not samples:
        return events

    three_min = 3 * 60 * 1_000_000_000
    one_min = 1 * 60 * 1_000_000_000
    cooldown = COOLDOWN_SEC * 1_000_000_000

    # Per-sample iteration (simple and robust): at each sample i, evaluate the
    # slope over the trailing 3-minute and 1-minute windows.
    for i in range(len(samples)):
        ts_now, temp_now = samples[i]
        target_3m = ts_now - three_min
        target_1m = ts_now - one_min
        # find the last sample at or before ts_now - 3 min / - 1 min
        j3 = i
        while j3 > 0 and samples[j3][0] > target_3m:
            j3 -= 1
        j1 = i
        while j1 > 0 and samples[j1][0] > target_1m:
            j1 -= 1
        temp_vor_3m = samples[j3][1]
        temp_vor_1m = samples[j1][1]
        # require at least 2 minutes of history, otherwise skip this sample
        if ts_now - samples[j3][0] < 2 * 60 * 1_000_000_000:
            continue

        steigung_3m = temp_now - temp_vor_3m
        steigung_1m = temp_now - temp_vor_1m

        if not brenner_laeuft:
            if (
                temp_now > MIN_TEMP_BRENNER
                and steigung_3m >= STEIGUNG_AN
                and steigung_1m >= STEIGUNG_1MIN
            ):
                if last_start_ns is None or (ts_now - last_start_ns) > cooldown:
                    brenner_laeuft = True
                    start_ts_ns = ts_now
                    last_start_ns = ts_now
                    events.append((ts_now, 'an', None))
        else:
            if steigung_3m <= STEIGUNG_AUS:
                laufzeit_s = (ts_now - start_ts_ns) / 1_000_000_000
                events.append((ts_now, 'aus', laufzeit_s))
                brenner_laeuft = False
                start_ts_ns = None
    return events


def write_line(line, dry=True):
    if dry:
        return True
    url = f'{INFLUX}/write?db={DB}&precision=ns'
    req = Request(url, data=line.encode(), method='POST')
    with urlopen(req, timeout=30) as r:
        return r.status == 204


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument('--commit', action='store_true')
    ap.add_argument('--start', default=START_UTC.isoformat())
    ap.add_argument('--end', default=END_UTC.isoformat())
    args = ap.parse_args()

    start = datetime.fromisoformat(args.start)
    end = datetime.fromisoformat(args.end)

    print(f'Fetch VL {start} -> {end}')
    samples = fetch_vl(start, end)
    if not samples:
        raise SystemExit('No VL samples in the window — nothing to reconstruct.')
    print(f'  {len(samples)} samples, first {datetime.fromtimestamp(samples[0][0]/1e9, timezone.utc)}, last {datetime.fromtimestamp(samples[-1][0]/1e9, timezone.utc)}')
    print(f'  min {min(v for _, v in samples):.1f} C, max {max(v for _, v in samples):.1f} C')

    events = reconstruct(samples)
    print(f'\nDetected events: {len(events)}')
    ans = [e for e in events if e[1] == 'an']
    auss = [e for e in events if e[1] == 'aus']
    print(f'  {len(ans)} starts, {len(auss)} stops')
    total_s = sum(e[2] for e in auss)
    print(f'  total runtime: {total_s/3600:.2f} h -> {total_s/3600*BRENNER_RATE_LH:.2f} L')

    # first / last events
    for label, lst in (('first 5 starts', ans[:5]), ('last 5 starts', ans[-5:]),
                       ('first 5 stops', auss[:5]), ('last 5 stops', auss[-5:])):
        print(f'\n{label}:')
        for e in lst:
            ts = datetime.fromtimestamp(e[0]/1e9, timezone.utc).astimezone()
            if e[1] == 'aus':
                print(f'  {ts.strftime("%Y-%m-%d %H:%M:%S %z")} AUS {e[2]/60:.1f} min')
            else:
                print(f'  {ts.strftime("%Y-%m-%d %H:%M:%S %z")} AN')

    # daily totals: starts and runtime per local day (Europe/Berlin ≈ UTC+2 in April)
    print('\nDaily totals:')
    per_day = {}
    TZ = timezone(timedelta(hours=2))
    for ts_ns, typ, laufzeit in events:
        d = datetime.fromtimestamp(ts_ns/1e9, TZ).date()
        if d not in per_day:
            per_day[d] = {'starts': 0, 'laufzeit_s': 0.0}
        if typ == 'an':
            per_day[d]['starts'] += 1
        elif typ == 'aus':
            per_day[d]['laufzeit_s'] += laufzeit
    for d in sorted(per_day):
        s = per_day[d]
        h = s['laufzeit_s'] / 3600
        print(f'  {d}  starts={s["starts"]:3d}  runtime={h:5.2f}h  consumption={h*BRENNER_RATE_LH:5.2f}L')

    # write
    if args.commit:
        print('\n--- commit: writing to InfluxDB ---')
        n = 0
        for ts_ns, typ, laufzeit in events:
            if typ == 'an':
                write_line(f'brennerstarts value=1 {ts_ns}', dry=False)
                write_line(f'brennerstatus value=1 {ts_ns}', dry=False)
                n += 2
            elif typ == 'aus':
                write_line(f'brennerlaufzeit value={laufzeit} {ts_ns}', dry=False)
                write_line(f'brennerstatus value=0 {ts_ns}', dry=False)
                n += 2
        print(f'  {n} lines written')
    else:
        print('\n(dry run, nothing written; run with --commit)')


if __name__ == '__main__':
    main()
```
**smart-home/scripts/check_april.py** (new file, 62 lines)
@ -0,0 +1,62 @@
|
||||||
|
#!/usr/bin/env python3
import json
from collections import Counter
from datetime import datetime, timezone, timedelta
from urllib.parse import quote
from urllib.request import urlopen

INFLUX = 'http://localhost:8086'
DB = 'iobroker'
TZ = timezone(timedelta(hours=2))  # CEST (UTC+2)


def q(sql):
    url = f'{INFLUX}/query?db={DB}&epoch=ns&q={quote(sql)}'
    with urlopen(url, timeout=30) as r:
        return json.loads(r.read().decode())


def rows(sql):
    d = q(sql)
    s = d['results'][0].get('series', [])
    if not s:
        return []
    return s[0]['values']


print('=== Starts in April (count) ===')
r = rows("SELECT count(value) FROM brennerstarts WHERE time >= '2026-04-01T00:00:00Z' AND time < '2026-05-01T00:00:00Z'")
print(r)

print('\n=== Runtime total April (hours) ===')
r = rows("SELECT sum(value) FROM brennerlaufzeit WHERE time >= '2026-04-01T00:00:00Z' AND time < '2026-05-01T00:00:00Z'")
if r:
    print(f'  total = {r[0][1]:.1f} s = {r[0][1]/3600:.2f} h')

print('\n=== Starts and runtime per day (April) ===')
starts = rows("SELECT count(value) FROM brennerstarts WHERE time >= '2026-04-01T00:00:00Z' AND time < '2026-05-01T00:00:00Z' GROUP BY time(1d,-2h) fill(0)")
lauf = rows("SELECT sum(value) FROM brennerlaufzeit WHERE time >= '2026-04-01T00:00:00Z' AND time < '2026-05-01T00:00:00Z' GROUP BY time(1d,-2h) fill(0)")
d_starts = {s[0]: s[1] or 0 for s in starts}
d_lauf = {s[0]: s[1] or 0 for s in lauf}
for ts in sorted(set(list(d_starts) + list(d_lauf))):
    day = datetime.fromtimestamp(ts/1e9, TZ).date()
    st = d_starts.get(ts, 0)
    lf = d_lauf.get(ts, 0) / 3600
    if st or lf:
        print(f'  {day}  starts={st:3d}  runtime={lf:5.2f}h  litres={lf*1.89:5.2f}')

print('\n=== suspiciously long single runtimes > 1h ===')
r = rows("SELECT value FROM brennerlaufzeit WHERE value > 3600 AND time > '2026-03-01T00:00:00Z' ORDER BY time DESC LIMIT 20")
for ts, v in r:
    t = datetime.fromtimestamp(ts/1e9, TZ)
    print(f'  {t.strftime("%Y-%m-%d %H:%M:%S")}  {v:.0f}s = {v/60:.1f}min = {v/3600:.2f}h')

print('\n=== daily max single runtime (since when >30min?) ===')
r = rows("SELECT max(value) FROM brennerlaufzeit WHERE time >= '2026-04-01T00:00:00Z' AND time < '2026-05-01T00:00:00Z' GROUP BY time(1d,-2h) fill(0)")
for ts, v in r:
    if v and v > 30*60:
        t = datetime.fromtimestamp(ts/1e9, TZ).date()
        print(f'  {t}  max single runtime = {v/60:.1f} min')

print('\n=== brennerstarts value distribution April ===')
r = rows("SELECT value FROM brennerstarts WHERE time >= '2026-04-01T00:00:00Z' AND time < '2026-05-01T00:00:00Z'")
vals = [x[1] for x in r]
print(f'  rows = {len(vals)}, sum = {sum(vals)}, Counter = {Counter(vals)}')
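`check_april.py` converts runtime hours to litres with the fixed factor 1.89, presumably the oil throughput per burner hour of this installation's nozzle. A tiny helper that makes the conversion explicit (the factor is taken from the script; the function name is mine):

```python
# Litres conversion as used in check_april.py. LITRES_PER_HOUR = 1.89 is
# the factor from the script (presumably the nozzle throughput of this
# burner); the helper itself is illustrative, not part of the script.
LITRES_PER_HOUR = 1.89

def runtime_to_litres(seconds):
    """Oil consumption for a given burner runtime in seconds."""
    return seconds / 3600 * LITRES_PER_HOUR
```

So one full hour of runtime is 1.89 l, and half an hour is about 0.95 l.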
smart-home/scripts/cleanup_reconstruct.py (new file, 43 lines)
@@ -0,0 +1,43 @@
#!/usr/bin/env python3
"""1) Delete the overlap window, 2) reconstruct cleanly with the new thresholds."""
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

INFLUX = 'http://localhost:8086'
DB = 'iobroker'


def qget(sql):
    with urlopen(f'{INFLUX}/query?db={DB}&epoch=ns&q={quote(sql)}', timeout=30) as r:
        return json.loads(r.read().decode())


def qpost(sql):
    url = f'{INFLUX}/query?db={DB}'
    req = Request(url, data=f'q={quote(sql)}'.encode(), method='POST',
                  headers={'Content-Type': 'application/x-www-form-urlencoded'})
    with urlopen(req, timeout=30) as r:
        return json.loads(r.read().decode())


# Window: from the first "dead" timestamp until one minute before the live service start.
# Live start was 2026-04-20 21:45 CEST = 19:45 UTC.
# Reconstruction should cover 06 Apr around midday through 20 Apr 21:44 CEST.
START_UTC = '2026-04-06T02:00:00Z'  # 04:00 CEST on 06 Apr
END_UTC = '2026-04-20T19:45:00Z'    # 21:45 CEST on 20 Apr

print('=== BEFORE DELETE ===')
for m in ('brennerstarts', 'brennerstatus', 'brennerlaufzeit'):
    r = qget(f"SELECT count(value) FROM {m} WHERE time >= '{START_UTC}' AND time < '{END_UTC}'")
    s = r['results'][0].get('series', [])
    c = s[0]['values'][0][1] if s else 0
    print(f'  {m}: {c} rows in the reconstruction window')

print('\n=== DELETE ===')
for m in ('brennerstarts', 'brennerstatus', 'brennerlaufzeit'):
    r = qpost(f"DELETE FROM {m} WHERE time >= '{START_UTC}' AND time < '{END_UTC}'")
    print(f'  {m}: {r}')

print('\n=== AFTER DELETE ===')
for m in ('brennerstarts', 'brennerstatus', 'brennerlaufzeit'):
    r = qget(f"SELECT count(value) FROM {m} WHERE time >= '{START_UTC}' AND time < '{END_UTC}'")
    s = r['results'][0].get('series', [])
    c = s[0]['values'][0][1] if s else 0
    print(f'  {m}: {c} rows remaining')
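The DELETE step above is irreversible, and unlike the other scripts this one has no dry-run mode. A sketch of how the same `--commit` guard could wrap it, under the assumption that a `post` callable like `qpost()` is supplied (the guard itself is my addition, not part of `cleanup_reconstruct.py`):

```python
# Sketch of a dry-run guard around the DELETE step, mirroring the --commit
# pattern of the other scripts. This wrapper is illustrative; the committed
# cleanup_reconstruct.py deletes unconditionally.
def delete_statements(start_utc, end_utc,
                      measurements=('brennerstarts', 'brennerstatus', 'brennerlaufzeit')):
    """Build the InfluxQL DELETE statements for the reconstruction window."""
    return [f"DELETE FROM {m} WHERE time >= '{start_utc}' AND time < '{end_utc}'"
            for m in measurements]

def run_cleanup(start_utc, end_utc, commit=False, post=None):
    for sql in delete_statements(start_utc, end_utc):
        if commit and post is not None:
            post(sql)          # e.g. qpost() from above
        else:
            print(f'DRY {sql}')
```

Printed first, executed only with `commit=True`, which makes the window boundaries reviewable before any data is lost.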
smart-home/scripts/grafana_shot.js (new file, 59 lines)
@@ -0,0 +1,59 @@
const puppeteer = require('/opt/webpage-screenshot-mcp/node_modules/puppeteer');

(async () => {
  const url = process.argv[2];
  const out = process.argv[3] || '/tmp/shot.png';
  const user = process.env.GF_USER || 'admin';
  const pass = process.env.GF_PASS || 'astral66';
  const base = new URL(url);
  const loginUrl = `${base.protocol}//${base.host}/login`;

  const browser = await puppeteer.launch({
    headless: true,
    args: ['--no-sandbox', '--disable-dev-shm-usage', '--disable-gpu'],
    defaultViewport: { width: 1920, height: 1400 },
  });
  const page = await browser.newPage();

  // Best-effort login probe before any page is loaded; the status is
  // captured but unused, and failures are swallowed by the .catch().
  const res = await page.evaluate(
    async (loginUrl, user, pass) => {
      const r = await fetch(loginUrl, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ user, password: pass }),
      });
      return r.status;
    },
    loginUrl, user, pass
  ).catch(() => null);

  // Actual login: load the login page first so the fetch runs same-origin
  // and the Grafana session cookie lands in the browser context.
  await page.goto(loginUrl, { waitUntil: 'domcontentloaded' });
  await page.evaluate(
    async (loginUrl, user, pass) => {
      await fetch(loginUrl, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        credentials: 'include',
        body: JSON.stringify({ user, password: pass }),
      });
    },
    loginUrl, user, pass
  );

  // Load the dashboard, give the panels time to render, then screenshot.
  await page.goto(url, { waitUntil: 'networkidle2', timeout: 60000 });
  await new Promise((r) => setTimeout(r, 6000));
  await page.screenshot({ path: out, fullPage: false });
  console.log('saved', out);
  await browser.close();
})().catch((e) => {
  console.error('ERR', e.message);
  process.exit(1);
});
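`grafana_shot.js` authenticates via Grafana's JSON login endpoint: a POST to `/login` with a `{"user", "password"}` body, after which the session cookie lives in the browser context. For quick checks without a browser, the same request can be built from Python; a hedged sketch (host and credentials are placeholders, and the session cookie from `Set-Cookie` would need to be replayed on subsequent requests):

```python
# Builds the same Grafana JSON login request grafana_shot.js sends, without
# a browser. Host and credentials below are placeholders; on success Grafana
# returns the session cookie in the Set-Cookie response header.
import json
from urllib.request import Request, urlopen

def build_login_request(base_url, user, password):
    body = json.dumps({'user': user, 'password': password}).encode()
    return Request(f'{base_url}/login', data=body, method='POST',
                   headers={'Content-Type': 'application/json'})

# Usage sketch (placeholder host):
# req = build_login_request('http://grafana.example:3000', 'admin', '...')
# with urlopen(req, timeout=30) as r:
#     cookie = r.headers.get('Set-Cookie')  # grafana_session=...
```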
smart-home/scripts/patch_brenner.sh (new file, 25 lines)
@@ -0,0 +1,25 @@
#!/bin/bash
set -e
python3 - <<'PYEOF'
import re, pathlib
p = pathlib.Path("/root/brennerstarts.py")
s = p.read_text()
s = re.sub(r"STEIGUNG_AN\s*=\s*[0-9.]+", "STEIGUNG_AN = 0.3", s, count=1)
s = re.sub(r"STEIGUNG_1MIN\s*=\s*[0-9.]+", "STEIGUNG_1MIN = 0.1", s, count=1)
s = re.sub(r"MIN_TEMP_BRENNER\s*=\s*[0-9.]+", "MIN_TEMP_BRENNER = 30", s, count=1)
s = re.sub(r"STEIGUNG_AUS\s*=\s*-?[0-9.]+", "STEIGUNG_AUS = -0.15", s, count=1)
p.write_text(s)
print("patched")
PYEOF
echo "--- new thresholds ---"
grep -n STEIGUNG_ /root/brennerstarts.py
grep -n MIN_TEMP_ /root/brennerstarts.py
grep -n BRENNER_RATE /root/brennerstarts.py
echo "--- timeouts ---"
grep -n 'timeout=' /root/brennerstarts.py
echo "--- service restart ---"
systemctl restart brennerstarts
sleep 3
systemctl is-active brennerstarts
echo "--- log after restart ---"
tail -15 /var/log/brennerstarts.log
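The `re.sub` patching in `patch_brenner.sh` is idempotent: each pattern also matches its own replacement (e.g. `STEIGUNG_AN\s*=\s*[0-9.]+` matches `STEIGUNG_AN = 0.3`), so running the script a second time rewrites the same values and changes nothing. A small demonstration of that property, using two of the threshold names and target values from the script on an in-memory string:

```python
# Demonstrates why the re.sub patching in patch_brenner.sh is idempotent:
# each pattern also matches its own replacement, so a second run is a no-op.
import re

def patch(src):
    src = re.sub(r"STEIGUNG_AN\s*=\s*[0-9.]+", "STEIGUNG_AN = 0.3", src, count=1)
    src = re.sub(r"STEIGUNG_AUS\s*=\s*-?[0-9.]+", "STEIGUNG_AUS = -0.15", src, count=1)
    return src

before = "STEIGUNG_AN = 0.5\nSTEIGUNG_AUS = -0.3\n"
once = patch(before)
twice = patch(once)  # identical to once
```

Note the `-?` in the `STEIGUNG_AUS` pattern: without it the pattern would only match the digits after the minus sign and a second run would produce `STEIGUNG_AUS = -STEIGUNG_AUS = -0.15`, which is exactly the kind of corruption the idempotent form avoids.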