Misc changes to the OMV server and useful commands

Made lots of minor changes to the ‘box’ recently while recovering from hip replacement.

Zigbee2mqtt
  • Updated to version 2.0.0, which needed a fix to the configuration.yaml.
  • Still using usbip to reach the zigbee controller on the Pi at 192.xxx.xxx.220.
  • On the OMV server,
    ls -l /dev/serial/by-id
    shows where the controller forwarded from the Pi appears. The Pi runs the usbipd service, and
    lsusb
    is useful there for locating the zigbee controller.
Find files of a certain size

find . -name "*.m4v" -size +200M

Find files by name and move them

sudo find . -name "*.m4v" -exec mv "{}" /media/6f326f9f-b1a4-484c-b6d6-e8768b9eee3e \;

A size test can be added too:

sudo find . -name "*.m4v" -size +200M -exec mv "{}" /media/6f326f9f-b1a4-484c-b6d6-e8768b9eee3e \;

Ports in use

This command is useful:
sudo lsof -i:<port>
where <port> is the port number, e.g. sudo lsof -i:443

Stop mails in crontab

put MAILTO="" in front of the cron entry
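For example, at the top of the crontab (the 5-minute entry is just an illustration of the NextCloud cron mentioned below; the path is hypothetical):

```
# Suppress mail for all entries that follow
MAILTO=""
*/5 * * * * php -f /var/www/nextcloud/cron.php
```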

Stop things writing to the syslog
  1. owlmqtt service – commented out the echo
  2. crontab, as above, for the cron running every 5 mins from NextCloud
  3. removed the ssh check from monit, as this was causing an entry from 127.0.0.1 in the syslog every time it was checked.
Run a command inside a docker container

sudo docker exec -u root -it dashy /bin/bash -c 'yarn validate-config'

/bin/bash might be /bin/sh depending on the container

Look for a string in a folder and the files below it

grep -r -e "string" .

Added to Docker-Compose
  1. New version of bitwarden
  2. portainer
  3. watchtower
  4. vaultwarden – to access old bitwarden database
  5. plex
Reinstalled
  1. beets
OMV
Salt

Needed to move some salt python modules from the installed pyenv to the local python3 libraries used by sudo. Tried to do this properly and gave up after 2 days. Needed to be able to run the omv-salt command.

Removed the k3s service – Kubernetes was a plugin that was installed and removed but left the service behind.

Removed the podman service – Podman was a plugin that was installed and removed but left the service behind.

omv-extras

sudo -u root wget -O - https://github.com/OpenMediaVault-Plugin-Developers/packages/raw/master/install | bash

OMV notifications

Used one of the Zen email addresses after trying SMTP with both Google Mail and Outlook and finding they no longer support simple SMTP authentication!!

OMV – run the update, normally done after a change, from the command line

useful when it fails and you want to ‘watch’ the fail.

sudo omv-rpc -u admin "Config" "applyChanges" "{\"modules\": $(cat /var/lib/openmediavault/dirtymodules.json), \"force\": true}"

php

Tidied up the various php services running. Stupidly removed php-pam, which broke the login for openmediavault and required an OMV remove and install; reinstalling alone did not fix it.

nginx and certbot

Updating certificates

use the command below (-v is verbose and useful)

sudo certbot --nginx -v

Synology and Cloud backup

This exists as a paid service but wanted to see whether it would work for any of the following

Mega
Mediafire
pCloud
Dropbox
Evernote

It turns out that Mega is by far the best but given the age of my Synology box there is no package solution but there is a scripting language.

In the case of the others e.g. pCloud and Mediafire there is nothing but drag and drop.

Homeassistant – Delete old/abandoned refresh tokens from user profiles

I kept on finding there were loads of refresh tokens when logging into HA. So I googled why….. I am not logging out! No surprise there but they are a pain to delete individually and so I found this page.

Delete old/abandoned refresh tokens from user profiles

The tokens are in the .homeassistant/.storage/auth file and held as JSON.

The link explains that something called 'jq' can edit JSON from the command line… neat! However I had trouble getting the script in the link to work and so created a new shell script like this….

#!/bin/sh
authfile=/home/homeassistant/.homeassistant/.storage/auth
tmp="$(tempfile)"
echo "Copying from $authfile to $tmp"
jq --arg s "$(date -d "" +"%Y-%m-%dT%H:%M")" 'del( .data.refresh_tokens[] | select(.last_used_at < $s) )' "$authfile" > "$tmp"
cp "$tmp" "$authfile"
rm "$tmp"

This needs to be run as the homeassistant user to work and homeassistant needs to be stopped as the auth file seems to be loaded into memory when ha is running.

What it does is call jq after setting up the auth file location and a system temp file. --arg passes 's' in as a string with the value $(date -d "" +…), which the filter del(.data….) uses, with $authfile as input. The filter deletes every refresh token whose last-used date is earlier than today, and the output goes to the temp file.

When this is complete the temp file is copied over the authfile and then the temp is deleted.

Works well. Just need to remember to restart home assistant!
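To sanity-check what the jq filter above keeps and drops, the same selection logic can be sketched in Python. This is just an illustration with a fabricated auth structure, not a replacement for the jq one-liner:

```python
from datetime import datetime

def prune_refresh_tokens(auth, cutoff):
    """Keep only refresh tokens last used on or after the cutoff.

    Mirrors the jq filter del(.data.refresh_tokens[] | select(.last_used_at < $s)):
    ISO 8601 timestamps compare correctly as plain strings, and tokens with no
    last_used_at are treated as stale (jq drops those too, as null < string).
    """
    tokens = auth["data"]["refresh_tokens"]
    auth["data"]["refresh_tokens"] = [
        t for t in tokens if (t.get("last_used_at") or "") >= cutoff
    ]
    return auth

# Tiny fabricated auth structure for illustration
sample = {"data": {"refresh_tokens": [
    {"id": "old", "last_used_at": "2019-01-01T00:00"},
    {"id": "new", "last_used_at": "2030-01-01T00:00"},
]}}
cutoff = datetime.now().strftime("%Y-%m-%dT%H:%M")
pruned = prune_refresh_tokens(sample, cutoff)
print([t["id"] for t in pruned["data"]["refresh_tokens"]])  # → ['new']
```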

Installing Graphite on Debian Stretch

Getting Graphite Running

After having installed Icinga2, IcingaWeb and IcingaDirector I decided I wanted some better pretty pictures and graphs.

I have Grafana installed via HomeAssistant and so decided the next point was to install Graphite and get some data into it and then to Grafana for display.

It was not as easy as expected.

First port of call to see how to do this was https://graphite.readthedocs.io/en/latest/index.html which is IMHO not that easy to follow – it is all there but the actual steps are not that clear. Initially I got stuck at Initial Configuration and the Nginx + uWSGI step, gave up, and deleted everything including the virtual environment I had created.

I then decided to start again and read the dependencies a bit more carefully and also looked here https://gist.github.com/tristanbes/4046457 and here https://arpitbhayani.me/blogs/setting-up-graphite-using-nginx-on-ubuntu.

So I started again….

  • No virtual environment
  • Python3
  • Now the dependencies ….
    • Django (I had heard of this but did not know what it was). The tutorial here is great …. https://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html and I followed this and got the sample app running. The main problems were giving the user and group www-data:www-data the right permissions.
    • I also tried to install django-tagging using pip3 and found it was already there, as were these dependencies – pytz and scandir
    • I tried to install fontconfig using pip3 and it failed; with a bit of googling I found this is an apt package: apt-get install -y fontconfig. This was also up to date.
  • Then back to the instructions here https://graphite.readthedocs.io/en/latest/index.html and decided on default layout without a virtual environment.
  • Checked the development headers – all OK now: apt-get install python-dev libcairo2-dev libffi-dev build-essential
  • Installed packages
    • export PYTHONPATH="/opt/graphite/lib/:/opt/graphite/webapp/"
    • pip install --no-binary=:all: https://github.com/graphite-project/whisper/tarball/master
    • pip install --no-binary=:all: https://github.com/graphite-project/carbon/tarball/master
    • pip install --no-binary=:all: https://github.com/graphite-project/graphite-web/tarball/master
  • From the link at arpitbhayani.me I decided to use Postgres as the database and followed his config instructions, so I needed to sudo pip3 install psycopg2, as psycopg2 is a dependency for using Postgres.
  • Then did the Webapp Database setup – this is why Django needed to be installed BEFORE this step, i.e. a dependency
  • Then the steps in Nginx + uWSGI were followed… with a few changes
    • I decided to use a socket for NGINX and so the graphite.ini was different (socket = /opt/graphite/webapp/graphite/graphite.sock) and as I have no virtual environment the virtualenv setting not needed.
  • The Nginx conf file is as follows….
# /etc/nginx/sites-available/graphite.conf
upstream graphite {
    #server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server unix:///opt/graphite/webapp/graphite/graphite.sock; # for a file socket

}

server {
    listen 8080;
   # the domain name it will serve for
    server_name <ip-address>; # substitute your machine's IP address or FQDN
    charset     utf-8;

    location /static/ {
        alias /opt/graphite/webapp/content/;
    }

    location / {
        include /opt/graphite/conf/uwsgi_params;
        uwsgi_pass graphite;
    }
}

Configuring Carbon

  • Firstly copy some files from /opt/graphite/conf for editing (now only at lines 18 and 19 in this link https://gist.github.com/tristanbes/4046457)
    • sudo cp carbon.conf.example carbon.conf
    • sudo cp storage-schemas.conf.example storage-schemas.conf
  • and now do lines 21-56 in the link from tristanbes to set up storage-schemas.conf and storage-aggregation.conf
  • Next follow the instructions on editing carbon.conf in the main graphite documentation, including changing the line-receiver port 2003 to 2013 and the pickle-receiver port 2004 to 2014 in the cache section, and the DESTINATIONS setting in the relay section
  • Now set up a system service for carbon-cache

[Unit]
Description=Graphite Carbon Cache
After=network.target

[Service]
Type=forking
StandardOutput=syslog
StandardError=syslog
ExecStart=/opt/graphite/bin/carbon-cache.py --config=/opt/graphite/conf/carbon.conf --pidfile=/var/run/carbon-cache.pid start
ExecReload=/bin/kill -USR1 $MAINPID
PIDFile=/var/run/carbon-cache.pid

[Install]
WantedBy=multi-user.target

  • enable the service (sudo systemctl enable carbon-cache.service)
  • and start it.
  • Try sending some data in. The examples below use port 2003; use 2013 instead to send straight to the cache (2003 only works via the relay).
  • e.g.
echo "test.count 9 `date +%s`" | nc -q0 127.0.0.1 2003;

This will add one data metric of value 9 into the system. Let's add some more data; this time we loop through values:

for i in 4 6 8 16 2; do echo "test.count $i `date +%s`" | nc -q0 127.0.0.1 2003; sleep 6; done
  • and you should be able to see data in Graphite like this
(figure: test.count graph in Graphite)
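The nc one-liners above use Carbon's plaintext protocol, which is just one metric per line in the form "name value unix-timestamp". A minimal Python sketch of the same thing (host and port default to the values in the examples; send_metric only works when carbon is actually listening):

```python
import socket
import time

def format_metric(path, value, timestamp=None):
    """Build one line of Carbon's plaintext protocol: "path value timestamp"."""
    if timestamp is None:
        timestamp = int(time.time())
    return f"{path} {value} {timestamp}\n".encode()

def send_metric(path, value, host="127.0.0.1", port=2003):
    """Open a TCP connection and send a single metric, like the nc examples."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(format_metric(path, value))

print(format_metric("test.count", 9, 1700000000))  # b'test.count 9 1700000000\n'
```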

Updating LetsEncrypt on OpenMediaVault with a parallel *.conf

I struggled with the LetsEncrypt certificate renewal using the plugin in OMV as I have a separate sites.enabled running in parallel.

To do this in the future I need to login into webmin and disable www.thenaylors.co.uk

then go into OMV and stop SSL and run only on port 80.

Then do the update of the certificates and then

rerun SSL on OMV and then re-enable www.thenaylors.co.uk on webmin.

Updating NextCloud with the Open Updater button does not work

Update: the items below stopped working. It is much easier to navigate to the nextcloud folder and execute

sudo -u www-data php updater/updater.phar

————-

If the “Open Updater” button does not work then try this.

Go to /updater/login.php

Open a terminal window and execute php -r '$password = trim(shell_exec("openssl rand -base64 48"));if(strlen($password) === 64) {$hash = password_hash($password, PASSWORD_DEFAULT) . "\n"; echo "Insert as \"updater.secret\": ".$hash; echo "The plaintext value is: ".$password."\n";}else{echo "Could not execute OpenSSL.\n";};'

Check that updater.secret in nextcloud's config.php is set to the hashed value.

Enter the plaintext value as the password if requested on the login.php page.

It should then work.

It is also possible to run this manually from the nextcloud home directory like this …..

sudo -u www-data php /media/www/nextcloud/updater/updater.phar
sudo -u www-data php occ upgrade
sudo -u www-data php occ maintenance:mode --off

Upgrading Home Assistant

  1. Stop Home Assistant (sudo systemctl stop homeassistant.service)
  2. Open the directory where the virtual environment is located: (cd /srv/homeassistant/)
  3. login as homeassistant…. sudo -u homeassistant -H -s
  4. Activate the virtual environment: source bin/activate
  5. Upgrade Home Assistant: python3 -m pip install --upgrade homeassistant
  6. Start Home Assistant (sudo systemctl start homeassistant.service)
  7. Check status (sudo systemctl status homeassistant.service)
  8. You can now reach the web interface on http://ipaddress:8123/ – the first start may take up to 20 minutes before the web interface is available

Update

  • If python needs updating….
    • As pyenv is installed a download of python and
      • a ./configure --enable-optimizations followed by a make -j2 does not work, as a module called encodings is not found.
      • there is another way ……
    • Create a new venv using the pyenv version of python
      1. get the latest version of python needed from pyenv.
        • pyenv install --list
        • pyenv install <version>
        • pyenv global <version>
      2. stop homeassistant. sudo systemctl stop homeassistant
      3. login as homeassistant
      4. Go to /srv …. cd /srv
      5. sudo mv homeassistant homeassistant_old_<yymmdd>
      6. sudo mkdir homeassistant
      7. change ownership. sudo chown homeassistant:users homeassistant
      8. now install virtual environment.
        • cd /srv/homeassistant
        • python3 -m venv .
        • source bin/activate
      9. Install homeassistant.  python3 -m pip install homeassistant
      10. the system runs a mysql database and this needs to be installed as well: python3 -m pip install mysql
      11. check it has installed. The file 'hass' should be in /srv/homeassistant/bin
      12. restart.  sudo systemctl restart homeassistant

Installing the HADashboard on Home Assistant

After having used Home Assistant for a few weeks I wondered whether there was a "nicer" front end or dashboard. A bit of Googling and I found this…. https://www.home-assistant.io/docs/ecosystem/hadashboard/.

This takes you to the install for Appdaemon here https://appdaemon.readthedocs.io/en/stable/DASHBOARD_INSTALL.html.  This then directs you to the install site here https://www.home-assistant.io/docs/configuration/packages/.

Installing Appdaemon

The instructions are pretty clear to start with.

1.  Install appdaemon using pip3 ($ sudo pip3 install appdaemon)

but then it gets a little confusing (well, to me) as it starts discussing development builds etc. and then goes down to running the app and points at /home/homeassistant/conf.
Well, on checking, I do not have that directory.

So you then need to go back to the 2nd link I posted above, https://appdaemon.readthedocs.io/en/stable/DASHBOARD_INSTALL.html, and look further down: there are some instructions in a different section called Configuration. https://appdaemon.readthedocs.io/en/stable/CONFIGURE.html – follow the instructions there.

  1.  Set up a new directory /home/homeassistant/conf
  2. Create appdaemon.yaml and populate this
  3. Go to Home Assistant and create a Long Lived Access Token.  The instructions on how to do this are on the same page but at the bottom
  4. Update the appdaemon.yaml with this token
  5. Skips over the section on Filters (I guess in the future I may come back to this)
  6. MQTT is added
  7. Test App added

Back to the installation instructions and changes made to get the service starting at boot time.

  • $ sudo nano /etc/systemd/system/appdaemon@appdaemon.service
  • Create the file replacing the user bit %I and the <full path to config directory>
[Unit]
Description=AppDaemon
After=home-assistant@homeassistant.service
[Service]
Type=simple
User=%I
ExecStart=/usr/local/bin/appdaemon -c <full path to config directory>
[Install]
WantedBy=multi-user.target
  • $ sudo systemctl daemon-reload
  • $ sudo systemctl enable appdaemon@appdaemon.service --now
  • $ sudo systemctl start appdaemon@appdaemon.service

Installing the Dashboard

Now back here https://appdaemon.readthedocs.io/en/stable/DASHBOARD_INSTALL.html.

The appdaemon.yaml file needs updating again. This time with

hadashboard:
  dash_url: http://192.168.1.20:5050
  dash_ssl_certificate: /etc/letsencrypt/live/somehost/fullchain.pem
  dash_ssl_key: /etc/letsencrypt/live/somehost/privkey.pem
  dash_password: !secret dash_password
  dashboard_dir: /home/homeassistant/conf/dashboards

Add a new directory /home/homeassistant/conf/dashboards, create a new file hello.dash, and add this to it:

##
## Main arguments, all optional
##
title: Hello Panel
widget_dimensions: [120, 120]
widget_margins: [5, 5]
columns: 8

label:
    widget_type: label
    text: Hello World

layout:
    - label(2x2)

You can now navigate to the url e.g. http://192.168.1.20:5050.

Owl Intuition, logging to Mosquitto and Home Assistant

So thanks to Google again and the Home Assistant wiki I now have the Owl Intuition Solar readings being sent to Mosquitto and used as a series of sensors in Home Assistant.

Starting point was this wiki entry and the entry by jchasey. https://community.home-assistant.io/t/owl-intuition-pv-home-assistant/18157/3.

The python script was amended to take solar reading and not hot_water and heating.

import socket
import struct
import json
from xml.etree import ElementTree as ET

OWL_PORT = 22600
OWL_GROUP = "224.192.32.19"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((OWL_GROUP, OWL_PORT))
mreq = struct.pack("=4sl", socket.inet_aton(OWL_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Defaults so the 'solar' message below can be published even if no
# 'electricity' message has been seen yet
timestamp_value = 0
battery_value = 0.0
current_now = current_day = 0.0
solar_now = solar_day = 0.0

while True:
    # Collect the XML multicast message
    xml, addr = sock.recvfrom(1024)
    # Parse the XML string
    root = ET.fromstring(xml)

    # print (root)

    # Only process those messages we are interested in

    if root.tag == 'electricity':
      timestamp_value = 0
      timestamp = root.find('timestamp')
      if timestamp is not None:
           timestamp_value = int(timestamp.text)
           #print ("time", timestamp_value)
        
      signal_rssi_value = 0
      signal_lqi_value = 0
      signal = root.find('signal')
      if signal is not None:
        signal_rssi_value = int(signal.attrib["rssi"])
        signal_lqi_value = int(signal.attrib["lqi"])
        #print ("rssi", signal_rssi_value)
        #print ("lqi", signal_lqi_value)

      battery_value = 0.0
      battery = root.find('battery')
      if battery is not None:
        battery_value = float(battery.attrib["level"].rstrip("%"))
        #print ("battery value", battery_value)

      for chan in root.iter('chan'):
        #print (chan.attrib)
        
        chanid = chan.get('id')
        #print ("chandid", chanid)
        if chanid == "0":
          current_now = 0.0
          current = chan.find('curr')
          if current is not None:
              current_now = float(current.text)
              #print ("current now", current_now)
          current_day = 0.0
          current = chan.find('day')
          if current is not None:
              current_day = float(current.text)
              #print ("current day", current_day)

        if chanid == "1":
          solar_now = 0.0
          solar = chan.find('curr')
          if solar is not None:
              solar_now = float(solar.text)
              #print ("solar now", solar_now)
          solar_day = 0.0
          solar = chan.find('day')
          if solar is not None:
              solar_day = float(solar.text)
              #print ("solar day", solar_day)

    if root.tag == 'solar':
        solar_exported = 0.0
        solar = root.find('day/exported')
        if solar is not None:
            solar_exported = float(solar.text)
            #print ("solar export", solar_exported)
            print (json.dumps({'type': root.tag, \
                       'timestamp': timestamp_value, \
                       'battery_level': battery_value, \
                       'current_now': current_now, \
                       'current_day': current_day,\
                       'solar_now': solar_now,\
                       'solar_day': solar_day,\
                       'solar_exported': solar_exported}))
            

I then added a shell script to run the python. This was placed into /usr/local/bin with permissions set to 0400.

#!/bin/sh
#
# owl_mcast2mqtt.sh
#
# Publish json output of the owl_mcast2json script to MQTT 
#
export LANG=C
PATH="/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin"

MQTT_HOST="ip.address.0.0"
MQTT_PORT="8883"
MQTT_USER="xxx"
MQTT_PASS="xxx"

echo "OWL Network Multicast 2 MQTT"

python3 -u /usr/owlintuition/owl.py | while read line
do
	MQTT_TOPIC="tele/home/mcast/$(echo "$line" | jq --raw-output '.type')"
	# Publish the json to the appropriate topic
	echo "$line" | mosquitto_pub -h $MQTT_HOST -p $MQTT_PORT -u $MQTT_USER -P $MQTT_PASS -i OWL_MCast -r -l -t $MQTT_TOPIC
done

A service was then created and enabled.

[Unit]
Description=OWL MQTT store
After=suspend.target

[Service]
User=root
Type=oneshot
ExecStart=/usr/local/bin/owl_mcast2mqtt.sh
TimeoutSec=0
StandardOutput=syslog

[Install]
WantedBy=suspend.target

Strangely after a power cut all the above “disappeared”.

Mosquitto needed reinstalling – link here https://www.switchdoc.com/2018/02/tutorial-installing-and-testing-mosquitto-mqtt-on-raspberry-pi/

jq needed installing – link here https://stedolan.github.io/jq/download/

This is a useful reminder for getting scripts working – https://www.raspberrypi.org/forums/viewtopic.php?t=197513

Update to NGINX config for OpenMediaVault stops sites working … the fix.

Tried to access the sites I had set up via https://myurl/xxxx and they were not working today.

Checking /etc/nginx/sites-available I could see that an update had been done on 23 Oct, and further checking showed this seemed to be to the openmediavault-webgui config.

Checking the backup made a few weeks ago showed that the line
listen [::]:80 ipv6only=off;
had been changed to
listen 80;
and
listen [::]:443 ipv6only=off;
changed to
listen 443;
Changing these back seems to fix everything!
Not sure why right now.
The net result of this was that the 2nd server block I had for myurl was being ignored by NGINX.
Looks like I might need to fix this regularly from now on, which is a bit of a pain!
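For reference, a sketch of the restored listen directive in the openmediavault-webgui site file (only the listen lines matter here; the rest of the OMV-generated server block is unchanged, and the 443 line is analogous):

```
server {
    # with ipv6only=off a single wildcard-IPv6 listen also accepts IPv4,
    # which is what lets the parallel sites keep working
    listen [::]:80 ipv6only=off;
    ...
}
```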