I recently stumbled upon an article in the German magazine c’t about visualizing your Fritz!Box’s connection. The solution looked quite boring and outdated, since it used MRTG for the graph creation.
I started searching for a better solution using Grafana, InfluxDB and my Raspberry Pi and found this great blog post. I’ve already explained how to install Grafana and InfluxDB in this post, so I’ll concentrate on the Fritz!Box-related parts:
Start with the installation of fritzcollectd, a collectd plugin that queries the Fritz!Box.
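fritzcollectd is a Python package on PyPI, so the installation should boil down to the following (system-wide, so that collectd’s Python plugin can find it):

sudo pip install fritzcollectd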
Now create a user account on the Fritz!Box for collectd. Go to System > Fritz!Box Users and create a new user with a password and with access from the internet disabled. The important part is to grant the “Fritz!Box settings” permission.
Additionally, make sure that your Fritz!Box is configured to allow connection queries via UPnP. You can configure this under “Home Network > Network > Network Settings”. Select “Allow access for applications” as well as “Status information via UPnP”.
The next part is the installation and configuration of collectd:
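On Raspbian, collectd comes from the package repositories (sudo apt-get install collectd). What follows is only a sketch of the relevant pieces of /etc/collectd/collectd.conf, assuming the Fritz!Box user created above is called fritzbox and InfluxDB runs on the same host; adapt address, credentials and ports to your setup:

<LoadPlugin python>
    Globals true
</LoadPlugin>

<Plugin python>
    Import "fritzcollectd"
    <Module fritzcollectd>
        Address "fritz.box"
        Port 49000
        User "fritzbox"
        Password "your-password"
        Hostname "FritzBox"
    </Module>
</Plugin>

LoadPlugin network
<Plugin network>
    Server "127.0.0.1" "25826"
</Plugin>

On the InfluxDB side, the collectd listener has to be enabled in /etc/influxdb/influxdb.conf so that the metrics end up in the collectd database:

[[collectd]]
  enabled = true
  bind-address = ":25826"
  database = "collectd"
  typesdb = "/usr/share/collectd/types.db"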
Log in to your Grafana installation and configure a new data source. Make sure to set the collectd database. If you’re using credentials for InfluxDB, you can add them now; if you’re not using authentication, leave the “With credentials” checkbox disabled.
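For a local setup the relevant fields boil down to something like this (assuming InfluxDB’s default port and the collectd listener configured above):

URL: http://localhost:8086
Database: collectd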
Check if your configuration is working by clicking on “Save & Test”.
If everything worked, you can proceed to importing the Fritz!Box dashboard from the Grafana.com dashboard repository. The ID is 713. Make sure to select the right InfluxDB data source during the import setup.
After clicking on Import, you should be able to see your new dashboard. It might take a few minutes or hours until you’ve gathered enough data to properly display the graphs.
Be aware though that if you start gathering this much data, you might end up with “insufficient memory” errors. You might want to tweak your InfluxDB settings accordingly; more on that below.
A few days ago I noticed that my InfluxDB installation wasn’t working properly: the server was crashing constantly.
I’ve checked the logs using
sudo journalctl -u influxdb -b
and found this:
May 12 23:12:18 pi3plus influxd: ts=2019-05-12T21:12:18.440902Z lvl=info msg="Opened file" log_id=0FNU47~W000 engine=tsm1 service=filestore path=/mnt/databases/influxdb/data/_internal/monitor/342/000000020-000000002.tsm id=0 duration=14
May 12 23:12:18 pi3plus influxd: runtime: out of memory: cannot allocate 2121015296-byte block (16056320 in use)
May 12 23:12:18 pi3plus influxd: fatal error: out of memory
May 12 23:12:18 pi3plus influxd: runtime stack:
May 12 23:12:18 pi3plus influxd: runtime.throw(0xbc70be, 0xd)
May 12 23:12:18 pi3plus influxd: /usr/local/go/src/runtime/panic.go:608 +0x5c
May 12 23:12:18 pi3plus influxd: runtime.largeAlloc(0x7e6c15dd, 0x60101, 0x76f91a20)
May 12 23:12:18 pi3plus influxd: /usr/local/go/src/runtime/malloc.go:1021 +0x120
May 12 23:12:18 pi3plus influxd: runtime.mallocgc.func1()
May 12 23:12:18 pi3plus influxd: /usr/local/go/src/runtime/malloc.go:914 +0x38
May 12 23:12:18 pi3plus influxd: runtime.systemstack(0x1c4e3c0)
May 12 23:12:18 pi3plus influxd: /usr/local/go/src/runtime/asm_arm.s:354 +0x84
May 12 23:12:18 pi3plus influxd: runtime.mstart()
May 12 23:12:18 pi3plus influxd: /usr/local/go/src/runtime/proc.go:1229
May 12 23:12:18 pi3plus influxd: goroutine 27 [running]:
May 12 23:12:18 pi3plus influxd: runtime.systemstack_switch()
May 12 23:12:18 pi3plus systemd: influxdb.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
May 12 23:12:18 pi3plus systemd: influxdb.service: Unit entered failed state.
May 12 23:12:18 pi3plus systemd: influxdb.service: Failed with result 'exit-code'.
This happened because I had recently added statistics from my Fritz!Box regarding my DSL line speed. These statistics have a high cadence, which means that many entries are created in InfluxDB in a short amount of time. InfluxDB tries to build an in-memory index for these entries and is overwhelmed by the mass of data.
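If you run into the same problem, the [data] section of /etc/influxdb/influxdb.conf has a few knobs worth looking at on a memory-constrained Raspberry Pi. This is only a sketch; the values are starting points, not tested recommendations:

[data]
  # use the disk-based TSI index instead of the default in-memory index
  index-version = "tsi1"
  # flush the in-memory write cache to disk earlier than the 1g default
  cache-max-memory-size = "256m"
  cache-snapshot-memory-size = "16m"

Note that existing shards keep their in-memory index until they are rebuilt with influx_inspect buildtsi.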
Unfortunately, there’s no way to mass-delete old tweets you’ve posted on Twitter. There are some online services which promise to delete your data for you, but since you have to grant them access to your account, I had a bad feeling about it and wanted to do things on my own.
Last year I tried a Windows-only tool called Twitter Archive Eraser. Back then it was a GitHub project which you could compile locally and run against your account. It’s now free only for a limited number of tweets and only works with tweets that are no older than two years. To remove these restrictions you have to pay a small amount for a license.
You’ll need to download your complete tweet archive for the deletion process. Once you’ve got the data from Twitter, you might as well write a little script which deletes the old tweets for you using their tweet IDs.
Luckily, I found this blog post by Kris Shaffer. He explains how he deleted a large number of his tweets using Python, so I started to try this myself.
New approach using JSON Twitter archives
This is the currently working approach (December 2020). I’ve updated the Python script accordingly and put it into its own Git repository on GitHub.
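Since the repository isn’t embedded here, this is a minimal sketch of what the JSON-based approach boils down to, assuming tweepy (pip install tweepy) and the data/tweet.js file from a current archive; the keys, file path and cutoff date are placeholders:

import json
from datetime import datetime, timezone

import tweepy

CONSUMER_KEY = "..."            # from your Twitter developer app
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_TOKEN_SECRET = "..."

CUTOFF = datetime(2019, 1, 1, tzinfo=timezone.utc)  # delete everything older

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True)

# data/tweet.js is JavaScript, not JSON: strip the
# "window.YTD.tweet.part0 = " prefix before parsing.
with open("data/tweet.js", encoding="utf-8") as f:
    raw = f.read()
tweets = json.loads(raw[raw.index("["):])

for entry in tweets:
    tweet = entry["tweet"]
    created = datetime.strptime(tweet["created_at"], "%a %b %d %H:%M:%S %z %Y")
    if created < CUTOFF:
        try:
            api.destroy_status(tweet["id"])
            print("deleted", tweet["id"])
        except tweepy.TweepError as err:  # TweepError in tweepy 3.x
            print("skipped", tweet["id"], err)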
Old approach using CSV Twitter archives
There was also a different blog post which explained the process in a more beginner-friendly way. However, I ran into problems with malformed characters, so I decided to post the code I used as a Gist on GitHub; a rough sketch of that flow follows after the list below:
To use this I did the following things:
Requested and downloaded my account data from Twitter
Created a Twitter developer account
Created a new app to get API keys and access tokens
Installed Python 3 on my Mac with Homebrew: brew install python3
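The Gist itself isn’t embedded here either, but the old flow was roughly this, assuming the tweets.csv from a pre-2019 archive (column names, keys and cutoff are placeholders; reading with an explicit encoding avoids the malformed-character problems mentioned above):

import csv
from datetime import datetime, timezone

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

CUTOFF = datetime(2019, 1, 1, tzinfo=timezone.utc)

with open("tweets.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        created = datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S %z")
        if created < CUTOFF:
            api.destroy_status(row["tweet_id"])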
I’m using InfluxDB on my Raspberry Pi in combination with an NFS mount. The NFS mount lives on my Synology NAS and stores InfluxDB’s database files. The reason for this setup is my fear that the SD card wouldn’t survive the many read/write cycles caused by a database writing to it.
The shared folder on my Synology is configured to be accessible by various IPs in my network.
Unmount the existing NFS share and remove or comment out the corresponding line in your /etc/fstab, so that it doesn’t conflict with autofs. Then restart autofs with
sudo service autofs restart
Now check the content of your mount point, e.g. with
ls /mnt/databases
Autofs should now mount the NFS share automatically. The first access might take a moment, which is a good sign that the mount is being loaded on demand. You can also verify with
mount
that your NFS share is mounted to e.g. /mnt/databases. If you restart now, InfluxDB should be happy on reboot: when it tries to start, autofs will notice the access to the mount point and mount the NFS share before InfluxDB finishes starting up.
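For reference, the autofs configuration behind this looks roughly like the following; the Synology hostname, export path and mount options are assumptions you’ll have to adapt:

# /etc/auto.master: hand everything below /mnt to the map file
/mnt /etc/auto.nfs --timeout=60

# /etc/auto.nfs: mount the Synology export on demand as /mnt/databases
databases -fstype=nfs,rw synology.local:/volume1/databases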