Category: Technology
You are viewing all posts from this category, beginning with the most recent.
🛜Set up QoS with EdgeOS (Ubiquiti)
Setting up QoS on a Ubiquiti router running EdgeOS is pretty straightforward. In this post I describe how to set it up, in two ways:
- A configuration with groups, which is slower but easier to manage
- A configuration without groups, which is faster and gives higher throughput
Configuration with groups
This configuration with a group is nice if you want to easily add or remove nodes from QoS. You can do this by logging in to the webGUI of your Ubiquiti router and conveniently adding or removing an IP. In the script below that group is called “QOS_High_Prio_Nodes”, with one default IP added, 192.168.130.1.
The downside of this approach is that because we are marking the traffic, the router gets quite busy, so the throughput drops significantly.
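If you prefer the CLI over the webGUI, the same group can also be maintained from configure mode on the router. A minimal sketch; the extra address 192.168.130.23 is just a made-up example:
configure
# add a node to the high-priority group
set firewall group address-group QOS_High_Prio_Nodes address 192.168.130.23
# or remove it again
delete firewall group address-group QOS_High_Prio_Nodes address 192.168.130.23
commit
save
exit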
- Create an address group for high-priority nodes.
This group (QOS_High_Prio_Nodes) can be easily managed from the webGUI.
- Mark traffic via a firewall rule.
This marking makes it possible to use the group, as matching a group directly in the shaper is not possible. The downside is that the throughput is significantly lower than without marking.
- Define a traffic shaper policy for download traffic.
Here you set the true bandwidth of your broadband connection with your ISP.
- Class 10: High priority for traffic from the nodes in QOS_High_Prio_Nodes (marked by the firewall rule).
- Class 20: Lower priority for traffic from VLAN120 (192.168.120.0/24).
- Class 30: Lower priority for traffic from VLAN200 (192.168.200.0/24).
In this example three classes are set, each with its own bandwidth quota and priority.
- Default class: All other traffic is shaped at the lowest priority. Make sure the sum of all guaranteed bandwidth does not exceed 100%.
- Apply the policy to the WAN interface.
Since I own a Ubiquiti EdgeRouter ER-12, the default WAN port is eth9. If your WAN is on a different port, adjust accordingly.
Script 1
configure
# 1. Create an address group for all high priority nodes
set firewall group address-group QOS_High_Prio_Nodes address 192.168.130.1
set firewall group address-group QOS_High_Prio_Nodes description "Address group for nodes with high priority"
# 2. Mark traffic via a firewall rule
set firewall modify MARK_QOS_High_Prio_Nodes rule 10 action modify
set firewall modify MARK_QOS_High_Prio_Nodes rule 10 modify mark 10
set firewall modify MARK_QOS_High_Prio_Nodes rule 10 source group address-group QOS_High_Prio_Nodes
set firewall modify MARK_QOS_High_Prio_Nodes rule 10 description "Mark traffic for QOS_High_Prio_Nodes"
# 3. Define a traffic shaper policy for download traffic
set traffic-policy shaper DOWNLOAD_POLICY bandwidth 910mbit
set traffic-policy shaper DOWNLOAD_POLICY description "QoS policy for download traffic, total bandwidth 910 Mbps"
# 4. Class 10: High priority for traffic marked by MARK_QOS_High_Prio_Nodes (mark 10)
set traffic-policy shaper DOWNLOAD_POLICY class 10 bandwidth 5%
set traffic-policy shaper DOWNLOAD_POLICY class 10 ceiling 100%
set traffic-policy shaper DOWNLOAD_POLICY class 10 priority 7
set traffic-policy shaper DOWNLOAD_POLICY class 10 queue-type fair-queue
set traffic-policy shaper DOWNLOAD_POLICY class 10 match VLAN30_QOS_HIGH_PRIORITY mark 10
set traffic-policy shaper DOWNLOAD_POLICY class 10 description "High priority (5% guaranteed) for PRIO nodes"
# 5. Class 20: Lower priority for VLAN120 traffic (192.168.120.0/24)
# If VLAN120 does not exist in your network, remove this class.
set traffic-policy shaper DOWNLOAD_POLICY class 20 bandwidth 20%
set traffic-policy shaper DOWNLOAD_POLICY class 20 ceiling 100%
set traffic-policy shaper DOWNLOAD_POLICY class 20 priority 4
set traffic-policy shaper DOWNLOAD_POLICY class 20 queue-type fair-queue
set traffic-policy shaper DOWNLOAD_POLICY class 20 match VLAN20_PRIORITY ip source address 192.168.120.0/24
set traffic-policy shaper DOWNLOAD_POLICY class 20 description "Lower priority (20% guaranteed) for VLAN120 traffic"
# 6. Class 30: Lower priority for VLAN200 traffic (192.168.200.0/24)
set traffic-policy shaper DOWNLOAD_POLICY class 30 bandwidth 65%
set traffic-policy shaper DOWNLOAD_POLICY class 30 ceiling 100%
set traffic-policy shaper DOWNLOAD_POLICY class 30 priority 3
set traffic-policy shaper DOWNLOAD_POLICY class 30 queue-type fair-queue
set traffic-policy shaper DOWNLOAD_POLICY class 30 match VLAN178_PRIORITY ip source address 192.168.200.0/24
set traffic-policy shaper DOWNLOAD_POLICY class 30 description "Lower priority (65% guaranteed) for VLAN200 traffic"
# 7. Default class: Other traffic gets standard treatment
set traffic-policy shaper DOWNLOAD_POLICY default bandwidth 10%
set traffic-policy shaper DOWNLOAD_POLICY default ceiling 100%
set traffic-policy shaper DOWNLOAD_POLICY default priority 1
set traffic-policy shaper DOWNLOAD_POLICY default queue-type fair-queue
# 8. Apply the policy to the WAN interface:
# - eth9: Outbound traffic to the internet
set interfaces ethernet eth9 traffic-policy out DOWNLOAD_POLICY
set interfaces ethernet eth9 description "WAN interface with QoS policy applied to outbound traffic"
commit
save
exit
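After the commit it is worth checking what actually ended up in the configuration. In configure mode, show prints the configuration under a given path, so something along these lines gives a quick sanity check (a sketch, not a full verification procedure):
configure
show firewall group QOS_High_Prio_Nodes
show firewall modify MARK_QOS_High_Prio_Nodes
show traffic-policy shaper DOWNLOAD_POLICY
show interfaces ethernet eth9 traffic-policy
exit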
Configuration without groups, but with higher throughput
This configuration lacks groups, which would have made it more manageable, in the sense that you could add and remove nodes (IPs) to and from an address group in the webGUI of your Ubiquiti router. That would have been very convenient. But more on that later.
The great benefit of dropping the convenience of traffic marking is that you get the maximum possible throughput of the router.
- Define a traffic shaping policy for download traffic.
Here you set the true bandwidth of your broadband connection with your ISP.
- Class 10: High priority for VLAN30 traffic (192.168.130.1/32).
This is the node with the highest priority for QoS.
- Class 20: Lower priority for VLAN120 traffic (192.168.120.0/24).
- Class 30: Lower priority for VLAN140 traffic (192.168.140.0/24).
- Default class: All other traffic is shaped at the lowest priority. Make sure the sum of all guaranteed bandwidth does not exceed 100%.
- Apply the policy to the WAN interface eth9.
Since I own a Ubiquiti EdgeRouter ER-12, the default WAN port is eth9. If your WAN is on a different port, adjust accordingly.
Script 2
configure
# 1. Define a traffic shaping policy for download traffic
set traffic-policy shaper DOWNLOAD_POLICY bandwidth 910mbit
set traffic-policy shaper DOWNLOAD_POLICY description "QoS policy for download traffic, total bandwidth 910 Mbps"
# 2. Class 10: High priority for (192.168.130.1/32)
set traffic-policy shaper DOWNLOAD_POLICY class 10 bandwidth 5%
set traffic-policy shaper DOWNLOAD_POLICY class 10 ceiling 100%
set traffic-policy shaper DOWNLOAD_POLICY class 10 priority 7
set traffic-policy shaper DOWNLOAD_POLICY class 10 queue-type fair-queue
set traffic-policy shaper DOWNLOAD_POLICY class 10 match VLAN30_QOS_HIGH_PRIORITY ip source address 192.168.130.1/32
set traffic-policy shaper DOWNLOAD_POLICY class 10 description "High priority (5% guaranteed) for TV"
# 3. Class 20: Lower priority for VLAN120 traffic (192.168.120.0/24)
# If VLAN120 does not exist in your network, remove this class.
set traffic-policy shaper DOWNLOAD_POLICY class 20 bandwidth 20%
set traffic-policy shaper DOWNLOAD_POLICY class 20 ceiling 100%
set traffic-policy shaper DOWNLOAD_POLICY class 20 priority 4
set traffic-policy shaper DOWNLOAD_POLICY class 20 queue-type fair-queue
set traffic-policy shaper DOWNLOAD_POLICY class 20 match VLAN20_PRIORITY ip source address 192.168.120.0/24
set traffic-policy shaper DOWNLOAD_POLICY class 20 description "Lower priority (20% guaranteed) for VLAN120 traffic"
# 4. Class 30: Lower priority for VLAN140 traffic (192.168.140.0/24)
# If VLAN140 does not exist in your network, remove this class.
set traffic-policy shaper DOWNLOAD_POLICY class 30 bandwidth 65%
set traffic-policy shaper DOWNLOAD_POLICY class 30 ceiling 100%
set traffic-policy shaper DOWNLOAD_POLICY class 30 priority 3
set traffic-policy shaper DOWNLOAD_POLICY class 30 queue-type fair-queue
set traffic-policy shaper DOWNLOAD_POLICY class 30 match VLAN178_PRIORITY ip source address 192.168.140.0/24
set traffic-policy shaper DOWNLOAD_POLICY class 30 description "Lower priority (65% guaranteed) for VLAN140 traffic"
# 5. Default class: Other traffic gets standard treatment
set traffic-policy shaper DOWNLOAD_POLICY default bandwidth 10%
set traffic-policy shaper DOWNLOAD_POLICY default ceiling 100%
set traffic-policy shaper DOWNLOAD_POLICY default priority 1
set traffic-policy shaper DOWNLOAD_POLICY default queue-type fair-queue
# 6. Apply the policy to the WAN interface:
# - eth9: outbound internet-facing interface
set interfaces ethernet eth9 traffic-policy out DOWNLOAD_POLICY
set interfaces ethernet eth9 description "WAN interface with QoS policy applied to outbound traffic"
commit
save
exit
📝🎮📺
📺Getting your Emby library scanned for new content in four ways.
Running Emby and Radarr/Sonarr on different servers or in different containers can make getting your content scanned a pain in the ass.
But it is much easier than you think once you know what’s going on inside the services. The TL;DR is: keep your content path the same in Emby and in Sonarr/Radarr. If you don’t keep them the same, you run into trouble.
First, I show two options for when you can’t keep the paths the same (say you have a Linux node running Emby and a Windows (don’t mention it) node for Sonarr/Radarr). It’s obvious the paths will never be the same, as Linux will give you a path like below.
/home/user/video/movies/Your Old Movie (1921)/Your.Old.Movie.1921.Remux-2160p.mkv
Whereas Windows (sh*t, have to mention it again) will give you a path like
D:\data\video\movies\Your Old Movie (1921)\Your.Old.Movie.1921.Remux-2160p.mkv
The slashes alone (Linux uses a forward slash / and Windows a backslash \) already make this a problem.
Then I will show you two options for when you can keep the paths the same. This can be done when:
- You run all your software on one node, so the content paths in Emby and Sonarr/Radarr are the same (obviously).
- You run multiple nodes with the SAME OS and run the services (Emby, Sonarr, Radarr) in Docker, so you can keep the paths the same inside your Docker containers.
- You run multiple nodes with THE SAME OS (for Emby, Sonarr and Radarr) and store the content on the same network share (a third node), so Radarr/Sonarr write to and Emby scans the content on that same third node.
Option 1 different paths (you can’t keep the paths the same)
This method triggers a complete library scan. This works fine but is suboptimal: when your library is large, or you trigger the scan frequently, it will have an impact on Emby.
curl -X POST "${EMBY_HOST}/Emby/Library/Refresh?api_key=${EMBY_APIKEY}" --data ""
Where:
EMBY_HOST = http://192.168.1.1:8096 or https://www.yourdomain.com:443
EMBY_APIKEY = XXXXXXYYYYYZZZZZZ
Option 2 different paths (you can’t keep the paths the same)
This method triggers a scan of a specific library in your collection (EMBY_PARENT_ID). The advantage over the previous option is that you only trigger one collection in the library. The downside is that you have to fiddle around with the EMBY_PARENT_ID. It can be obtained from the URL when you open the collection in Emby.
curl -X POST "${EMBY_URL}/emby/Items/${EMBY_PARENT_ID}/Refresh?Recursive=true&MetadataRefreshMode=Default&ImageRefreshMode=Default&ReplaceAllMetadata=false&ReplaceAllImages=false&api_key=${EMBY_APIKEY}" -H "accept: */*" -d ""
Where:
EMBY_URL = http://192.168.1.1:8096 or https://www.yourdomain.com:443
EMBY_APIKEY = XXXXXXYYYYYZZZZZZ
EMBY_PARENT_ID = 12
Option 3 Same paths (you can keep the path in *arr equal to the path in EMBY)
This is the simplest option: just configure the default Emby connection in the Connect section of Sonarr/Radarr.
Option 4 Same paths (you can keep the path in *arr equal to the path in EMBY)
This method triggers a scan for a specific path in your library. The advantage over the previous option is that you can add your own logic to the scan. The other advantage is that you can fill in Path and UpdateType from the environment variables in Linux (or your container).
curl -X POST "${EMBY_URL}/emby/Library/Media/Updated?api_key=${EMBY_APIKEY}" -H "accept: */*" -H "Content-Type: application/json" -d "{\"Updates\":[{\"Path\":\"${Path}\",\"UpdateType\":\"${UpdateType}\"}]}"
Where:
EMBY_URL = http://192.168.1.1:8096 or https://www.yourdomain.com:443
EMBY_APIKEY = XXXXXXYYYYYZZZZZZ
Path = /movies/Old Movie, The (1921)/The.Old.Movie.1921.Bluray-2160p.mkv
UpdateType = [ Created | Modified | Deleted ] (optional)
Here are the example scripts I use for Radarr and Sonarr. I like separate scripts for Radarr and Sonarr, but you can easily merge them and create one script for both.
Radarr
#!/bin/bash
NOW=$(date +"%d-%m-%Y %H:%M")
LOG_FILE="/logging/radarr/emby_scan.txt"
TMP_FILE="/tmp/tmp_emby_radarr.txt"
DL_FILE="/scripts/dl_radarr.txt"
DEL_FILE="/scripts/del_radarr.txt"
REN_FILE="/scripts/ren_radarr.txt"
EMBY_URL="https://emby.yourdomain.com"
EMBY_RADARR_APIKEY="xxxyyyyyzzz"
if [ "${radarr_eventtype}" != "" ]; then
if [ "${radarr_eventtype}" == "ApplicationUpdate" ] || [ "${radarr_eventtype}" == "MovieAdded" ] || [ "${radarr_eventtype}" == "Grab" ] || [ "${radarr_eventtype}" == "HealthIssue" ] || [ "${radarr_eventtype}" == "Test" ]; then
(echo "${NOW} - [Emby Library Scan] Radarr Event Type is ${radarr_eventtype}, exiting."; cat ${LOG_FILE}) > ${TMP_FILE}; mv ${TMP_FILE} ${LOG_FILE}
exit
fi
(echo "${NOW} - [Emby Library Scan] Radarr Event Type is ${radarr_eventtype}, updating Emby Library for ${radarr_movie_title}."; cat ${LOG_FILE}) > ${TMP_FILE}; mv ${TMP_FILE} ${LOG_FILE}
if [ "$radarr_eventtype" == "Download" ]; then
echo "${radarr_movie_title} (${radarr_movie_year})" >> ${DL_FILE}
UpdateType="Created"
Path="${radarr_movie_path}"
fi
if [ "${radarr_eventtype}" == "MovieDelete" ]; then
echo "${radarr_movie_title} (${radarr_movie_year})" >> ${DEL_FILE}
UpdateType="Deleted"
Path="${radarr_movie_path}"
fi
if [ "$radarr_eventtype" == "Rename" ]; then
echo "${radarr_movie_title} (${radarr_movie_year})" >> ${REN_FILE}
UpdateType="Modified"
Path="${radarr_movie_path}"
fi
curl -X POST "${EMBY_URL}/emby/Library/Media/Updated?api_key=${EMBY_RADARR_APIKEY}" -H "accept: */*" -H "Content-Type: application/json" -d "{\"Updates\":[{\"Path\":\"${Path}\",\"UpdateType\":\"${UpdateType}\"}]}"
else
(echo "${NOW} - [Emby Library Scan] Radarr Event Type is empty."; cat ${LOG_FILE}) > ${TMP_FILE}; mv ${TMP_FILE} ${LOG_FILE}
fi
# write a status file with date of last run. Helps troubleshoot that cron task is running.
echo "$(basename $0) last run was at $(date)" > /logging/radarr/_$(basename $0)_lastrun.txt
Sonarr
#!/bin/bash
NOW=$(date +"%d-%m-%Y %H:%M")
LOG_FILE="/logging/sonarr/emby_scan.txt"
TMP_FILE="/tmp/tmp_emby_sonarr.txt"
DL_FILE="/scripts/dl_sonarr.txt"
DEL_FILE="/scripts/del_sonarr.txt"
REN_FILE="/scripts/ren_sonarr.txt"
EMBY_URL="https://emby.yourdomain.com"
EMBY_SONARR_APIKEY="zzzzzyyyyyxxxxx"
if [ "${sonarr_eventtype}" != "" ]; then
if [ "${sonarr_eventtype}" == "ApplicationUpdate" ] || [ "${sonarr_eventtype}" == "Grab" ] || [ "${sonarr_eventtype}" == "HealthIssue" ] || [ "${sonarr_eventtype}" == "Test" ]; then
(echo "${NOW} - [Emby Library Scan] Sonarr Event Type is ${sonarr_eventtype}, exiting."; cat ${LOG_FILE}) > ${TMP_FILE}; mv ${TMP_FILE} ${LOG_FILE}
exit
fi
(echo "${NOW} - [Emby Library Scan] Sonarr Event Type is ${sonarr_eventtype}, updating Emby Library for ${sonarr_series_title} - ${sonarr_episodefile_episodetitles}."; cat ${LOG_FILE}) > ${TMP_FILE}; mv ${TMP_FILE} ${LOG_FILE}
if [ "$sonarr_eventtype" == "Download" ]; then
echo "${sonarr_series_title} - ${sonarr_episodefile_episodetitles}" >> ${DL_FILE}
UpdateType="Created"
Path="${sonarr_episodefile_path}"
fi
if [ "$sonarr_eventtype" == "EpisodeFileDelete" ]; then
echo "${sonarr_series_title} - ${sonarr_episodefile_episodetitles}" >> ${DEL_FILE}
UpdateType="Deleted"
Path="${sonarr_episodefile_path}"
fi
if [ "$sonarr_eventtype" == "SeriesDelete" ]; then
echo "${sonarr_series_title}" >> ${DEL_FILE}
UpdateType="Deleted"
Path="${sonarr_series_path}"
fi
if [ "$sonarr_eventtype" == "Rename" ]; then
echo "${sonarr_series_title}" >> ${REN_FILE}
UpdateType="Modified"
Path="${sonarr_series_path}"
fi
curl -X POST "${EMBY_URL}/emby/Library/Media/Updated?api_key=${EMBY_SONARR_APIKEY}" -H "accept: */*" -H "Content-Type: application/json" -d "{\"Updates\":[{\"Path\":\"${Path}\",\"UpdateType\":\"${UpdateType}\"}]}"
else
(echo "${NOW} - [Emby Library Scan] Sonarr Event Type is empty."; cat ${LOG_FILE}) > ${TMP_FILE}; mv ${TMP_FILE} ${LOG_FILE}
fi
# write a status file with date of last run. Helps troubleshoot that cron task is running.
echo "$(basename $0) last run was at $(date)" > /logging/sonarr/_$(basename $0)_lastrun.txt
These scripts also maintain lists of the content names, so they can be used in reporting or notification. You can, of course, strip these.
I run these scripts in the Connect section in Radarr/Sonarr.
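You can also smoke-test the Radarr script by hand, outside of Radarr, by exporting the environment variables it reads. The script path and the movie values below are made-up examples; adjust them to your setup (and make sure the log directories exist):
chmod +x /scripts/emby_scan_radarr.sh
export radarr_eventtype="Download"
export radarr_movie_title="Your Old Movie"
export radarr_movie_year="1921"
export radarr_movie_path="/home/user/video/movies/Your Old Movie (1921)"
/scripts/emby_scan_radarr.sh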
📺🎥📽️
👨🏽‍💻Choosing your Python editing weapon
Well, a few days have passed since I got a micro:bit… I started out with MakeCode to play around with it, but the goal was Python. For this I switched to the Mu editor at first. A great editor… I like the check and tidy functions. But I missed the simulator I had in MakeCode. Of course you can (and must) run your code on the physical micro:bit, but a simulator is great for testing and trying little pieces of code. So I accepted the fact that Mu and Python coding didn’t come with a simulator.
Then I picked up an “old” tool from the shed on my MacBook, Visual Studio Code. I had started to use it again to edit shell scripts because Brackets announced it would stop being supported this coming summer. Why not pop a Python script in there? The editor offered to install Python extensions, and I started exploring the nets of inter to see what was possible with VS Code.
And I came across Device Simulator Express, an extension for VScode. Just the tool I needed to have my micro:bit simulated in VScode. Just open the extensions tab in VScode and look for “Device Simulator Express” and install the beast.
There is one pitfall in installing this extension. After installing it and pressing shift-command-P to open the command palette, enter “Device Simulator Express: New File”. This will install some more dependencies, and you might run into the problem that you get the message “Device Simulator Express: Install Extension Dependencies”, telling you the install failed.
When you press shift-command-P and give the command “Device Simulator Express: Install Extension Dependencies” as instructed, you will find that the dependencies never get installed and you end up with a recurring error. This seems to have something to do with Python 3.9. You can fix it as follows:
On Python 3.9:
Edit the file “/Users/<user>/.vscode/extensions/ms-python.devicesimulatorexpress-2020.0.36321/out/requirements.txt” and change the line
Pillow==7.0.0
to
Pillow==8.1.0
Restart VS Code, press shift-command-P and give the command “Device Simulator Express: Install Extension Dependencies” again. Now the dependencies get installed nicely.
What I additionally installed was a linter. Shift-command-X opens the extensions pane; enter “flake8” in the search field. Sometimes you need to restart VS Code to force the install popup for flake8. Now you have error highlighting in the editor as you go.
And I also installed a code formatter for Python. Shift-command-X opens the extensions pane; enter “autopep8” in the search field. Press shift-option-F to start the formatter when your cursor is on a line that needs formatting. This little baby formats your code: cuts the extra whitespace, corrects the indenting, all kinds of stuff.
I stopped using Windows at home more than 11 years ago and left everything from Microsoft alone, switching to Apple and getting the complete fruit basket with all its products. But VS Code has surprised me, and it’s gonna stay for Python editing… 📝🖋️
👨🏽‍💻Starbit Commander II
Well, on my quest to learn python, I poked and peeked around at the web of inters. And came across some nice sites about the micro:bits, projects and coding.
One site inspired me to create a python version of Starbit Commander. This site is home to many projects of Derek Graham. Next to Micro:bit bits, he has many other things too on his site.
But why a remake of Starbit Commander? Derek has this pleasant example, “Tiny Astroids”, which resembles Starbit Commander a lot. And he created it in Python… so my Starbit Commander should get a Python version too… In this little gem, Derek created a really, really pleasant piece of code that animates an explosion on the 5×5 LED display. I loved it. And although I wanted to create Starbit from scratch in OO, I really wanted to adopt this pleasant little explosion. So I contacted Derek via Twitter and asked if he agreed to me using his explosion code. We had a short but very nice chat on Twitter, and he agreed for the code to be reused in Starbit Commander. Thanks, Derek!
Well, I wanted to test and learn more Python, and also OO in Python. So I decided to try to make Starbit Commander in an OO style. I have no experience in this, but I think (as far as I can tell) this is an OO version of the game now. Derek mentioned that on micro:bit V1s, coding OO would make you run into out-of-memory errors. I haven’t seen them on the micro:bit V2 with Starbit Commander. (Now I’m worried whether I did a proper OO coding style ;-0)
I differed a bit from the original Starbit code. While the MakeCode version has power-ups to collect, in this Python version I decided to skip that and make the astroid field a bit more challenging by starting slow and easy and ending fast and astroid-crowded, giving you more and more bonus score along the way.
This code can be found here. And watch that explosion… Thanks again, Derek Graham! 🎮🖖🏻🚀
👨🏽‍💻Micro:bit Fireflies
My first attempt to write some (micro)Python code: Fireflies. The LEDs simulate fireflies in the air…
By the way, I found that display.get_pixel does not return the correct value for the brightness of an LED on a micro:bit. I created the function fixPixelBug to correct this behavior in the code.
from microbit import *
import random
# This function fixes the "display.get_pixel" bug:
# a fully bright LED does not return a "9" but a "255".
def fixPixelBug(brightness):
if brightness == 4:
return 3
elif brightness == 8:
return 4
elif brightness == 16:
return 5
elif brightness == 32:
return 6
elif brightness == 64:
return 7
elif brightness == 128:
return 8
elif brightness == 255:
return 9
else:
return brightness
while True:
sleep(50)
brightness = random.randint(1, 9)
x = random.randint(0, 4)
y = random.randint(0, 4)
if display.get_pixel(x, y) == 0:
display.set_pixel(x, y, brightness)
for fireflies in range(0, 5):
x = random.randint(0, 4)
y = random.randint(0, 4)
if fixPixelBug(display.get_pixel(x, y)) > 0:
display.set_pixel(x, y, fixPixelBug(display.get_pixel(x, y)) - 1)
Code here 📝✏️
📟How do you find the RSS Feed of a website (if it exists)
Summary
This article describes various methods to find RSS feeds on websites. For WordPress, you append /feed to the URL to find the RSS feed, while for Tumblr and Medium, /rss and /feed/ are added respectively. For a Blogger site, the string at the end of the URL is longer, feeds/posts/default. YouTube channel pages also act as RSS feeds. Additionally, you can extract the RSS feed from the page source by searching for ‘Application/rss’ or ‘Atom’ in the source code. Safari users can use an app that adds a ready-to-use RSS button to their browser. Lastly, ‘educated guessing’ is a technique where you add /feed/ or /rss/ to the domain in the hope that one of them leads you to the correct feed.
Comments
Method 2 is the most practical one in my experience.
Content
Method 1 – Investigating the CMS Used
- To access a WordPress RSS feed, simply append /feed to the site’s URL. So, if the WordPress site’s URL is, for example, example.com, to find the RSS feed, you would add /feed to obtain example.com/feed. When used in an RSS reader, this URL would allow you to view the site’s content in feed form.
- In the case of sites hosted on Tumblr, the method is slightly different but still very straightforward. You need to add /rss to the end of the Tumblr site’s URL. So, if the Tumblr address is example.tumblr.com, the RSS feed can be found at example.tumblr.com/rss.
- For blogs hosted on Blogger, you need to add a slightly longer string to the end of the URL: feeds/posts/default. So, for a Blogger site at the address example.blogspot.com, the RSS feed can be found at example.blogspot.com/feeds/posts/default.
- If you want to find an RSS feed for a publication hosted on Medium, you have to add /feed/ before the publication’s name in the URL. If the publication’s address on Medium is medium.com/example-site, the URL for the RSS feed would change to medium.com/feed/example-site to view it in your RSS reader.
- YouTube channel pages also have a built-in functionality to act as RSS feeds. As such, you simply need to copy the channel’s URL and paste it into your RSS reader. Additionally, if you are already subscribed to various channels on YouTube, you can find an OPML file with all your subscriptions here, which you can then import into your RSS reader for easy access to all subscribed channels.
Method 2 – Extract RSS Feed from Page Source
To find the RSS or Atom feeds of a specific website, start by opening the website in your browser. Then, right-click anywhere on the page—it doesn’t matter where, as long as it’s not a link or image.
In the context menu that appears, choose the option “View Page Source” or something similar; the exact text may vary depending on the browser you are using. This will show you the page source, which is the underlying HTML and CSS skeleton that determines how the page looks and functions.
Once in the page source, use your browser’s search function (often accessible via Ctrl + F or Command + F on a Mac) to search for the term “Application/rss”. This is the most common way websites indicate their RSS feeds.
If you don’t get any results searching for “Application/rss,” try searching for “Atom”. Atom is an alternative feed standard that some sites may use instead of or in addition to RSS.
If you find any of these terms in the page source, follow the corresponding URL to access the feed.
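If you prefer the command line over the browser, the same check can be done with curl and grep. A rough sketch, with example.com as a placeholder:
curl -s https://example.com/ | grep -i -E 'application/(rss|atom)\+xml'
Any matching <link> tag in the output will contain the feed URL in its href attribute.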
Method 3 – Browser Extensions
If you use Safari, there is a user-friendly solution you can apply to receive RSS feeds: the RSS Button for Safari app. This feature is designed with simplicity and ease of use in mind. It adds an RSS button to your Safari browser, allowing you quick and easy access to RSS feeds of websites you visit. The only caution is that this convenient button requires a small fee of $0.99. While this is a minimal cost, some users may object to paying for such functionality.
Method 4 – The Principle of Educated Guessing
This is an interesting method, particularly intended for cases where you want to find the RSS feed for a particular website, but it is not clearly indicated on the site itself. In such circumstances, “Educated Guessing” is helpful.
As an illustrative example, let’s take example.com/. Suppose you would like to find the RSS feed for this website, but you cannot find it in the usual sections of the website (such as the footer or sidebar). In such a situation, you can resort to educated guessing. This means trying out some obvious URL paths in the hope that one of them leads you to the correct feed.
Start by appending ‘/feed/’ after the domain. So, in our example, you would try navigating to example.com/feed/. If this doesn’t work, you can try another popular URL structure, such as ‘/rss/’. This means navigating to example.com/rss/.
This is, of course, not an exact science, and there is no guarantee that any of these guesses will lead you to the desired RSS feed. However, in practice, you will find that many websites follow these general URL structures for their feeds, making this a valuable technique to try. It is undoubtedly a quick and simple method to attempt to find the RSS feed if you are unable to locate it directly on the website. 📝✏️
🛜Blocking (or Allowing) whole Countries or a Provider or a Network with your Ubiquiti Edgerouter
I recently picked up a nice second hand Ubiquiti Edgerouter X from Marktplaats, which is the Dutch Craigslist. I wanted to play with a firewall in my network. This quickly became a complete rebuild of the network with multiple Vlans in my house. Which was enjoyable because exploring and learning is always a good thing.
This learning also counts for the firewall in the Edgerouter (which runs EdgeOS). I had some experience with firewalls in a very far past with a product called Checkpoint Firewall-1. I am unsure if it still exists. The company Checkpoint still does, though.
Playing around with the firewall wasn’t very hard. Of course, you need to learn the interface and the possibilities of the Firewall. I checked three videos on YouTube to get a head start. This video jumpstarted the head start for me.
As I progressed and got all the firewall rules in place as I wanted them, I really wanted to block certain countries from reaching my webserver, which sits behind an nginx reverse proxy. Ubiquiti doesn’t ship preloaded sets of country networks, and I would rather not add all the networks used by a country manually; that is a daunting task. If I check my own country, for example, there are 5999 networks (date: 2024-01-18). Doing that by hand is impossible, so this must be done smarter.
Adding a complete country to a network-group in EdgeOS
For this solution, we lean heavily on the website http://www.ipdeny.com/ and the following script I wrote.
I saved this script on the Edgerouter in my $HOME directory with the following name: create_networkgroup_countrycode.sh
#!/bin/sh
# Name: create_networkgroup_countrycode.sh
# Coder: Marco Janssen (mastodon [@marc0janssen@mastodon.online](https://micro.blog/marc0janssen@mastodon.online))
# date: 2024-01-17 21:30:00
# update: 2024-01-31 21:39:00
if [ $# -eq 0 ]; then
echo "No parameters provided. Provide a countrycode. Example nl or de"
else
countrycode="$1"
NOW=$(date +"%Y%m%d")
echo "*** Downloading networkaddress blocks for countrycode $countrycode"
curl "http://www.ipdeny.com/ipblocks/data/countries/$countrycode.zone" -o "./$countrycode.zone" -s
echo "*** Writing networkgroup script"
echo "delete firewall group network-group $countrycode.zone" > "./networkgroup-$countrycode.$NOW.sh"
echo "set firewall group network-group $countrycode.zone description \"All networks $countrycode\" on $NOW" >> "./networkgroup-$countrycode.$NOW.sh"
sed -e "s/^/set firewall group network-group $countrycode.zone network /" "./$countrycode.zone" >> "./networkgroup-$countrycode.$NOW.sh"
cp "./networkgroup-$countrycode.$NOW.sh" nwgs.sh
echo "*** Archiving zone-list"
mv "$countrycode.zone" "$countrycode.zone.$NOW"
echo
echo "Now execute the following commands on the prompt of your Edgerouter"
echo
echo "configure"
echo ". ./nwgs.sh"
echo "commit"
echo "save"
echo "exit"
fi
What will this script do?
- It will check if you give it a parameter with a single country code. For example, nl or dk or de.
- Then it will try to download the network address blocks for the country-code and save it in a file. For The Netherlands (if you give it country-code nl) it will create a file called “nl.zone”.
- Then it will try to write a script for you which creates a network group on the Edgerouter. The file will be called “networkgroup-nl.<date>.sh” (if you give it country-code nl).
- The script generated in step 3 will be copied to a shorter name, “nwgs.sh”, for your convenience. You can archive the longer one from step 3 and use the shorter one in this step.
- The zone list used in step 3 will also be archived for you in the format “nl.zone.<date>” (if you give it country-code nl).
- Finally, you get further instructions to use the script and create the network-group.
Running the script
- Get this script on your Edgerouter (ssh, scp, sftp) and place it in a directory of your liking. I place it in my $HOME on the router.
- Name the script “create_networkgroup_countrycode.sh”.
- Make sure the script is executable:
chmod +x ./create_networkgroup_countrycode.sh
- Run the script (in this example for The Netherlands, i.e., nl):
./create_networkgroup_countrycode.sh nl
- Execute the following commands after the script has run:
configure
. ./nwgs.sh
commit
save
exit
A note of warning
For The Netherlands alone, this will add 5999 networks to your Edgerouter (date: 2024-01-18). Running the generated network-group script with the above command . ./nwgs.sh
will take some 18 minutes on the router. The following step commit
will take another 9 minutes. If you then open your Edgerouter GUI, you will see that these steps created a network-group called “nl.zone” with 5999 items. Opening this group in the GUI also takes a long(er) time. A reboot of the router takes 25 minutes with 5999 networks loaded, so don’t think your router is bricked… It just needs loads of time to boot with lists of this size.
What is next?
Now you have your network-group called “nl.zone” (for this example) in the Edgerouter. You can use it like any other resource in a firewall rule. You can now allow this network-group through your firewall and block all others that are not in this group (so allow only this country to your webserver or whatever). Or you can block this network-group from your webserver and allow all other countries. It’s up to you.
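As a sketch of that last idea: the ruleset name WAN_IN and the rule number are examples (use whatever ruleset you already apply to incoming WAN traffic), while the network-group nl.zone comes from the script above:
configure
set firewall name WAN_IN rule 25 action drop
set firewall name WAN_IN rule 25 description "Drop traffic from all networks in nl.zone"
set firewall name WAN_IN rule 25 source group network-group nl.zone
commit
save
exit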
Adding a complete provider to a network-group in EdgeOS
Maybe for you it is not necessary to allow or block complete countries. I started using the following method to only allow certain providers in my country to reach my webserver. Why? First, I only share this content with family members, so not all networks are needed to achieve this. Second, I did not like the long boot times of the Edgerouter with a complete country list. It was not necessary for me to allow a complete country to my server, I just wanted my family members. Why didn’t you just allow the home IP addresses they get from their ISP, you might say? Well, those are dynamic, and I would rather not revise the rules often. So I allow their complete provider.
For this solution, I used the following URLs, but maybe you know better ones on the internet.
- Nirsoft (https://www.nirsoft.net/countryip/nl_owner.html) gives a nice overview of all the providers with their IP address blocks.
- ASNTOOL helps to get all the networks in an Autonomous System.
- TEXTCLEAR helps to clean up the list you get from ASNTOOL.
- NETWORKSDB is an example of a website to find your ASN.
ASN stands for Autonomous System Number in networks. It is a unique identifier that is globally available and allows an autonomous system (AS) to exchange routing information with other ASes. An AS is a large network or group of networks that operates under a single routing policy. Each AS is assigned a unique ASN, which is a number used to differentiate and identify the AS in the global routing system. ASNs are essential for network operators to control routing within their networks and facilitate the exchange of routing information.
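The script below leans on the RADb whois service for this lookup. You can run the same query by hand to see which networks an AS announces; AS1136 is just the example ASN used further on:
whois -h whois.radb.net -- '-i origin AS1136' | grep -Eo '([0-9.]+){4}/[0-9]+' | sort -u | head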
Save the following script in the $HOME of your Edgerouter (or any directory of your liking) and give it the name “create_networkgroup_asn.sh”.
#!/bin/bash
# Name: create_networkgroup_asn.sh
# Coder: Marco Janssen (mastodon [@marc0janssen@mastodon.online](https://micro.blog/marc0janssen@mastodon.online))
# date: 2024-01-17 21:30:00
# update: 2024-01-31 20:50:00
print_usage() {
echo "No parameters provided. Provide an ASN with IP blocks. Example: AS1136."
}
create_networkgroup_script() {
local asn
local temp_filename
local output_filename
local archive_filename
local networkgroup_script_filename
asn="$1"
temp_filename=".temp.txt"
output_filename="$asn.txt"
archive_filename="$asn.$NOW.txt"
networkgroup_script_filename="networkgroup-$asn.$NOW.sh"
echo "*** Writing new $asn output"
whois -h whois.radb.net -- "-i origin $asn" | grep -Eo "([0-9.]+){4}/[0-9]+" | uniq -s 0 > "$temp_filename"
echo "*** Getting owner of $asn"
local owner
owner=$(whois -h whois.radb.net -- "-i origin $asn" | grep "^descr:" | awk '{print $2}' | sort | uniq -c | sort -nr | head -1 | awk '{ print $NF }')
echo "--- Owner of $asn: $owner"
echo "*** Checking for changes in $asn"
if [[ -f "$output_filename" && $(diff "$output_filename" "$temp_filename") == "" ]]; then
echo "--- No Changes in $asn"
echo "*** Cleaning temporary output"
rm "$temp_filename"
else
echo "*** Writing networkgroup script for $asn"
cat <<EOF >"$networkgroup_script_filename"
delete firewall group network-group $asn
set firewall group network-group $asn description "All networks $asn by $owner on $NOW"
$(sed -e "s/^/set firewall group network-group $asn network /" "$temp_filename")
EOF
cp "$networkgroup_script_filename" nwgs.sh
echo "*** Archiving $asn output"
cp "$temp_filename" "$archive_filename"
mv "$temp_filename" "$output_filename"
echo
echo "Now execute the following commands on the prompt of your Edgerouter"
echo
echo "configure"
echo ". ./nwgs.sh"
echo "commit"
echo "save"
echo "exit"
fi
}
main() {
if [[ $# -eq 0 ]]; then
print_usage
else
NOW=$(date +"%Y%m%d")
ASN="$1"
readonly NOW
readonly ASN
create_networkgroup_script "$ASN"
fi
}
main "$@"
What will this script do?
- It will check if you give it a parameter with your ASN. For example, AS1136.
- Write a text file with all the networks in the AS.
- Try to get the owner of the AS.
- Then it will try to write a script for you which creates a network group on the Edgerouter. The file will be called “networkgroup-AS1136.<date>.sh” (if you give it ASN AS1136).
- The script generated in step 4 will be copied to a shorter name, “nwgs.sh”, for your convenience. You can archive the longer one from step 4 and use the shorter one in this step.
- Finally, you get further instructions to use the script and create the network-group.
Running the script
- Find the ASN you would like to use for your network group. This can be done, for example, with the following site: https://networksdb.io/
Another way to get an ASN for your desired provider is to look up their IP blocks. I do this with a site from Nirsoft, https://www.nirsoft.net/countryip/nl.html, here with the IP blocks for The Netherlands. If you want the ASN for a provider called “Alma International B.V.”, for example, you just take the first available IP address in their block, “2.16.0.1”, and look that up.
- Get the above script on your Edgerouter (ssh, scp, sftp) and place it in a directory of your liking. I place it in my $HOME on the router.
- Name the script “create_networkgroup_asn.sh”.
- Make sure the script is executable:
chmod +x ./create_networkgroup_asn.sh
- Run the script (in this example for Alma International B.V., i.e., AS20940):
./create_networkgroup_asn.sh AS20940
- Execute the following commands after the script has run:
configure
. ./nwgs.sh
commit
save
exit
A note
These network-groups are considerably smaller; I don’t really notice any extra boot time on the router.
What is next?
Now you have your network-group called “AS20940” (for this example) in the Edgerouter. You can use it like any other resource in a firewall rule. You can now allow this network-group through your firewall and block all others that are not in this group (so allow only this provider to your webserver or whatever). Or you can block this network-group from your webserver and allow all other providers. It’s up to you. 📝
📟Pretty Good Privacy with keybase.io
2020 : This post is obsolete since Keybase was acquired by Zoom.
Used PGP back in the 90s just because it was possible. The internet was growing, and my friends and I liked to experiment in those days with all that we found on the internet. PGP was one of those things. We had great fun back then, but never used it again the following decades.
But a few days ago I saw a talk from Mike Godwin about privacy on the internet. He pointed out https://www.keybase.io/ in his talk as a start to set up PGP and ways to communicate with him.
I got curious again about PGP and Keybase.io, and I had no trouble at all to quickly set up an account and a PGP key pair with these guys. They have nice low-level tooling to encrypt and decrypt messages on their website.
What I wanted again was a way to have my email encrypted, like I had back in the 90s when my friends and I played around with it. I found a great tutorial on the internet from the Electronic Frontier Foundation on how to get PGP set up on a Mac.
It is set up with the Thunderbird mail client, and in this tutorial they let you generate a PGP key pair with GnuPG. That will do the job, but I wanted to set it up with my Keybase.io key pair, so I needed to export my Keybase.io key pair to the GnuPG keychain.
Reading the docs on their site, I found out that I could pull my Keybase.io key into the GnuPG keychain with the following command.
keybase pgp pull-private --all
But this gave me the following error
▶ ERROR .keys doesn't exist
Just follow this workaround to fix it.
Make sure that on the device linked to your keybase.io account, the option “Forbid account changes from the website” is disabled in the advanced settings. Disabling this option enables more actions on the keybase.io site, one of which is exporting your private key.
After you have disabled this option on your device, go to the keybase.io website, visit your profile page, and find the “edit” link behind the signature of the public key. Select the edit link, and you get the option to export your private key.
Copy the key and save it to your desktop. Use the following command to import the private key to the GnuPG keychain. Where “Private_Key.asc” holds your private key.
gpg2 --allow-secret-key-import --import Private_Key.asc
Also save your public key to your desktop. And import this one with the following command. Where “Public_Key.asc” holds your public key.
gpg2 --import Public_Key.asc
This series of actions replaces the generation of a new PGP key pair with GnuPG; instead it imports your keybase.io key pair.
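To confirm both halves of the key pair actually landed in the GnuPG keychain, a quick listing is enough (standard GnuPG commands, nothing keybase-specific):
gpg2 --list-keys
gpg2 --list-secret-keys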
Don’t forget to delete your private key from your desktop.
Make sure it’s cleaned up!
Now finish the Thunderbird tutorial from the EFF with the keybase.io key pair, and you have a PGP mail client set up with keybase.io. There is also a nice integration possible with the macOS Mail.app via https://gpgtools.org, but this requires a paid license. Whatever suits your needs.
Ok, I hope this helps you.
Find me here on keybase.io and if you like to send me a PGP encrypted email, here is my public key. But this one can also be found on keybase.io.
Oh, and don’t forget to send me your public key or your keybase.io profile page, so I can download your public key if you shared it. If you want a message back, of course ;-).
Use the following command to pull in my public key in your GnuPG keychain. And a follow on keybase.io.
curl https://keybase.io/marc0janssen/pgp_keys.asc | gpg2 --import
keybase follow marc0janssen
Thanks for reading! 📝🖋️
👨🏽‍💻Starbit Commander
This game is my first piece of coding for the BBC Micro:bit. The goal is to have a nice target to practice and learn MicroPython on this device. But first I wanted to have a go at the MakeCode editor for the Micro:bit.
I made this simple little game: flying a spaceship through an astroid field. Occasionally, a blinking power-up will appear. This gives you the advantage of surviving an astroid collision. Catching two power-ups will destroy all astroids in the field. Astroids will speed up over time, but slow down if a double power-up is acquired.
To edit this repository in MakeCode:
- Open https://makecode.microbit.org/
- Click Import and select Import URL
- Paste https://github.com/marc0janssen/starbit-commander
- Select import
🎮🖖🏻🚀
👨🏽‍💻Simple Simon Says
The game Simon in the late 70s was maybe the first “computer” game I played. Well, perhaps it was a real computer. I was just a kid, but the game always stayed with me. In my mind, that is. The game was from a friend, and we played it for ages.
Now with the Micro:bit I wanted to revive this memory and use the Micro:bit as a vehicle to bring this to life again. Below is my MakeCode attempt. I used a Micro:bit version 2 for this.
This Simon listens to Button A, Button B, Button C (the touch sensitive logo), Button D (= Button A+B).
It was fun to create and a good way to set me off with the possibilities of the Micro:bit…
To edit this repository in MakeCode:
- open https://makecode.microbit.org/
- Click Import and select Import URL
- paste https://github.com/marc0janssen/simple-simon-says
- select import 🎮📝🖌️
📺Setting Up Jellyseerr for Emby
Jellyseerr is a free and open source software application for managing requests for your media library. It is a fork of Overseerr built to bring support for Jellyfin & Emby media servers! Follow this link for more info.
Docker
Docker-compose.yml
version: '3'
services:
jellyseerr:
image: fallenbagel/jellyseerr:latest
container_name: jellyseerr
environment:
- TZ=Europe/Amsterdam
- JELLYFIN_TYPE=emby
ports:
- 5055:5055
volumes:
- /path/to/dir/config:/app/config
restart: unless-stopped
docker-compose -p "jellyseerr" -f /path/to/file/docker-compose.yml up -d --remove-orphans
Swag
NGINX - jellyseerr.subdomain.conf
The following config should go into a file at the following location: /path/to/dir/swag/config/nginx/proxy-confs/jellyseerr.subdomain.conf
The trick is to copy the jellyseerr.subdomain.conf.sample file to jellyseerr.subdomain.conf. You can create a new file from scratch, but then the SWAG startup could complain that the file does not have the correct timestamp. This is a warning, not a showstopper.
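In commands, that comes down to something like this (using the example SWAG config path from above; cp -p also preserves the sample’s timestamp):
cd /path/to/dir/swag/config/nginx/proxy-confs
cp -p jellyseerr.subdomain.conf.sample jellyseerr.subdomain.conf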
## Version 2022/09/08
# make sure that your dns has a cname set for jellyseerr and that your jellyseerr container is named jellyseerr
server {
listen 80;
server_name jellyseerr.*;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name jellyseerr.*;
server_name jellyseerr.yourdomain.com;
resolver 1.1.1.1 1.0.0.1 valid=300s;
resolver_timeout 10s;
gzip on;
gzip_vary on;
gzip_min_length 1000;
gzip_proxied any;
gzip_types text/plain text/css text/xml application/xml text/javascript application/x-javascript image/svg+xml;
gzip_disable "MSIE [1-6]\.";
location / {
proxy_pass http://192.168.1.1:5055;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
}
fail2ban - jail.local
Add the configuration below to the file: /path/to/dir/swag/config/fail2ban/jail.local
The path /jellyseerr_log/ should be added to your docker-compose.yml of SWAG.
This config lets an intruder try 3 times; if they fail 3 times within 10 minutes, they are blocked for 1 hour. IPs from 192.168.1.0/24 are ignored.
[jellyseerr]
enabled = true
filter = jellyseerr
port = http,https
logpath = /jellyseerr_log/overseerr-*.log
maxretry = 3
findtime = 10m
bantime = 1h
ignoreip = 192.168.1.0/24
fail2ban - jellyseerr.conf
Add the following config to /path/to/dir/swag/config/fail2ban/filter.d/jellyseerr.conf
# Fail2Ban for jellyseerr
#
#
[Definition]
failregex = \[info\]\[Auth\]: Failed login attempt from user with incorrect Jellyfin credentials \{"account":\{"ip":"<HOST>","email":".*","password":"__REDACTED__"\}\}
ignoreregex =
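After restarting SWAG (or reloading fail2ban) you can check that the jail is actually loaded and see how many IPs it has banned. fail2ban-client ships with fail2ban itself; the container name swag is an assumption, adjust it to your setup:
docker exec swag fail2ban-client status jellyseerr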
📺🎮
🛜OpenWRT and Common Internet File System (CIFS)
CIFS is based on the widely used SMB protocol, best known from Samba. It is a network protocol for sharing files, printers, serial ports, and other communications between computers. I’m gonna use this protocol to connect my NAS device to my OpenWrt-powered box.
Installing the packages
To get everything up and running, we need to install the following package on the Linksys device: kmod-cifs_2.4.30-brcm-5_mipsel.ipk. If you wish, you can also install the mount helper package, cifsmount_1.5-2_mipsel.ipk. I decided not to do so.
A quick check
If you want to do a quick check that all is working, use the following procedure.
root@Hellhound:~# insmod cifs
Using /lib/modules/2.4.30/cifs.o
Make a directory to mount the share on the remote machine.
root@Hellhound:~# mkdir /tmp/mnt
Now we can try to mount the share.
root@Hellhound:~# mount -t cifs //192.168.1.2/share /tmp/mnt -o unc=\\\\192.168.1.2\\share,ip=192.168.1.2,user=admin,pass=geheim,dom=workgroup
Just double check it is mounted.
root@Hellhound:~# df
//192.168.1.2/share   243748864   46485504   197263360   19%   /tmp/mnt
Setting up the system
The above mount command is a hell of a command to execute every time you need your NAS device to be connected to your OpenWrt box. So we are going to make life a little easier for ourselves by editing /etc/fstab. If you don’t have an fstab file, create one now. To the fstab file, we add the following line.
//192.168.1.2/share /mnt/bulky cifs unc=\\192.168.1.2\share,ip=192.168.1.2,user=admin,password=geheim,dom=workgroup,rw 0 0
The cifs kernel module does not need to be loaded separately. The package also installs the file /etc/modules.d/30-cifs, which will load the module at boot time. Now we can mount the share with the following command.
root@Hellhound:~# mount /mnt/bulky
Since OpenWrt does not execute the command
mount -a
at boot-time, the configured mount points in fstab are not automagically mounted at boot-time. To solve this, I added the mount command
mount /mnt/bulky
to the file /etc/init.d/S95custom-user-startup. 📚📖🎮
🛜OpenWRT and SNMP bandwidth monitor
It would be nice if we could monitor the bandwidth of the network interfaces in our wrt54gs. To do so, we need SNMP enabled on our device and some kind of application to monitor the SNMP data. I chose PRTG to monitor the SNMP messages sent.
Installing the package
We need to install the SNMP daemon on OpenWrt, so we require the following package: snmpd_5.1.2-2_mipsel.ipk. Optionally, you can install the package snmp-utils_5.1.2-2_mipsel.ipk. Since I have my router modded with the MMC/SD card mod, I am going to install the package on this card with the following command.
root@Hellhound:~# ipkg -d opt install snmpd_5.1.2-2_mipsel.ipk
In my case, this will install the package on the mount point /opt. I will refer to this path in the rest of this paper; change it to your setup accordingly.
Set up the package
Now we need to set up the snmp package. If you installed it in the root of the Linux filesystem, there is no problem. But if you installed it on your MMC/SD card, you need to take some extra steps. First we have to copy /opt/etc/default, /opt/etc/init.d and /opt/etc/snmp to the /etc directory. Then we have to copy the file /etc/init.d/snmpd to the file /etc/init.d/S60snmpd. After copying this file, we have to add 2 lines to the script (the exports at the top). This is only needed for those who have installed the package on the MMC/SD card; the copy commands are summarized after the script. The new file will look like this.
#!/bin/sh
export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/opt/bin:/opt/sbin:/opt/usr/bin:/opt/usr/sbin
export LD_LIBRARY_PATH=/lib:/usr/lib:/opt/lib:/opt/usr/lib
DEFAULT=/etc/default/snmpd
LIB_D=/var/lib/snmp
LOG_D=/var/log
RUN_D=/var/run
PID_F=$RUN_D/snmpd.pid
[ -f $DEFAULT ] && . $DEFAULT
case $1 in
start)
[ -d $LIB_D ] || mkdir -p $LIB_D
[ -d $LOG_D ] || mkdir -p $LOG_D
[ -d $RUN_D ] || mkdir -p $RUN_D
snmpd $OPTIONS
;;
stop)
[ -f $PID_F ] && kill $(cat $PID_F)
;;
*)
echo "usage: $0 (start|stop)"
exit 1
esac
exit $?
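For reference, the copy steps described before the script come down to roughly the following (assuming the package landed under /opt; the hand-edited init script is the one shown above):
cp -r /opt/etc/default /etc/
cp -r /opt/etc/snmp /etc/
cp /opt/etc/init.d/snmpd /etc/init.d/S60snmpd
chmod +x /etc/init.d/S60snmpd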
Alternatively, you can follow the instructions on this page instead of the steps above; in that case you only need the script in paragraph 4.3 there.
What’s next?
You can now reboot your router and the snmpd daemon will start automatically. Now it is time to set up a monitoring tool. I used PRTG, but there are others around. It is out of the scope of this paper to describe the use of PRTG. There is enough material around to help you on that one. 📝🖋️
🛜OpenWRT and Snort
Set up snort on OpenWrt
After installing the MMC/SD card mod, I have enough room to install snort on my wrt54gs. So this paper will assume that the package is set up on the MMC/SD card, which is mounted on /opt. You can read about setting up the MMC/SD card on this page.
Installing the package
To install snort on your wrt54gs, install the following package:
root@Hellhound:~# ipkg -d opt install snort_2.4.4-1_mipsel.ipk
This will install snort in the directory /opt.
Remote syslog
I want snort to log all its messages to a remote syslog server. I already covered this on the page about using fwbuilder with OpenWrt; look there to set up WallWatcher.
Downloading rules files
We need to get some rule files. These rule files can be downloaded from the snort website. Download the snort 2.4 rule files from this website. These rule files need to be unpacked in the directory /opt/etc/snort/rules/
root@Hellhound:~# tar zxf snortrules-pr-2.4.tar.gz
Setting up snort
Now we have to set up the snort.conf file. In this file, many snort settings are configured. We want to set up snort in such a way that it logs all messages to our remote syslog, in this case WallWatcher. The first thing we have to do is set the option HOME_NET
var HOME_NET 192.168.1.0/24
Next, we have to uncomment a line in the snort.conf file.
output alert_syslog: LOG_AUTH LOG_ALERT
We need to change the rules path
var RULE_PATH /opt/etc/snort/rules
At the bottom of snort.conf a lot of rule files are included. Make sure you comment out all the rule files at first, so that we do not flood the memory of the wrt54gs and hang the device when we start snort.
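In snort.conf those includes look like the lines below; a leading # disables a rule file, and removing it enables the file again. The file names are just examples from the standard 2.4 rule set:
# include $RULE_PATH/exploit.rules
# include $RULE_PATH/scan.rules
# include $RULE_PATH/local.rules
Later on (see “Running Snort” and “Checking your setup”) you will uncomment these one by one, starting with local.rules.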
Make the directory snort in the directory /var/log
root@Hellhound:~# mkdir /var/log/snort
Giving snort a test run
To check if snort is running correctly on your device, give the following command on the prompt and open a website in your favorite browser.
root@Hellhound:~# snort -v -i vlan1
This will give you the following output
01/25-22:02:12.344117 195.37.77.141:80 -> 10.0.0.100:2053
TCP TTL:46 TOS:0x0 ID:55048 IpLen:20 DgmLen:40 DF
***A**** Seq: 0xC5D2F3DE Ack: 0xBD63343B Win: 0x5B4 TcpLen: 20
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
End the test run by pressing ctrl-C.
Running Snort
Now we know snort runs okay, we set up the rule files. These can be uncommented at the bottom of the /opt/etc/snort/snort.conf file. I would advise you to enable the rule files one by one and keep an eye on your memory usage. After each run of snort check your memory and determine if you can enable another rule. Run snort with the following command.
root@Hellhound:~# snort -c /opt/etc/snort/snort.conf -i vlan1&
Check your memory usage by the following command
root@Hellhound:~# top
If snort is detecting any “bad” traffic, it will be logged to your remote syslog server.
Checking your setup
Snort can be quiet sometimes, so how do we check if snort is doing its job? To check your setup, kill the snort process that is running at the moment. Make sure you have local.rules enabled in the snort.conf file. Then edit local.rules and add the following line of code.
alert ip any any -> any any (msg:"Got an IP Packet"; classtype:not-suspicious; sid:2000000; rev:1;)
Now start snort with the following command
root@Hellhound:~# snort -c /opt/etc/snort/snort.conf -i vlan1&
And keep a good eye on your remote syslog server… It will start logging network traffic. 📝🖋️
🛜OpenWRT and Traffic Monitor
A nice way of keeping track of your traffic is vnstat. This beauty is found in the White Russian repository. On the website of the author of vnstat you can find a complete reference of its possibilities.
Installing the package
You can get this fine piece of software on your router by installing the following package provided by OpenWrt: vnstat_1.4-1_mipsel.ipk. On the OpenWrt forum a guy called arteqw made an impressive setup to be used with the X-Wrt webif^2 interface. I describe his setup in the next paragraphs.
Prepare OpenWrt
After installing the package, we need to create the “database” to collect our data. First, we need to create a directory to hold the database.
mkdir /var/lib/vnstat
Now we create the database in this directory. So first change to this directory before executing the command. We will be creating a database on the WAN device of the router.
vnstat -u -i vlan1
We also want this setup to survive a reboot, so in the /etc/init.d directory we edit the file S95custom-user-startup.
mkdir -p /var/lib/vnstat
vnstat -u -i vlan1
gettraffic.sh
Note: The script gettraffic.sh will be discussed later, for now, we just add this command.
Now we need this database to be updated at a regular interval, and here cron is helpful. The crontab can be edited with the command crontab -e. In the example below, we update the database every 5 minutes.
*/5 * * * * vnstat -u -i vlan1
Script gettraffic.sh
The following script must be placed in the directory /usr/sbin and will write the statistics from the database to a file in the /tmp directory called traffic_stats.inc. This file will be picked up by the web interface to display the values within the webif^2 interface of OpenWrt.
#!/bin/sh
# gettraffic.sh - collect vnstat statistics and write them to
# /tmp/traffic_stats.inc, which the webif^2 page displays.
IFACE_WAN=$(nvram get wan_ifname)
IFACE_LAN=$(nvram get lan_ifname)
IFACE_WLAN=$(nvram get wl0_ifname)
# Start with a fresh stats file (-f keeps the first run from complaining)
rm -f /tmp/traffic_stats.inc
echo "<br /><center>" >> /tmp/traffic_stats.inc
vnstat -tr -i $IFACE_WAN | grep -v seconds >> /tmp/traffic_stats.inc
echo "</center><br /><b><th>Hourly at $IFACE_WAN[WAN]</th></b><br /><center>" >> /tmp/traffic_stats.inc
vnstat -h -i $IFACE_WAN | grep -v $IFACE_WAN >> /tmp/traffic_stats.inc
echo "</center><br /><b><th>Daily at $IFACE_WAN[WAN]</th></b><br /><center>" >> /tmp/traffic_stats.inc
vnstat -d -i $IFACE_WAN | grep -v $IFACE_WAN >> /tmp/traffic_stats.inc
echo "</center><br /><b><th>Weekly at $IFACE_WAN[WAN]</th></b><br /><center>" >> /tmp/traffic_stats.inc
vnstat -w -i $IFACE_WAN | grep -v $IFACE_WAN >> /tmp/traffic_stats.inc
echo "</center><br /><b><th>Monthly at $IFACE_WAN[WAN]</th></b><br /><center>" >> /tmp/traffic_stats.inc
vnstat -m -i $IFACE_WAN | grep -v $IFACE_WAN >> /tmp/traffic_stats.inc
echo "</center>" >> /tmp/traffic_stats.inc
The last thing we need to do is add an entry to the crontab (crontab -e). This entry will run gettraffic.sh every 5 minutes, so that the file /tmp/traffic_stats.inc stays up to date.
*/5 * * * * gettraffic.sh
Web interface add-on
Finally, we need to extend the web interface, so we can see the traffic stats in our browser. To do so, we need to place the file traffic.sh in the directory /www/cgi-bin/webif. This file will pick up the file /tmp/traffic_stats.inc.
#!/usr/bin/webif-page
<?
. /usr/lib/webif/webif.sh
header "Status" "Traffic Statistic" "@TR<<Traffic Statistic>>"
?>
<pre><? cat /tmp/traffic_stats.inc ?></pre>
<? footer ?>
<!--
##WEBIF:name:Status:5:Traffic Statistic
-->
Email stats
I wanted to receive the stats of my router hourly by email, so I looked at a package called mini-sendmail_1.3.5-1_mipsel.ipk, which gives the ability to send emails. In the link section at the top I included the manpage for mini-sendmail. To get the stats by email, I added the following line to the crontab.
0 9-16 * * 1-5 cat /tmp/traffic_stats.inc | mini_sendmail -fsend@domain.org -ssmtp.server.org receive@domain.org
This crontab entry will email the contents of /tmp/traffic_stats.inc to the address receive@domain.org. It will be emailed every full hour between 09:00 and 16:00 from Monday to Friday. 📝🖋️
🛜OpenWRT and Dynamic DNS
To have the major dynamic DNS services provide me with a domain name for my internet connection, I configured updatedd on my OpenWrt box. I stopped using ez-ipupdate because that application did not pick up the WAN address of my ADSL router, but instead used the address of my wrt54gs as my external address, which of course caused problems. Therefore I switched to updatedd, which also supports all major dynamic DNS service providers.
Installing the packages
To get updatedd up and running, you need to install the package updatedd_2.5-1_mipsel.ipk. Since I like to use the providers dyndns.org and no-ip.org, I also installed the packages updatedd-mod-noip_2.5-1_mipsel.ipk and updatedd-mod-dyndns_2.5-1_mipsel.ipk.
Update your IP
Now that all the needed packages are installed, it's time to update your IP with the chosen providers; for me, these are no-ip.org and dyndns.org.
updatedd noip username:password nodename.bounceme.net
updatedd dyndns username:password nodename.homeftp.org
Update your IP regularly
Now we want this update to run regularly, and that is where cron is helpful. To edit the crontab you can use the command crontab -e, which uses vi commands to edit the content. I set up cron in the following way.
0 0 * * * updatedd noip username:password nodename.bounceme.net
0 0 * * * updatedd dyndns username:password nodename.homeftp.org
This will update my dynamic DNS accounts every day at midnight. 📝✏️
🛜OpenWRT, fwbuilder and wallwatcher
Installing the right packages
To get the fwbuilder-generated scripts up and running on an OpenWrt-powered device, you need to install a few packages. Those packages are found in the standard repository of White Russian. The packages you need to install from that repository are ip_2.6.11-050330-1_mipsel.ipk, iptables-mod-extra_1.3.3-2_mipsel.ipk and iptables-utils_1.3.3-2_mipsel.ipk.
Editing /etc/firewall.user
I am using a SquashFS version of OpenWrt. This means that the real filesystem is read-only and that the files are exposed as symbolic links on a writeable JFFS2 filesystem. What we now need to do is delete the symbolic link /etc/firewall.user and copy the real file in its place.
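On White Russian the read-only originals are normally available under /rom, so replacing the symlink comes down to something like this (check where the symlink points with ls -l first if your layout differs):
ls -l /etc/firewall.user
rm /etc/firewall.user
cp /rom/etc/firewall.user /etc/firewall.user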
Now we need to add the following code to the firewall.user file.
# Load the kernel modules needed for logging and rate limiting
insmod ipt_LOG
insmod ipt_limit
# Fall back to the original rules if the fwbuilder script is missing
if [ ! -f /usr/sbin/firewallscript.fw ]; then
    {original script}
else
    /usr/sbin/firewallscript.fw
fi
Now you have to place your firewall script in the /usr/sbin directory (or place it at a location of your choice, but you will have to edit the code above to match your location).
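As an example, assuming the fwbuilder output file is called firewallscript.fw and the router answers on 192.168.1.1, copying it over and making it executable could look like this (this uses dropbear's scp; if scp is not available on your router, transfer the file any other way you like):
scp firewallscript.fw root@192.168.1.1:/usr/sbin/
ssh root@192.168.1.1 chmod +x /usr/sbin/firewallscript.fw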
Instead of adding the two insmod lines to the firewall.user file, you can also add the module names to /etc/modules, as in the example below.
ipt_LOG
ipt_limit
Remote syslog with wallwatcher
Now we have to set up remote syslog, so we can log the output of the firewall script with wallwatcher. Remember to enable logging on some rules in fwbuilder, otherwise we will never log a thing. You need to set up OpenWrt to use a remote syslog server; replace xxx.xxx.xxx.xxx with the IP address of the system that will be running wallwatcher.
nvram set log_ipaddr=xxx.xxx.xxx.xxx
nvram commit
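You can check that the address was stored correctly with the following command:
nvram get log_ipaddr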
The only thing you have to do next is to set up the router tab in wallwatcher.
Change Startup order
With the RC5 release of OpenWrt all was working just fine, but when I started to use the RC6 version I discovered that the firewall script was not executed after a reboot of the router. The problem seemed to be that /etc/init.d/S35Firewall is executed before /etc/init.d/S40Network. So, when I renamed /etc/init.d/S35Firewall to /etc/init.d/S45Firewall and rebooted the router, the firewall script was executed and all worked just fine. 📝🖋️
📟Fediverse and WordPress
What is it?
ActivityPub is the glue, or the oil if you like, of the Fediverse. It ties all the services in the Fediverse together. It lets Mastodon servers communicate with each other, but it also lets Pixelfed talk to Mastodon and vice versa. All social media that are ActivityPub-aware can exchange messages with each other.
WordPress
WordPress is by its nature not ActivityPub aware. So, it can’t exchange messages with the Fediverse. But there is a solution. Matthias Pfefferle created a WordPress plugin to connect WordPress to the Fediverse. This enables you to get your WordPress posts across the Fediverse.
What do you need?
- Webfinger WordPress Plugin installed on your WordPress instance;
- ActivityPub WordPress Plugin installed on your WordPress instance.
Setup
There is not much needed to get this working. Install both plugins. The ActivityPub plugin has default settings that should work for you, but fiddle with them if you like. The Webfinger plugin doesn't need any configuration at all. When both plugins are installed, you will see that two more plugins are advised. They are not needed to get ActivityPub working, but they will enhance the experience.
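If you manage your WordPress instance with wp-cli, installing and activating both plugins can also be done from the command line; the plugin slugs below are the ones used in the wordpress.org directory, so double-check them for your setup:
wp plugin install webfinger --activate
wp plugin install activitypub --activate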
Checking the setup
- First check the Site Health under Tools > Site Health. If all is working correctly, you will get no critical errors on this page.
- Go to the Webfinger website and check if you get a JSON response from your ActivityPub plugin by entering the e-mail address at the top. HINT: this is not your regular e-mail address, but the account name and domain name of your WordPress instance. So, if you have an account “Jake” on your WordPress instance at the domain “great.blog.com”, your “e-mail address” will be jake@great.blog.com. Enter it at the top of the page and you should get a JSON response (a command-line alternative is sketched after this list).
- Go to your Mastodon account and search for your WordPress account (i.e., jake@great.blog.com) in the search bar of Mastodon.
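As an alternative to the Webfinger website, you can query the WebFinger endpoint of your own instance directly. With the hypothetical account jake@great.blog.com from above, that would look like this:
curl "https://great.blog.com/.well-known/webfinger?resource=acct:jake@great.blog.com"
A JSON document describing the account means the plugins are reachable from the outside.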
Dos and Don’ts and Hints
- Don’t install a cache-enhancing plugin. It will mess with the ActivityPub or Webfinger plugin, and you will see critical errors in your Site Health menu after a while.
- I found that https://mastodon.social messes with the avatar: it will not come through, even after a few posts on WordPress. I checked a few other Mastodon instances (including mastodon.online, the other flagship server) and they were fine. So be careful which Mastodon server you choose to test this with; it could make you think the plugin isn’t working properly.
- Some hosting providers protect the .well-known directory. When this directory is inaccessible to the plugin, it will not work. (This made me host my WordPress instance at home.) 📝🖋️