tdkyo

Clean and Undisturbed Space for Thoughts and Writing

    Auto Check and Reconnect WiFi for a Raspberry Pi

    I have several headless Raspberry Pis around my house, and some of them use WiFi to connect to the home network. The WiFi connections are pretty stable most of the time, but I sometimes notice that my Pis get disconnected from WiFi and never reconnect to the home network. Usually, I can solve this problem by either turning the WiFi antenna off and on via the command prompt on the affected Pi (which requires hooking up a keyboard, mouse, and monitor) or simply rebooting the system by disconnecting and reconnecting the power supply. Although these two solutions work, I wanted to handle the problem with an automated solution instead.

    How can I have the Pi disconnect and reconnect the WiFi antenna whenever there is a WiFi problem? Using a prewritten bash script and cronjob, I was able to have my Raspberry Pi (1) auto-check whether there was a WiFi problem, and (2) reconnect to the WiFi network by disconnecting and reconnecting the WiFi antenna.

    Creating the bash script

    First, I created a bash script in my home directory using GNU nano. I named the bash script “WiFichecker.sh”.

    cd && nano WiFichecker.sh

    The cd command changes the current working directory to the user’s home directory. The && (AND) operator allows me to run consecutive commands, where the subsequent command (nano WiFichecker.sh) runs only if the previous command (cd) ran successfully. Finally, nano WiFichecker.sh opens up nano (our text editor) with a bash script file named “WiFichecker.sh” ready to be saved.

    Once nano is open, we can write the following commands in sequence on each line.

    ping -c4 google.com > /dev/null

    To determine whether we have a WiFi connection, we simply use the ping command to check whether our Raspberry Pi can reach one of Google’s servers. The -c4 parameter indicates that we want to ping Google four times in case the first few pings do not work. The > operator redirects any output of ping -c4 google.com to the destination specified to the right of the operator. Here, > /dev/null ensures that all of the output of the ping command gets thrown away. (/dev/null can be seen as a black hole in Linux.)

    For the next block of code, it makes sense to see it as a whole in multiple lines.

    if [ $? != 0 ]
    then
            sudo ip link set wlan0 down
            sleep 15
            sudo ip link set wlan0 up
            sleep 15
            sudo dhcpcd
    else
            echo "nothing wrong"
    fi
    

    When we run the previous ping -c4 google.com > /dev/null command, an exit status is left in memory, and we can access the value of that exit status by looking at the variable $?. The $? variable gives a value of “0” if the previous command was successful and a value other than “0” if there were any issues. If our ping command did not encounter any errors (i.e., the device was able to ping and get a response from Google’s servers), then the value of $? would be “0”. However, if there were issues with our ping command (e.g., the device could not reach Google’s servers, or Google’s servers did not respond to our ping request), then the value of $? would not be “0”.
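    We can see this exit-status behavior directly in a shell by using the built-in true and false commands as stand-ins for a successful and a failing command:

```shell
true            # a command that always succeeds
echo $?         # prints 0
false           # a command that always fails
echo $?         # prints 1
```

    ping behaves the same way: $? is 0 only when the pings got through.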

    Thus, we can set up a conditional statement in bash, where a set of commands runs only if we encounter a problem with our ping command (the value of $? is not 0). When we have ping issues, we run a series of commands to (1) shut off the WiFi device, (2) wait 15 seconds, (3) turn on the WiFi device, (4) wait 15 seconds, and (5) reconfigure the network interface to ensure that we can reconnect to the network.

    sudo ip link set wlan0 down

    This command turns off our WiFi device (wlan0) via superuser privileges.

    sleep 15

    This command makes our device wait for 15 seconds until moving on to the next command.

    sudo ip link set wlan0 up

    This command turns back on our WiFi device via superuser privileges.

    sleep 15

    Just like the previous sleep command, our device will wait for 15 seconds until moving on to the next command.

    sudo dhcpcd

    Assuming we are reconnected to our WiFi network, we can use the dhcpcd command via superuser privileges to reconfigure the network interface (e.g., determining which IP address to use for the network) to ensure that we can interface with the network.

    That’s it! This set of commands should, whenever we do not have access to Google’s servers, reset our WiFi network interface so that we reconnect to our designated network.

    The command echo "nothing wrong" sits in the else branch of our conditional statement, which runs when we have successfully been able to ping Google. I intentionally left this echo statement in for logging purposes.
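    If you plan to collect these messages in a log, one small tweak (just a sketch; the date format here is arbitrary) is to prepend a timestamp to the echo so each run is identifiable:

```shell
# Prepend the current date and time, e.g. "2021-01-01 12:00:00 nothing wrong".
echo "$(date '+%Y-%m-%d %H:%M:%S') nothing wrong"
```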

    Our script as a whole is the following:

    ping -c4 google.com > /dev/null
    
    if [ $? != 0 ]
    then
            sudo ip link set wlan0 down
            sleep 15
            sudo ip link set wlan0 up
            sleep 15
            sudo dhcpcd
    else
            echo "nothing wrong"
    fi
    

    We can press the “Control” key and “O” together to save the bash script, and we can press the “Control” key and “X” together to exit nano.
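    As a side note, the same script can be written a bit more compactly by folding the ping check into the if condition itself, since if branches on a command’s exit status directly. The following is just an equivalent sketch, assuming the same wlan0 interface as above:

```shell
#!/bin/bash
# Equivalent sketch: "if ! command" runs the then-branch when the command
# fails, so we no longer need to inspect $? by hand.
if ! ping -c4 google.com > /dev/null
then
        sudo ip link set wlan0 down
        sleep 15
        sudo ip link set wlan0 up
        sleep 15
        sudo dhcpcd
else
        echo "nothing wrong"
fi
```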

    Scheduling the WiFi checkup

    Now that we have our script, we can use a cronjob entry to run our script at regular intervals. Because we have some commands that require superuser privileges, we need to run cronjob via superuser privileges. To run our script via cronjob with superuser privilege, we can open crontab via superuser privileges.

    sudo crontab -e

    If this is your first time running crontab via superuser privileges, crontab may ask which text editor to use to edit the cron table. I pick nano because I am most familiar with this text editor. Once the crontab is open, we can add the following command at the end of the text file.

    */5 * * * * sudo bash /home/[username]/WiFichecker.sh

    The first five columns preceding our command denote time values, which we can adjust to run the script as often as we like. I want to run the script every five minutes, so I write */5 * * * * to indicate that the script should run every 5 minutes, of every hour, of every day, of every month, and of every weekday. Feel free to adjust this parameter to your liking.
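    For reference, here are a few hypothetical variations of the schedule; only the five time fields change, while the command stays the same:

```
*/5 * * * *  sudo bash /home/[username]/WiFichecker.sh    # every 5 minutes (as above)
*/15 * * * * sudo bash /home/[username]/WiFichecker.sh    # every 15 minutes
0 * * * *    sudo bash /home/[username]/WiFichecker.sh    # once an hour, on the hour
```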

    Next, sudo bash /home/[username]/WiFichecker.sh essentially executes our bash script (located at our home directory) via superuser privileges.

    Afterward, we can save our cron table (“Control” key and “O” on nano) and exit the text editor (“Control” key and “X” on nano).

    Finally, we can either restart the Raspberry Pi (sudo reboot) or restart the cron service (sudo service cron restart) to apply the changes to our cron table.

    No more disconnected Raspberry Pis

    After implementing the WiFi checker and reconnection script, my Raspberry Pis have not had a single network disconnection issue. I hope this guide ensures that your headless Raspberry Pis always have a stable WiFi connection!

    Backing Up Video to the Cloud for Motioneye

    Background

    One of the first projects I did when I got my first Raspberry Pi was making a security camera for my house. After my Raspberry Pi Zero W Camera Pack came from adafruit, I hooked up the Raspberry Pi Zero W with the included camera module and installed motioneyeos to my Pi device. After logging on to the motioneye web interface on my browser, my new security camera was ready to record videos and photos based on motion detection. One of the major benefits of using a Raspberry Pi with motioneye installed is that I had a fully automated WiFi security camera that could upload captured media on major cloud storage providers (e.g., Google Drive) without any monthly fees!

    Although I liked using motioneye, I did not particularly like using motioneyeos, because the Linux distribution did not allow installing third-party programs via the apt-get command. Motioneyeos was designed to be a single-purpose distribution, where the operating system’s focus was on video surveillance and self-updating with minimal user intervention. Motioneyeos satisfies many people’s needs, including those who may not be familiar with working with the Linux operating system. As I became more accustomed to the Linux command line interface, however, I wanted to look beyond what motioneyeos has in store.

    Motioneye under Raspbian

    Taking out the SD card, I downloaded and installed Raspbian (now known as the Raspberry Pi OS) to my Raspberry Pi 3 B+ device and hooked my camera module to the more powerful Raspberry Pi device. I decided to use my Raspberry Pi 3 B+ instead of my Raspberry Pi Zero because I found that the Pi Zero was too slow to capture video at high resolutions. I also decided to use Raspbian instead of motioneyeos because I wanted to use Rclone to back up my video and photo files to my Google Drive.

    Long term storage on Google Drive with a small SD Card

    Even with a relatively large SD card (128 GB), I can easily accumulate enough motion-activated video footage to fill up the card within three to four days. I have a large amount of cloud storage on Google Drive that could store far more videos than my SD card. How can I set up an automated system where motioneye keeps videos for only a few days, while backing up the footage for longer-term storage, such as keeping video files for up to three weeks?

    I used a combination of motioneye, rclone, and a cron job to get it all done automatically.

    Motioneye

    After installing Raspbian, I installed motioneye by following these steps. After logging into motioneye, I opened the settings tab and toggled open both the “Still Images” and “Movies” sections. Under “Preserve Pictures” and “Preserve Movies”, I set it to “For One Day.” Motioneye will only keep one day’s worth of media moving forward. We will use rclone to create long-term storage on our Google Drive.

    rclone for interfacing with Google Drive

    Rclone is “a command line program to manage files on cloud storage” (rclone.org). Rclone allows users to interface with various cloud storage providers, almost like an attached storage drive. Rclone can interface with Google Drive, so I followed the instructions on rclone’s Google Drive page to set up an Rclone remote interfaced with my Google Drive. I named my Google Drive remote GoogleDrive:.
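    To double-check that the remote was registered, rclone can list every remote it knows about. (This assumes the Google Drive setup above completed successfully.)

```
rclone listremotes
```

    The output should include GoogleDrive: among any other remotes you have configured.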

    Now, I want to back up all of my motioneye-captured media files in a folder within my Google Drive. So, I used the following command to create a folder named “motioneye” at the root of my Google Drive.

    rclone mkdir GoogleDrive:motioneye

    Granted, I could use Google Drive’s web interface to create the same folder on my browser, but I wanted to familiarize myself with all the features rclone has available for managing my cloud storage. To check whether the folder was created, I used the ls command on rclone,

    rclone ls GoogleDrive:

    where, among other files and folders, rclone would list our newly created folder “motioneye.”

    Placing all the commands together on a bash script

    Using my favorite Linux text editor (GNU nano), I wrote the following commands in order to a bash file “rcloneBackup.sh”.

    killall rclone;

    This command kills any previously running rclone instance. Sometimes, due to network issues, rclone may run for a long time, and we may inadvertently launch another rclone instance, which would slow the backup process even further. Thus, we make sure that only one rclone instance will be running moving forward.

    rclone delete GoogleDrive:motioneye --min-age 21d;

    Via rclone, we can selectively delete files on our remote drive’s motioneye folder based on the age of the file. Using the --min-age option, we can specify the minimum age of the files before rclone can go ahead and delete them. I set it to 21 days for my own personal preference. (You can adjust this parameter based on your own liking.)
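    If you are unsure what a deletion pass will remove, rclone supports a --dry-run flag that reports what would happen without actually deleting anything; it is worth a trial run before scheduling the script:

```
rclone delete GoogleDrive:motioneye --min-age 21d --dry-run
```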

    rclone rmdirs GoogleDrive:motioneye --min-age 22d --leave-root;

    While the previous delete command removes files, the rmdirs command removes empty folders. If our file deletion command ran successfully, we would be left with empty folders older than 21 days. Thus, I set the --min-age parameter to one day later than in the previous command. The --leave-root option prevents rclone from attempting to delete the root folder “motioneye” itself.

    rclone cleanup GoogleDrive:;

    If you delete files on Google Drive via rclone, the deleted files end up on Google Drive’s Trash. To prevent Drive’s Trash from accumulating with deleted files (and fill up the storage quota for the Google account), I use this command to tell rclone to empty Drive’s Trash.

    (sleep 230m && killall -9 rclone) & (rclone copy /var/lib/motioneye/ GoogleDrive:motioneye --exclude "*.thumb" --transfers=1 -vv);

    There are two concurrent sets of commands going on. The first half of the code, (sleep 230m && killall -9 rclone), acts as a timer that waits 230 minutes and then terminates our rclone instance. We add this timer to make sure that we do not have concurrent rclone instances running on top of each other when this series of commands runs again 240 minutes later.

    The second half of the code, (rclone copy /var/lib/motioneye/ GoogleDrive:motioneye --exclude "*.thumb" --transfers=1 -vv);, copies all of the files from our recorded motioneye media (at the default folder path of /var/lib/motioneye/) to our rclone remote drive’s folder. I used the --exclude parameter to exclude any *.thumb files from being transferred, because those files are merely thumbnails of the media files, which we do not need for cloud storage. I also used the --transfers parameter to limit transfers to one file at a time, because I found that concurrent transfers of exceedingly large files tend not to finish within our 230-minute window. Finally, I added the -vv parameter to make rclone report detailed progress to our command line console.
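    One optional tweak, since cron runs without a visible console: rclone’s --log-file flag can send that verbose output to a file instead (the path here is just an example):

```
rclone copy /var/lib/motioneye/ GoogleDrive:motioneye --exclude "*.thumb" --transfers=1 -vv --log-file=/home/[username]/rcloneBackup.log
```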

    That’s it! Placing all of the commands in sequence together, we have the following series of commands for our bash script “rcloneBackup.sh”.

    killall rclone;
    rclone delete GoogleDrive:motioneye --min-age 21d;
    rclone rmdirs GoogleDrive:motioneye --min-age 22d --leave-root;
    rclone cleanup GoogleDrive:;
    (sleep 230m && killall -9 rclone) & (rclone copy /var/lib/motioneye/ GoogleDrive:motioneye --exclude "*.thumb" --transfers=1 -vv);
    

    I saved our bash script to our home directory, which is usually located at /home/[username]/. Afterward, I tested the script to make sure that all the commands are working.

    bash rcloneBackup.sh
    

    Crontab to regularly run the backup script

    After verifying that our bash script is working, we can now use crontab to run our bash script at regular intervals. If you are not familiar with cron, please check out a short guide by the Raspberry Pi Foundation. We can start by opening crontab.

    crontab -e

    If this is the first time running crontab, the system will ask you to pick a text editor to modify our crontab (also known as cron table). I usually pick nano, because that is the text editor that I am most familiar with editing text files in Linux.

    Next, I navigate to the end of the crontab text file to enter our crontab entry. If you recall, we set a timer of 230 minutes before terminating the rclone instance. The reason for picking 230 minutes was to ensure that we could run our bash script again every 240 minutes or 4 hours. (Feel free to modify those time values.) Assuming that I wanted to schedule our bash script to run every 4 hours, I added the following line to our crontab.

    0 */4 * * * bash "/home/[username]/rcloneBackup.sh"

    The first column represents “minutes” on crontab, and placing 0 in the first column tells cron that we want to run the script when the minute is at 0 (e.g., 1:00 AM or 2:00 PM, but not 1:01 AM or 2:02 PM). The second column represents “hour”, and writing */4 tells cron to run the script every four hours starting from 12:00 AM. The third column represents the day of the month, the fourth column represents the month, and the fifth column represents the day of the week. I placed a * in the third, fourth, and fifth columns because I want the script to run every day of every month, on every day of the week.
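    As a quick reference, the five time fields of a crontab entry map out as follows:

```
# ┌──────────── minute (0-59)
# │ ┌────────── hour (0-23)
# │ │ ┌──────── day of month (1-31)
# │ │ │ ┌────── month (1-12)
# │ │ │ │ ┌──── day of week (0-7; Sunday is 0 or 7)
# │ │ │ │ │
  0 */4 * * *  bash "/home/[username]/rcloneBackup.sh"
```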

    On the second part of the entry, bash "/home/[username]/rcloneBackup.sh", I wrote the bash command for cron to run every four hours. This command runs our prewritten bash script from earlier.

    After making sure that there aren’t any errors in our crontab entry, I save our modified crontab and exit our text editor.

    To make sure that our new crontab entry is loaded to our cronjob, I restarted the cron service.

    sudo service cron restart

    Await results

    That’s it! Every four hours, our Raspberry Pi will upload all of our captured media to our Google Drive and save it for three weeks. Fortunately, motioneye will delete all the locally captured media files after one day, so our SD card would not get filled up to capacity.

    You can now navigate to your Google Drive motioneye folder to view your captured media using your web browser from anywhere you have an internet connection. I hope this write-up helps you manage your limited SD card storage on your Raspberry Pi while having a much larger archive of captured media files in the cloud!