
Arch -- useful user tips


abarbarian


This thread is dedicated to useful tips on running an Arch install.

 

CCACHE To Help Speed Up Compilation Process When Installing Packages From AUR

 

You need to install the ccache package first. For those wondering, ccache is a compiler cache which is used to speed up the compilation process. It speeds up recompilation by caching previous compilations and detecting when the same compilation is being done again. It supports C, C++, Objective-C and Objective-C++.
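
Once ccache is installed, pointing makepkg (and therefore AUR builds) at it is usually just a matter of enabling ccache in the BUILDENV line of /etc/makepkg.conf. A minimal sketch of that line with ccache switched on (check it against your own makepkg.conf):

BUILDENV=(!distcc color ccache check !sign)

For compiles outside makepkg you can instead put /usr/lib/ccache/bin at the front of your PATH so the ccache compiler wrappers are found before the real compilers.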

 

:breakfast:


Arch Linux can be pretty daunting to a new user. Cylon could be a very useful program to help a new user gain some experience with running Arch.

Arch is designed for experienced users, but how do you gain experience, and what do you need to gain experience in? Cylon can help in that it gives you an insight into some programs that, as a new user, you may not even have heard of, let alone had any experience with. For instance, I had never heard of "rmlint" even though I have been playing with Arch for several years.

There is a danger that if you use Cylon you may end up just scanning a menu and entering a number choice, which would not really help you to gain experience in the finer points of running an Arch install.

However, if you use Cylon and then read up on and study each of the programs you use, you will gain enough knowledge to be able to use the programs on their own in a safe and useful way.

 

Cylon – The Arch Linux Maintenance Program (2017)

 

 


 

Meet Cylon, a maintenance program for Arch Linux and derivatives. It is a menu-driven Bash script which provides updates, maintenance, backups and system checks for Arch Linux. Cylon is mainly a CLI program, and also has a basic dialog GUI.

 

It provides hundreds of useful options and tools, including the following:

  • cower: AUR package for AUR work
  • gdrive: AUR package for Google Drive backup
  • lostfiles: AUR package for finding lost files
  • pacaur: AUR helper
  • arch-audit: collects CVE data
  • rmlint: finds lint and other unwanted files
  • rkhunter: finds rootkits and other malware
  • clamav: used for finding malware
  • bleachbit: used for system cleaning
  • gnu-netcat: used for checking the network
  • ccrypt: used for encryption
  • rsync: used for backups
  • inxi: system information viewer
  • htop: interactive process viewer
  • wavemon: wireless network monitor
  • speedtest-cli: tests internet bandwidth
  • lynis: system audit tool
  • openbsd-netcat: used for checking the network
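
Cylon itself is in the AUR rather than the official repos, so installing it is the usual AUR routine. A rough sketch, assuming the package is simply named cylon (or just use an AUR helper such as pacaur from the list above):

git clone https://aur.archlinux.org/cylon.git
cd cylon
makepkg -si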

 

:breakfast:


The summary was nice, but I wonder why people simply copy the wiki entry into their articles instead of just linking to the wiki: https://wiki.archlin...ndex.php/ccache

 

Well, it is easier to copy than to write something original. Mind you, sometimes the original does not need any rewrite, as it is well presented and contains all the necessary information.

At least it looks as though the author is running an Arch install, and I am guessing that he has tried out the programs he writes articles about, unlike some article writers who just copy and paste with no real knowledge of what they are copying and pasting.

 

:breakfast:

 

Wow! Cylon seems very neat!! Thanks for posting that, I'll have to check that one out. :thumbup:

 

You don't need Cylon, as it is for the lazy/new Arch user, not super geeks like you. :tease:


  • 3 weeks later...

The Recommended Way To Clean The Package Cache In Arch Linux

 

The above link gives details of how to manually deal with your package cache. It also gives a way to automate the process, and I do love ways to make life easier.

 

Automatically clean the package cache

 

You can automate this task using pacman hooks. The pacman hook will automatically clean the package cache after every pacman transaction.

 

To do so, create a file /etc/pacman.d/hooks/clean_package_cache.hook

 

# nano /etc/pacman.d/hooks/clean_package_cache.hook

 

Add the following lines

 

[Trigger]
Operation = Upgrade
Operation = Install
Operation = Remove
Type = Package
Target = *

[Action]
Description = Cleaning pacman cache...
When = PostTransaction
Exec = /usr/bin/paccache -ruk1

 

From now on, the package cache will be cleaned automatically after every pacman transaction (upgrade, install, remove). You don't have to run the paccache command manually every time.

 

The guide shows

 

Exec = /usr/bin/paccache -r

 

in the script, but I have added the "uk1" as I only want to keep one version of the cached packages and would like to get rid of the caches of uninstalled packages.

Any excuse to fly the flag :Laughing:

 

:breakfast:


Excellent, abarbarian! I've recently started running paccache here, seems like a nice tool.

 

 

The guide shows

 

Exec = /usr/bin/paccache -r

 

in the script but I have added in the "uk1" as I only want to keep one version of the cached packages and would like to get rid of uninstalled package caches.

 

 

See: https://wiki.archlin...e_package_cache

 

The wiki seems to be saying to run paccache -rk 1 and paccache -ruk0 as separate commands:

 

You can also define how many recent versions you want to keep:

 

# paccache -rk 1

 

To remove all cached versions of uninstalled packages, re-run paccache with:

 

# paccache -ruk0

 

 

I wondered why it was explained that way in the wiki, so I played around with the --dryrun operation here, with the following results:

 

steve[~]$ sudo paccache -dk 1

==> finished dry run: 25 candidates (disk space saved: 194.54 MiB)

steve[~]$ sudo paccache -duk0

==> finished dry run: 2 candidates (disk space saved: 18.27 MiB)

steve[~]$ sudo paccache -duk1
==> no candidate packages found for pruning

 

 

By the way, it doesn't seem to matter if there's a space before the <num> or not:

steve[~]$ sudo paccache -dk1

==> finished dry run: 25 candidates (disk space saved: 194.54 MiB)

steve[~]$ sudo paccache -duk 0

==> finished dry run: 2 candidates (disk space saved: 18.27 MiB)
steve[~]$ sudo paccache -duk 1
==> no candidate packages found for pruning

 

 

Anyway, seems to me that the -u option does make paccache target only uninstalled packages, even though the word "only" isn't included here:

 

-u, --uninstalled target uninstalled packages.


securitybreach

I just use the following line in my /etc/pacman.conf:

 

CleanMethod = KeepCurrent

KeepCurrent basically runs pacman -Sc which keeps only the current versions of the installed packages in the cache. I believe this defaults to the last 3 versions as that is what my /var/cache/pacman/pkg directory shows.
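
For anyone following along, that directive lives in the [options] section of /etc/pacman.conf, roughly like this:

[options]
# keep only the package versions listed in the sync databases when cleaning the cache
CleanMethod = KeepCurrent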

 

I do use a hook to automatically take care of mirrorlist.pacnew files though.

 

/etc/pacman.d/hooks/mirrorlist.hook

[Trigger]
Type = Package
Operation = Install
Operation = Upgrade
Target = pacman-mirrorlist

[Action]
Description = Updating mirrorlist...
When = PostTransaction
Exec = /usr/bin/env sh -c "reflector --country 'United States' --latest 50 --age 24 --sort rate --save /etc/pacman.d/mirrorlist; if [[ -f /etc/pacman.d/mirrorlist.pacnew ]]; then rm /etc/pacman.d/mirrorlist.pacnew; fi"

 

This basically uses reflector to grab the latest 50 mirrors for the USA and sort them by speed. Then it removes the /etc/pacman.d/mirrorlist.pacnew file.


 

 

The wiki seems to be saying to run paccache -rk 1 and paccache -ruk0 as separate commands:

 

 

 

By the way, it doesn't seem to matter if there's a space before the <num> or not:

 

 

In the article he uses "rk1" and "rk 1", and it had me puzzled too, so I tried both and they both worked. It must be down to the way the program is coded, as some programs would throw a fail if you left out (or added) a space.

 

The " u " is indeed for uninstalled packages as I installed and uninstalled a package to see what it did.

 

He also mentions Securitybreach's " pacman -Sc " tip, though not in as much detail.

 

Thanks for the tip about the pacman mirrorlist hook, SB. I'll have a look at it later as I am geeked out at the moment. :breakfast:


securitybreach

He also mentions Securitybreache's " pacman -Sc " tip though not in as much detail.

 

Thanks for the tip about the pacman mirror list hook SB I'll have a look at it later as I am geeked out at the moment. :breakfast:

 

Well, the -Sc switch is the built-in method:

 

pacman stores its downloaded packages in /var/cache/pacman/pkg/ and does not remove the old or uninstalled versions automatically, therefore it is necessary to deliberately clean up that folder periodically to prevent it from growing indefinitely in size.

 

The built-in option to remove all the cached packages that are not currently installed is:

# pacman -Sc

 

https://wiki.archlin...e_package_cache


He also mentions Securitybreache's " pacman -Sc " tip though not in as much detail.

 

Thanks for the tip about the pacman mirror list hook SB I'll have a look at it later as I am geeked out at the moment. :breakfast:

 

Well the -Sc switch is the built in method

 

pacman stores its downloaded packages in /var/cache/pacman/pkg/ and does not remove the old or uninstalled versions automatically, therefore it is necessary to deliberately clean up that folder periodically to prevent such folder to grow indefinitely in size.

 

The built-in option to remove all the cached packages that are not currently installed is:

# pacman -Sc

 

https://wiki.archlin...e_package_cache

 

Yeah, but your "CleanMethod = KeepCurrent" tip is a very neat one, as it does not need any user input apart from the initial tweak. :laugh:


  • 2 years later...
abarbarian

Example of rsync for simple backups with aliases

 

I have recently been looking into various ways to make backups, both data and full system backups, and there are some excellent methods out there. I tried out a few different offerings and found them a tad too un-KISS for my simple needs. Besides, I like to tinker.

I have my Arch set up with:

Boot on an EFI partition

Root on a partition

Home on a partition

I wanted to make backups of the data in my /home, and as I am using Arch more for gaming I also wanted to make backups of my root partition, as Linux game data is kept there as well as in the /home partition. I am not sure what game files are kept where, but there seem to be an awful lot of them spread around all over the place. I cannot be bothered tracking down what is where and whether it needs saving, so I decided to just make backups of whole partitions with some exclusions.

I have an external USB 3 dock where I can slot in two SATA drives if needed. This is very handy as I can utilise all my older HDDs and newer SSDs. The dock is only turned on when I need to use it, which means that a lot of the backup solutions that offer scheduled automatic backups or cron jobs are not much use to me.

So I settled on using rsync. I made three rsync commands and made aliases for them; they create folders (BOOT, SYSTEM and HOME) on my backup drive, which is called HISTORY, and save the necessary data to them. The first run of the commands takes some time, half an hour to three quarters of an hour, as my home is about 80 GB and root is about 38 GB; boot is a mere 250 MB. On subsequent runs only a couple of minutes is needed; the time depends mainly on how many different browsers I have used and how many updates pacman has done.

These are the alias commands,

 

alias    bback="sudo rsync -vaAXHShix --delete /boot/ /run/media/bloodaxe/HISTORY/BOOT"

alias    sback="sudo rsync -vaAXHShix --delete --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/home/","/var/lib/dhcpcd/*","/lost+found"} / /run/media/bloodaxe/HISTORY/SYSTEM"

alias    hback="rsync -vaHAXShix --delete --exclude=slots --exclude=tempfile --exclude=.local/share/Trash --exclude=projects --exclude=.local/share/gvfs-metadata --exclude=/home/*/.gvfs /home/bloodaxe/  /run/media/bloodaxe/HISTORY/HOME"

Here is a sample output after they have run, truncated as the output can be quite long.

 

Quote

16:54:14-->Wed May 27-->~
-->bback
sending incremental file list
*deleting   EFI/BOOT/icons-backup/os_trusty.png
*deleting   EFI/BOOT/icons-backup/os_mac.png
*deleting   EFI/BOOT/icons-backup/mouse.png
.d..t...... EFI/BOOT/icons-backup/
sent 7.62K bytes  received 151 bytes  15.55K bytes/sec
total size is 82.04M  speedup is 10,555.11

 

16:54:24-->Wed May 27-->~
-->sback
sending incremental file list
.d..t...... dev/
.d..t...... etc/
>f..t...... etc/ld.so.cache
.d..t...... root/
>f.st...... root/.bash_history
.d..t...... tmp/
.d..t...... usr/
.d..t...... usr/bin/
>f..t...... usr/bin/ffmpeg
>f..t...... usr/bin/ffplay
>f..t...... usr/include/libavcodec/avcodec.h
*deleting   var/lib/pacman/local/ffmpeg-1:4.2.3-1/mtree
*deleting   var/lib/pacman/local/ffmpeg-1:4.2.3-1/files
*deleting   var/lib/pacman/local/ffmpeg-1:4.2.3-1/desc
*deleting   var/lib/pacman/local/ffmpeg-1:4.2.3-1/
.d..t...... var/lib/pacman/
.d..t...... var/lib/pacman/local/
cd+++++++++ var/lib/pacman/local/ffmpeg-1:4.2.3-2/
>f+++++++++ var/lib/pacman/local/ffmpeg-1:4.2.3-2/desc
sent 310.49M bytes  received 39.60K bytes  7.48M bytes/sec
total size is 39.58G  speedup is 127.45

 

16:55:14-->Wed May 27-->~
-->hback
sending incremental file list
.d..t...... ./
>f.st...... .bash_history
>f.st...... client_state.xml
>f.st...... client_state_prev.xml
>f..t...... daily_xfer_history.xml
>f.st...... job_log_www.worldcommunitygrid.org.txt
*deleting   .cache/mozilla/firefox/2wtpmdoa.default/cache2/ce_YSwJaHR0cHM6Ly93d3cuYW1hem9uLmNvLnVr
.d..t...... notices/
>f..t...... notices/archive_www.worldcommunitygrid.org_viewNoticesRSSFeed.action.xml
>f..t...... notices/www.worldcommunitygrid.org_viewNoticesRSSFeed.action.xml
sent 43.75M bytes  received 27.91K bytes  1.79M bytes/sec
total size is 78.03G  speedup is 1,782.43
16:55:47-->Wed May 27-->~
-->

 

I may have gone overboard with the rsync switches, but I wanted to cover all the bases. I included the " i " option as it gives a nice output of what is going on with each file. You get an "f" for a file, a "d" for a directory, the "s" and "t" tell you the size and time have altered, and "deleting" is obvious.

I still have some fine tuning to do, as I run BOINC and there are a lot of small files relating to that in /home that I do not need to back up. Adding them to the rsync command would make for a very long command, so I will use the exclude-from text file option at some later time.

Whilst searching for information I came across a way to use bash to run several commands one after the other, so I created an alias to run all three backup commands like so,

alias    allback="bback && sback && hback"

I have to have a separate command for boot as it is on its own partition, and I have included the "x" option, which stops rsync crossing filesystem boundaries when recursing. Good job I checked the contents of the system backup whilst doing a trial run.

Also, if playing around with rsync it is best to use the " n " dry-run option first before doing a real run through.
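
For example, a dry-run twin of the boot alias is just the same command with "n" added to the switches; nothing is copied or deleted, you just see what would happen:

alias    bbackdry="sudo rsync -vaAXHShixn --delete /boot/ /run/media/bloodaxe/HISTORY/BOOT"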

 

When I have time I am going to see if I can successfully transfer the three backups to another SSD and get a functional system running. According to the Arch wiki that should be possible. I need a bit of a rest for now so will try at some future date.

 

😎


abarbarian

Example of rsync as a script with functions for simple backups

 

Arch has a folder " ~/bin " for placing user-made scripts. Scripts placed there can be called up and run by simply using the script's name. I cannot remember if Arch automatically makes this "~/bin" folder on a new install. If it does, then it will be included in your user's "$PATH". If you have the folder, you can check to see if it is included in your user $PATH by running "set":

 Put brain in gear befor pressing enter14:27:35-->Sun May 31-->~
-->set
BASH=/bin/bash
OSTYPE=linux-gnu
PATH=/usr/lib/ccache/bin/:/usr/lib/ccache/bin/:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/home/bloodaxe/bin:/usr/share/applications:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/home/bloodaxe/bin:/usr/share/applications
 Put brain in gear befor pressing enter14:27:35-->Sun May 31-->~
-->

You can see that I have the "~/bin" (/home/bloodaxe/bin) folder in my $PATH.
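
A quicker check than wading through the whole of the "set" output is to just filter $PATH:

echo "$PATH" | tr ':' '\n' | grep "$HOME/bin"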

 

If you do not have the "~/bin" folder, open a terminal and create it; make sure that your "pwd" is "~/" when doing this:

 Put brain in gear befor pressing enter14:44:29-->Sun May 31-->~
-->mkdir bin

Then add this to your "~/.bashrc":

export PATH=$PATH:$HOME/bin

and do

 Put brain in gear befor pressing enter14:58:19-->Sun May 31-->~
-->source ~/.bashrc

Now you can make a script. Go to "~/bin" and make a new file, giving it the name you wish to call your script. Mine is called "dback", shorthand for daily backup. In the script I have made three functions to back up data from three different partitions, "/boot", my root "/", and "/home", with a fourth function that runs the three in order. The destinations for the backups are folders on an external USB drive: BOOT, SYSTEM and HOME. The drive is mounted as "/run/media/bloodaxe/HISTORY".

 

The next thing to do is give the shell permission to execute your script. This is done with the chmod command as follows:

 Put brain in gear befor pressing enter12:25:02-->Mon Jun 01-->~
-->chmod 700 dback

Using "755" will give you read, write, and execute permission. Everybody else will get only read and execute permission. If you want your script to be private (i.e., only you can read and execute), use "700" instead.

 

Now to use the script I run "dback" in a terminal.

 Put brain in gear befor pressing enter14:58:14-->Sun May 31-->~
-->dback

However, my backup destination is an external drive, and if it is not switched on or the drive is not mounted then this happens when I run "dback":

 Put brain in gear befor pressing enter14:58:14-->Sun May 31-->~
-->dback
sending incremental file list
rsync: mkdir "/run/media/bloodaxe/HISTORY/BOOT" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(664) [Receiver=3.1.3]
sending incremental file list
rsync: mkdir "/run/media/bloodaxe/HISTORY/SYSTEM" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(664) [Receiver=3.1.3]
sending incremental file list
rsync: mkdir "/run/media/bloodaxe/HISTORY/HOME" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(664) [Receiver=3.1.3]
 Put brain in gear befor pressing enter14:58:19-->Sun May 31-->~
-->

With the drive mounted, running "dback" gives me the expected result as detailed in the previous post.

 

Here is the script,

#!/bin/bash
# Backups for /boot, /, and /home. The script runs all three functions.

# /boot backup
bback () {
    sudo rsync -vaAXHShix --delete \
         /boot/ \
         /run/media/bloodaxe/HISTORY/BOOT
}

# root backup
sback () {
     sudo rsync -vaAXHShix --delete \
     --exclude=/dev/* \
     --exclude=/proc/* \
     --exclude=/sys/* \
     --exclude=/tmp/* \
     --exclude=/run/* \
     --exclude=/mnt/* \
     --exclude=/media/* \
     --exclude=/home/* \
     --exclude=/var/lib/dhcpcd/* \
     --exclude=/lost+found \
     / \
     /run/media/bloodaxe/HISTORY/SYSTEM
}

# /home backup
hback () {
    rsync -vaHAXShix --delete \
        --exclude=slots \
        --exclude=tempfile \
        --exclude=.local/share/Trash \
        --exclude=projects \
        --exclude=.local/share/gvfs-metadata \
        --exclude=/home/*/.gvfs \
        /home/bloodaxe/ \
        /run/media/bloodaxe/HISTORY/HOME
}

# Establish run order
main ()  {
      bback
      sback
      hback
}

main
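
A possible guard against those unmounted-drive errors would be a check at the top of the script; a minimal sketch, assuming the same mount path and the mountpoint utility from util-linux:

# abort early if the backup drive is not mounted
if ! mountpoint -q /run/media/bloodaxe/HISTORY; then
    echo "Backup drive HISTORY is not mounted - aborting." >&2
    exit 1
fi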

 

I still have some work to do on this script, though it does work as is. I would like to exclude some more folders and files to speed up the backup and save some space. To do this I will create an "exclude text file" and add it to the script. I'll post the new script later on once I have worked out exactly how to do it and tested it.

 

😎


securitybreach
1 hour ago, abarbarian said:

Arch has a folder " ~/bin " for placing user made scripts. Scripts placed there can be called up and run by simply using the scripts name. I can not remember if Arch automatically makes this "~/bin" folder on a new install. If it does then it will be included in your users "$PATH".

 

Nope, I have never had a ~/bin folder before.


abarbarian
3 hours ago, securitybreach said:

 

Nope, I have never had a ~/bin folder before.

 

I did not think I had one in a fresh Arch install, but I noticed that I had two "/home/bloodaxe/bin" entries in my $PATH so could not be sure. I have fiddled around with scripts before and thought I had made the ~/bin folder and added the $PATH hack from a guide I was following, but could not be sure.

I just checked out the $PATH I posted and it looks like 90% of it is duplicated, which is a puzzle; maybe I should post that in my Puzzle thread 🤣


abarbarian

Example of rsync as a script with functions for simple backups using an exclude-from file

 

Follow the same steps as in the previous post, but in the script, instead of adding every "--exclude" on a separate line, use an "--exclude-from=" text file.

 

I created two exclude-from files, "ex-root" and "ex-home", and placed them in "/home/bloodaxe/Linux/Scripts/dback/". This is the "ex-root" exclusion file,

 

Quote

/dev/*
/proc/*
/sys/*
/tmp/*
/run/*
/mnt/*
/media/*
/home/*
/var/lib/dhcpcd/*
/lost+found

 

Here are the changes I made to the script,

 

Quote

 

#!/bin/bash
# Backups for /boot, /, and /home. The script runs all three functions.

# /boot backup
bback () {
    sudo rsync -vazAXHShix --delete \
         /boot/ \
         /run/media/bloodaxe/HISTORY/BOOT
}

# root backup
sback () {
     sudo rsync -vazAXHShix --delete \
     --exclude-from=/home/bloodaxe/Linux/Scripts/dback/ex-root \
     / \
     /run/media/bloodaxe/HISTORY/SYSTEM
}

# /home backup
hback () {
    rsync -vazAXHShix --delete \
        --exclude-from=/home/bloodaxe/Linux/Scripts/dback/ex-home \
        /home/bloodaxe/ \
        /run/media/bloodaxe/HISTORY/HOME
}

# Establish run order
main ()  {
      bback
      sback
      hback
}

main

 

 

As you can see, it is a much neater script, and having an exclude-from file makes it easier to add extra files and folders for exclusion.

 

Searching on the net I found that there are a few different ways to write the exclude-from file. The way I settled on seemed simplest to me and also looks the neatest.

 

These links were helpful,

https://www.howtogeek.com/168009/how-to-exclude-files-from-rsync/

 

https://sites.google.com/site/rsync2u/home/rsync-tutorial/the-exclude-from-option

 

https://askubuntu.com/questions/320458/how-to-exclude-multiple-directories-with-rsync

 

I will be refining the script again but have a load of reading to do first. So an update may take some time.

 

😎


Debian doesn't have a ~/bin by default but it recognises it in $PATH if you make one. I have several scripts in there. I installed something with PIP recently which it placed in ~/.local/bin/ and PIP kindly alerted me to manually add that directory to $PATH.

 

You can simplify the syntax a little for your rsync exclude file. Here's mine for home backup:

chromium
rsyncerrors
rsynclog
Sync
temp
.cache
.local
.mixxx
.thumbnails
.wpa_cli_history

Note that 2 of those are files and the rest directories. Everything is a file in Linux! 😉  It doesn't need "/" or "*", and it excludes the directory and all its contents. I think yours only excludes the contents.

I actually just keep the commands for rsync in a text file, as I usually like to do a dry-run first, so I copy/paste that and then just delete the --dry-run to do the actual sync.

rsync -avi --dry-run --delete --progress --exclude-from=rsync_exclude /home/roger/ /mnt/stash/Backup/home-brain2/ 1>rsynclog 2>rsyncerrors

Which reminds me - it's about backup time :) I don't automate it as it doesn't change that fast and I don't keep much important stuff in home anyway. I treat it as a staging post for downloading and daily work before moving things I want to keep to a storage drive.

 

A short time later - Backup done. I changed the command a little so it would write messages to a file simultaneously with showing them on the terminal. Previously it only wrote to 2 files, and the "rsync-errors" file was always empty anyway. I also removed "--progress" - it shows only the progress of each file rather than the whole process, so is not particularly useful in this case:

rsync -avi --dry-run --delete --exclude-from=rsync_exclude /home/roger/ /mnt/stash/Backup/home-brain2/ 2>&1 | tee -a rsynclog

 


abarbarian
3 hours ago, sunrat said:

Note that 2 of those are files and the rest directories. Everything is a file in Linux! 😉  Doesn't need "/" or "*" and excludes the directory and all its contents. I think yours only excludes contents.

 

You are looking at the exclude file for my root partition backup. You are correct in thinking that the way I have written it keeps the folder but does not keep the contents; this is because I will try to use this script to make a fresh install on a new drive, so I would like to have the folders created but not their temporary contents. My exclude file for home looks pretty much like yours, but a little longer.

I keep a copy of the script in my Zim on a sub-page of the Scripts entry, and I use the "n" option whilst testing. I never came across any mention of logging the rsync run so I do not do so; it seems a bit superfluous if testing with the "n" option.

There are a ton of guides out there, but a lot of them are cut-and-paste jobs just presented slightly differently. The rsync man pages are good but take some working through and are a bit confusing for a novice like me, though you can find some clear and concise guides, or guides with some relevant sections, out there. Rsync was designed for backing up servers to remote locations, not for small local backups, so it is hardly surprising that the guides are a tad complex and lacking in basic instructions.

Where do you keep your exclusion file, as I notice you do not state a path for it? 😎


38 minutes ago, abarbarian said:

Where do you keep your exclusion file as I notice you do not state a path for it ? 😎

 

Haha, I had a feeling you'd ask that but my post was long enough already. I keep it in HOME and always run rsync from there. Of course it can go anywhere if you specify its full path.

I never rsync the full system as I do a regular Clonezilla backup.


abarbarian
1 hour ago, sunrat said:

 

Haha, I had a feeling you'd ask that but my post was long enough already. I keep it in HOME and always run rsync from there. Of course it can go anywhere if you specify its full path.

I never rsync the full system as I do a regular Clonezilla backup.

 

Ah ha, I see. Why on earth do you not have your rsync command as an alias? You could make changes to it easily enough. 😎


13 minutes ago, abarbarian said:

 

Ah ha, I see. Why on earth do you not have your rsync command as an alias ? You could make changes to it easily enough. 😎

 

¯\_(ツ)_/¯


  • 1 year later...
abarbarian

For anyone running the Surfshark VPN there is now a GUI application in the AUR.

 

https://aur.archlinux.org/packages/surfshark-gui-bin

 

I installed it and it works, but it has a few very minor kinks on my Arch; these kinks could be down to me or the program.

 

I am using IceWM as my window manager and Surfshark shows up in the application menu. On starting Surfshark a pop-up box appears asking for a keyring password. This I believe is connected with "gnome-keyring", which asked me to add a new password on initial setup. Once the password is entered Surfshark opens to the GUI.

 

Once opened, Surfshark shows a disconnected pop-up. A click dismisses this and you can click the disconnect button, which throws up another pop-up saying this connection is already configured, and it does not connect. Clicking on this dismisses the pop-up, and you can then click on your chosen connection and you are successfully connected. You can then close the GUI and Surfshark runs in the background OK, with a small icon showing in the task bar. I believe that this surfshark-gui-bin program uses WireGuard and not OpenVPN.

 

I added this to my "startup folder" so that I could have Surfshark start on boot-up. I have added other programs to this folder and they start as intended. However, Surfshark does not, as it requires the aforementioned password.

 

Quote

Added this to /home/bloodaxe/bin/startup

# start surfshark-gui-bin
/usr/bin/surfshark  &

 

So I have a few questions for you good folk, and maybe you can help. I have read the relevant pages in the Arch Wiki but cannot see any clear answers.

 

Is it possible to automatically enter the password so that I can have Surfshark start automatically when I boot up?

 

Do I need to make some alterations in the WireGuard configs so that when the Surfshark GUI starts it automatically finds my chosen connection and connects?

 

Other than the couple of quick clicks and the password entry needed, which are only minor annoyances, Surfshark runs OK in the background. It runs nicely on my Android phone too.

 

Thanks in anticipation for the excellent help you folks will give me. 😎

 

 


securitybreach

Well, I have never used Surfshark, but it sounds like you need to autostart gnome-keyring via IceWM: https://wiki.archlinux.org/title/IceWM#Autostarting

 

If you are using IceWM, you should also start surfshark via the icewm config file ~/.icewm/startup
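
Something along these lines in ~/.icewm/startup (made executable) should cover both the keyring and the client; a minimal sketch, assuming the surfshark binary path from your earlier post and that the secrets component is all it needs:

#!/bin/sh
# start the keyring daemon so Surfshark can reach its stored secrets
eval "$(/usr/bin/gnome-keyring-daemon --start --components=secrets)"
# start the Surfshark GUI in the background
/usr/bin/surfshark &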

 

As far as connections go, perhaps this will help: https://support.surfshark.com/hc/en-us/articles/360017418334-How-to-set-up-Surfshark-VPN-on-Linux-Legacy-version-

 

That said, Surfshark is proprietary, so who knows what the client is actually doing. I prefer Mullvad and have been using them for years now. It's $5 a month and you can even send them cash in the mail to Sweden. They require zero registration, not even an email address. You go to the site, pay for 1 month via bitcoin or card, and you get an autogenerated account number that you log in with. Then, if you want, you can write the generated token from your account on a piece of paper and send cash for payment. It usually takes about a month to arrive and post, but it's complete anonymity. That said, you can use credit, debit, bank transfer, crypto, cash, etc. to make payment.

 

Plus Mullvad helped develop WireGuard in the beginning and is still one of its main supporters: https://en.wikipedia.org/wiki/WireGuard


abarbarian
6 hours ago, securitybreach said:

If you are using IceWM, you should also start surfshark via the icewm config file ~/.icewm/startup

 

I had my startup in the wrong folder. Thanks.

 

Just tried a reboot and now I automatically get gnome-keyring asking for a password, so that is one step closer to my goal.

 

6 hours ago, securitybreach said:

it sounds like you need to autostart gnome-keyring via IceWM

 

Hmm if I did,

 

Quote

# start gnome-keyring
/usr/bin/gnome-keyring

 

in ~/.icewm/startup, would I still have to input the password? It would be one step less, I guess.

Trouble is, there is a gnome-keyring-3 and a gnome-keyring-daemon. Hmmmm, more reading and info needed methinks.

 

Mullard is good by all accounts, and kudos to them for the WireGuard work. I still have some time to go on my paid Surfshark so I will stick with it for now.

 

😎


securitybreach
1 hour ago, abarbarian said:

Trouble is there is a gnome-keyring-3 and a gnome-kering-daemon. Hmmmm more reading and info needed methinks.

 

Mullard is good by all accounts and kudos to them for the wireguard work. I still have some time to go with my paid Surfshark so I will stick for now.

 

😎

 

Sorry, haven't messed with GNOME since the 2.x days.

 

BTW it's Mullvad, not mullard ;)

