
Running two OSes on the same PC???



abarbarian

Way back when, I tried installing two Linux distros on two separate drives on my PC. I remember running into problems because they both tried to use just one /home partition.

 

I would like to install Arch on two separate SSDs on my PC: use one as my main workstation OS and the other as a test bed. Will I run into any problems if I install a separate copy of Arch on each of the SSDs?

 

😎

  • Like 1

If I were going to do this (and I am not), I would set the test system up on an external SSD and then boot from it using the setup menu. You are using rEFInd as I recall, so that boot manager might facilitate a dual boot. I would be concerned about trashing the kernel on your production system. Josh will likely have some good suggestions.

 

https://teejeetech.com/2020/09/05/linux-multi-boot-with-refind/

Edited by raymac46
  • Like 1

Hedon James

I'm sure you have your reasons, but I'm wondering why not just set up your "test bed" Arch in a VM?  For me at least, that's the whole point of VMs....try it out in a "test bed" situation before you commit to your production OS...thereby preventing your production OS from getting FUBARed.

  • Like 1

Pussies! 🤣 At last count, I have seven OSs on one SSD. No shared home though, separate data drives or partitions for saved data.

  • Haha 1
  • Agree 2

Over the years I have amassed a collection of junker laptops that enables me to run the handful of distros I am interested in "on the rails." No need for chainloading or whatever you do with EFI these days.

I used a VM to practice my Arch install technique until I got it right. However I found that Arch in a VM gave me more than my share of black screens, crashes, and recovery console events. I think it's because Arch is a bleeding edge distro and often "outruns" the updates in VirtualBox and its Guest Additions.
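
For what it's worth, the usual advice for an Arch guest is to use the distro's own guest utilities rather than the Guest Additions ISO; a minimal sketch, assuming current Arch package and service names:

# run inside the Arch guest; package/service names assume current Arch repos
sudo pacman -S virtualbox-guest-utils
sudo systemctl enable --now vboxservice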


V.T. Eric Layton

At one time, I was running 18 GNU/Linux OSes on a single (mechanical) drive. The trick is to partition properly. Make a /(root) and a /home partition for each OS. This will allow you to keep the operating systems separated; the user would have a dedicated /home for each of them.
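
To picture that, a hypothetical GPT layout for two such installs on one disk might look like this (device name and sizes are purely illustrative):

/dev/sda1   512M   EFI system partition (shared by every install)
/dev/sda2    40G   / (root) for OS #1
/dev/sda3    80G   /home for OS #1
/dev/sda4    40G   / (root) for OS #2
/dev/sda5    80G   /home for OS #2
/dev/sda6     8G   swap (shareable, unless you hibernate)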

  • Agree 1

Hedon James

I'll echo Sunrat and VT's statements.  I also have root and home on the same partition, but ALWAYS create a symlink to my DataDrive, which is ALWAYS a separate HDD.  I also back up the DataDrive to a separate external BackupDrive.  I'm just super-cautious and redundant that way because of a catastrophic event I experienced in my early 20s.  It took YEARS to rebuild my database of files.  I've been a diligent backer-upper ever since.

 

But I digress....back to abarbarian's scenario.....I always have / (root) and /home on 1 disk (I used to partition for dual or triple booting, but I don't do that anymore); DataDrive on a 2nd disk; and Backup on a 3rd disk.  If ANY of these drives fails, I'm just a LiveUSB re-install away from having my OS restored intact, or my DataDrive restored in full from the backup, or a new Backup created on a new drive.  JMO....it works for ME.
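
The symlink itself is a one-liner once the data drive is mounted; a sketch, where the mount point and link name are only examples:

# assumes the data drive is already mounted at /mnt/datadrive (example path)
ln -s /mnt/datadrive ~/DataDrive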


The last time I had several separate drives on a PC, I used an HD switch.  I'm not really familiar with having multiple SSD drives on a PC...  So, are you talking about several separate SSDs or several partitions on a single SSD?  🤔

 


On my desktop Linux system I have an SSD and an HDD but I don't symlink anything. I keep /home/ray on the same SSD partition with / and then simply edit fstab to mount the HDD as /home/ray/datadrive2.
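
For anyone wanting to copy that, the fstab entry is a single line; a sketch, with the UUID and filesystem type as placeholders (find the real ones with lsblk -f or blkid):

# /etc/fstab
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home/ray/datadrive2  ext4  defaults,noatime  0  2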

Edited by raymac46

Hello,

Is there any reason not to run the second instance of Arch inside of a virtual machine as a guest on the host operating system?

Regards,

Aryeh Goretsky
 


securitybreach

You could install each one like normal to each drive and then make an entry in your bootloader menu to point to the secondary drive.
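
With GRUB that would be a stanza in /etc/grub.d/40_custom; with rEFInd it is a manual stanza in refind.conf (often unnecessary, since rEFInd autodetects). A sketch of the rEFInd version, with the volume label, kernel filenames, and PARTUUID as placeholders:

menuentry "Arch Linux (second SSD)" {
    volume ARCH_TEST
    loader /vmlinuz-linux
    initrd /initramfs-linux.img
    options "root=PARTUUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx rw"
}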


securitybreach

I haven't done so, but I don't see why it wouldn't work. There are tons of guides out there, but they are for doing so with Windows and Linux.


V.T. Eric Layton

Oh, my... I may be a wee bit inebriated. This isn't good behavior for an Admin, dammit. :(

  • Like 1
  • Haha 1
  • Agree 1
  • +1 1

V.T. Eric Layton

Hey... have I told you folks that I love you lately?

 

Yeah... Yeah... drunken texts. ;)

  • Like 2
  • Haha 2

abarbarian
18 hours ago, raymac46 said:

You are using rEFInd as I recall

 

Yes I am, and I will be using GPT as my setup is UEFI. As Arch is UEFI-friendly, rEFInd will automatically find a bootloader and show it at the boot screen. That is one of the best features of UEFI setups: you do not need to mess around with GRUB any more.
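
For anyone following along at home, getting rEFInd onto an Arch ESP is normally just two commands (I believe the package is now named refind, formerly refind-efi):

sudo pacman -S refind
sudo refind-install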

 

17 hours ago, Hedon James said:

why not just set up your "test bed" Arch in a VM?

 

Why on earth would I want to mess around with a VM? By the time I have figured out how to set up and run one, I could have a real install set up and running in far less time. As far as my limited knowledge goes, VMs do not run the same as a real install. They do not give you a full and clean running experience, and they add an extra layer of complexity which can go wrong. For test purposes they may be useful. They are like driving simulators or FPS games: OK for learning how a system works, but nothing like living the real experience.

 

16 hours ago, raymac46 said:

I used a VM to practice my Arch install technique until I got it right.

 

A fine example of when to use a VM.

 

16 hours ago, raymac46 said:

However I found that Arch in a VM gave me more than my share of black screens, crashes, and recovery console events.

 

A fine example of the difference between running in a VM and running for real.

 

16 hours ago, V.T. Eric Layton said:

Make a /(root) and a /home partition for each OS.

 

That was my intention, as it makes sense. Eighteen on one drive; I bet that kept the old grey cells awake.

 

17 hours ago, Hedon James said:

try it out in a "test bed" situation before you commit to your production OS...thereby preventing your production OS from getting FUBARed.

 

There may have been improvements to VMs, but in the past they had problems with graphics and some other necessary processes which did not act like a real install. So yes, you can test some stuff, but not a full set.

 

11 hours ago, wa4chq said:

The last time I had several separate drives on a pc, I used a HD switch.  I'm not really familiar with having multiple ssd drives on a pc...  So, are you talking about several, separate ssd's or several partitions on a single ssd?

 

I am puzzled as to why you had an HD switch. If you plug in a drive (HDD/SSD/NVMe), the BIOS will recognise it, and most OSes will recognise it straight away and let you mount or unmount it.

 

I am going to have one NVMe for my main Arch, which will have separate boot/root/home/swap partitions; one NVMe with one partition for games and general storage; one SSD with one partition for clone backups of the main Arch; and one SSD with separate root/home partitions for the test Arch OS. I will not need a separate boot partition on the test Arch and will not even install a boot loader, since rEFInd will automatically find and boot the kernel's built-in EFI stub loader.
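
Carving the test SSD into those two partitions is only a few commands; a sketch with parted, where /dev/sdX and the sizes are placeholders:

# WARNING: destructive; /dev/sdX and the sizes are placeholders for the test SSD
sudo parted -s /dev/sdX mklabel gpt
sudo parted -s /dev/sdX mkpart arch-test-root ext4 1MiB 60GiB
sudo parted -s /dev/sdX mkpart arch-test-home ext4 60GiB 100%
sudo mkfs.ext4 /dev/sdX1
sudo mkfs.ext4 /dev/sdX2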

 

14 hours ago, Hedon James said:

but ALWAYS create a symlink to my DataDrive, which is ALWAYS a separate HDD. 

 

I have no idea about symlinks. I just use my file manager to access drives, internal or external, permanent or transitory. If I want them to be permanently accessible I can add them to fstab.

 

8 hours ago, raymac46 said:

On my desktop Linux system I have an SSD and an HDD but I don't symlink anything. I keep /home/ray on the same SSD partition with / and then simply edit fstab to mount the HDD as /home/ray/datadrive2.

 

A good example of how easy using Linux is.

 

7 hours ago, goretsky said:

Is there any reason not to run the second instance of Arch inside of a virtual machine as a guest on the host operating system?

 

Yes. I know nothing about running in a VM and do not wish to do the research into it. Also, I believe a VM does not act 100% like a real install. I could probably do a fresh install of Arch in around an hour; it would take me considerably longer to read up on VMs. Running from a VM would open me up to the possibility of operator error and any other VM errors. I cannot see any positives to doing so.

 

7 hours ago, securitybreach said:

You could install each one like normal to each drive and then make an entry in your bootloader menu to point to the secondary drive.

 

No need to add an entry to a bootloader, as rEFInd will find the kernel's included EFI stub loader.
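
For kernels booted that way, rEFInd reads its boot options from a refind_linux.conf file kept next to the kernel; a minimal sketch, with the PARTUUID as a placeholder:

# /boot/refind_linux.conf — each line is a menu label followed by kernel options
"Boot with standard options"   "root=PARTUUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx rw"
"Boot to single-user mode"     "root=PARTUUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx rw single"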

 

Quote

rEFInd has several features:[7]

  • Automatic operating systems detection.
  • Customisable OS launch options.
  • Graphical or text mode. Theme is customisable.[8]
  • Mac-specific features, including spoofing booting process to enable secondary video chipsets on some Mac.
  • Linux-specific features, including autodetecting EFI stub loader to boot Linux kernel directly and using fstab in lieu of rEFInd configuration file for boot order.

 

I have dual booted for over ten years using different techniques so am quite happy doing so.

 

My original question was because I have memories of difficulty with running two Linux OSes on the same PC. This was when I was using HDDs and MBR, with only four primary partitions per drive. I think I was trying to run Mandriva and Ubuntu, and they were both trying to use one common /home, which, as you can see, led me into all sorts of problems. As I knew little about PCs and less about Linux, I had a darn steep learning curve.

 

18 hours ago, sunrat said:

Pussies! 🤣 At last count, I have seven OSs on one SSD. No shared home though, separate data drives or partitions for saved data.

 

I reckon that answers my question. Do you have each OS on just one partition, so home and root are together? Or do you have separate / (root) and /home partitions for each OS?

 

Thanks for all the replies folks. 😎

  • Like 2
  • +1 1

I had a switch because I had three HDs and it was the mid-'90s.  It was a little bit more involved unplugging and plugging in hard drives back then....lol.

[attached image: switch]


abarbarian
14 minutes ago, wa4chq said:

I had a switch because I had three hd's and it was mid-90's.  It was a little bit more involved unplugging and plugging hard drives back then....lol.

switch

 

Neat solution. 😎

 

A friend uses slot-in drive carriers that go into the CD/DVD drive bays. He plays games with a Windows drive slotted in, then takes it out and slots in a Mint Linux drive for daily use.


2 minutes ago, abarbarian said:

 

Neat solution. 😎

 

A friend uses slot in drive carriers that go into the cd/dvd drive slots. Plays games with a Windows slot in drive takes it out slots in a Mint linux drive for daily use.

That'll work.  The only drawback I see is the need to store the drive outside the box and have it handy when you want to use a different one.  With my luck, I'd forget where I put the thing!

 

  • Haha 1

VMs are pretty good as long as you don't need fancy graphics and don't mind a little latency. There is a bit of a learning curve with Guest Additions in VirtualBox but some of the more user friendly distros work out of the box to give you a full screen experience.

I have had good results with the more old school point release distros like Linux Mint or Ubuntu.

In terms of user experience, On the Rails > Virtual Machine >> Live on USB.

It is worth learning a bit about VMs but I don't see them as a permanent way to run a distro. Much better than distro farming and multibooting if all you want to do is test. You can bork them without destroying your main system. Wifi is dead easy as the VM thinks it is wired through your (working) main system wifi.

That said I think you already know how to multiboot with EFI and if you are careful with partitioning you'll be fine.

  • Like 1

I wouldn't think you'd need a separate /home partition for a test system even if installed on bare metal. You shouldn't have a lot of mission critical data on a test system so when you reinstall you just start from scratch. Of course with Arch you never reinstall...:w00tx100:

Edited by raymac46
  • Like 1

Hedon James
5 hours ago, abarbarian said:

 

Why on earth would I want to mess around with a VM. By the time I have figured out how to set up and run one I could have a real install set up and running in far less time. As far as my limited knowledge knows VM's do not run the same as a real install. They do not give you a full and clean running experience. They add an extra layer of complexity which can go wrong. For test purposes they may be useful. They are like driving simulators or FPS games ok for learning how a system works but nothing like living a real experience.

 

A fine example of when to use a VM.

 

A fine example of running in a VM as opposed to running for real.

 

There may have been improvements to VM's but in the past they had problems with graphics and some other necessary processes which did not act like a real install. So yes you can test some stuff but but not a full set.

 

I have no idea about symlinks. I just use my file manager to access drives internal or external, permanent or transitionary. If I want them to be permanently accessible I can add them to fstab.

 

Yes. I know nothing about running in a VM and do not wish to do the research into it. Also I believe a VM does not act 100% like a real install. I could probably do a fresh install of Arch in around an hour. It would take me considerably longer to read up on VM's. Running from a VM would open me up to the possibility of operator error and any other VM error. I can not see any positives to doing so.

 

Well it is your machine, and your decision, but in my defense, you did say in your initial post:

Quote

I would like to install Arch on two separate ssd's on my pc. Use one as my main workstation os and use the other as a test bed.

 

From that statement, I gathered you want to experiment with new learnings without risking the continued viability of your everyday system.  And that is the perfect use-case scenario for a VM, IMO.  If you think the learning curve for VMs is greater than the time spent installing on bare metal, I think you overestimate the difficulty of VMs.  From my perspective, I'm quite impressed (and sometimes amazed) at your demonstrated abilities to ferret out gremlins and exterminate them.  A fellow who possesses THAT skillset will find that VMs are quite easy....perhaps even boring?!  I would only add that there are other virtualization offerings besides VirtualBox, which includes some proprietary software in the form of Guest Additions (for the graphics and peripherals, and virtual sharing of resources).  I use Virtual Machine Manager (VMM), which I believe is a combination of qemu for the backend and virt-manager for the GUI.  It is a 100% Linux offering, with open source packages, and I have had no issues whatsoever....EVER....especially with graphics, although I certainly understand the concern there.  I don't know that I know enough to assuage your concerns, but I'll bet SB does.  He's the one who introduced me to VMM in one of his posts, and I just pursued it on my own.  My only difficulties with VMM were that it wasn't VirtualBox, so I had to learn some things differently.  But it really is quite easy, IMO.
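
If you ever do want to kick the tires, getting VMM going is usually a couple of packages and one service; a sketch for an Arch host (package names are my assumption and differ on other distros):

sudo pacman -S qemu-full libvirt virt-manager
sudo systemctl enable --now libvirtd
# add your user to the libvirt group so virt-manager can connect without root
sudo usermod -aG libvirt $USER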

 

But if a VM isn't what you want....for WHATEVER reason you decide, that's your call....I'm just saying that it is an option, and a very simple one, based on your "test bed" comment.  If a metal installation is what you want, I'm also confident you will be able to install 2 instances of Arch, to 2 different drives on your machine, with no issues as long as you have a root and /home for EACH installation, and keep them together on the same disk for the sake of tidiness and keeping things sorted out properly.  I'm not sure if GRUB would like that setup, but rEFInd should make it a non-issue.  I remembered you were a rEFInd guy, and that's something I plan to steal from you on my next installation!

 

I don't think you need it, but I'll wish you good luck.  And even if you bork it up somehow, something tells me you have the skillz to make it right again; although I can't blame you for trying to do a little research in hopes of avoiding a lot of troubleshooting.  Have at it, buddy, and tell us how things turned out!


V.T. Eric Layton
6 hours ago, abarbarian said:

18 on one drive bet that kept the old grey cell awake.

 

 

Well, I was feeling my way around GNU/Linux at the time and was interested in seeing how the different distros did things. I came to the conclusion that, thanks to the common foundation of GNU, there are very few differences between distributions; mostly just different package management and desktop environments.

  • Agree 1

V.T. Eric Layton
1 hour ago, raymac46 said:

I wouldn't think you'd need a separate /home partition for a test system even if installed on bare metal.

 

With GNU/Linux, as long as you intend to have a non-root user accessing the installation, you MUST have a /home/<user> directory; it doesn't have to have its own partition, though. If you're just going to play around/test as root, then possibly you wouldn't need the /home directory; although root automatically gets its own home directory (/root) in the tree... the system can't operate without it.


V.T. Eric Layton

For instance, on my Slackware, this is the directory tree:

 

root@ericsbane07/:# ls
bin   dev  home  lib64	     media  opt   root	sbin  sys  usr
boot  etc  lib	 lost+found  mnt    proc  run	srv   tmp  var

 

Root's "home" is the directory labeled "root" (i.e., /root) and contains this:

 

root@ericsbane07~:# ls -al
total 80
drwx--x---  9 root root 4096 Apr  9 13:18 .
drwxr-xr-x 22 root root 4096 Jun 20 13:06 ..
-rw-------  1 root root    0 Feb  1 11:30 .Xauthority
-rw-------  1 root root 8770 Jun 20 13:06 .bash_history
-rw-r--r--  1 root root  762 Jul 11  2021 .bashrc
drwx------  2 root root 4096 Jul 11  2021 .cache
drwx------  5 root root 4096 Dec  7  2022 .config
drwx------  2 root root 4096 Jul 11  2021 .gnupg
drwxr-xr-x  2 root root 4096 Jan 25  2022 .hplip
drwx------  2 root root 4096 Dec  7  2022 .kde
drwxr-xr-x  3 root root 4096 Jul 11  2021 .local
drwxr-xr-x  2 root root 4096 Apr  9 13:18 .vim
-rw-------  1 root root 8338 Apr  9 13:18 .viminfo
-rw-r--r--  1 root root  289 Jun 20 13:04 .wget-hsts
-rw-r--r--  1 root root 1198 Jul 10  2021 .xinitrc
-rwxr-xr-x  1 root root 1198 Jul 10  2021 .xsession
-rw-r--r--  1 root root 2620 Apr  9 13:22 xorg.conf.new

 

The directory labeled "home" in the above tree is where the user home directories live and contains this (on my system, anyway):

 

root@ericsbane07/home:# ls -al
total 36
drwxr-xr-x  6 root   root   4096 Jun  2  2020 .
drwxr-xr-x 22 root   root   4096 Jun 20 13:06 ..
drwxr-xr-x  2 root   root   4096 Jun  2  2020 ftp
drwx------  2 root   root  16384 Dec  1  2016 lost+found
drwx------ 47 vtel57 users  4096 Jul  4 08:02 vtel57
drwxr-xr-x  2 root   root   4096 Jan 15  2020 vtel57_archives

 

  • Like 1
  • Thanks 1

Well of course any user will need a /home/user directory. It doesn't need its own partition unless you want to reinstall the distro while preserving your data on the system. That is up to the user to decide.

  • Like 2

I have stuck with VirtualBox largely because I have tested VMs on Windows as well as Linux. But if you are strictly running a VM on a Linux system I see the appeal of a FOSS application.


I think it's really great that a Linux user has so many choices for testing out a distro today:

  1. Use a junk machine to install and run the distro.
  2. Test out the distro in a Virtual Machine.
  3. Install on a separate SSD or HDD (external or internal) and boot from there.
  4. Set up another distro on the same drive using a boot manager.
  5. Boot a Live ISO on a thumbdrive or even an optical drive.

Linux is all about CHOICE.

  • Like 1
  • Agree 3

V.T. Eric Layton
2 hours ago, raymac46 said:

Linux is all about CHOICE.

 

INDEED, YES! This has always been one of those sterling aspects of GNU/Linux, I think. Don't like this? Try that. Don't like that? Try another. :)

  • Agree 2
