Building my TrueNAS Scale server

Introduction

As anyone who has read my blog before knows, I have quite an extensive home data storage setup, as evidenced by my last post about it, My network attached storage – Early 2022. Since then, a few things have changed. Firstly, the sync copy server I called “BADIDEA” has been retired, as it was just wasting electricity; I still sync my data to my USB drives. Secondly, when one more drive in MegaServer died, I replaced it with a 2TB drive instead, as I had hoped to bring all the drives up to 2TB eventually.

I always wanted to move to TrueNAS Scale, as it is Debian-based and a platform I am more familiar with. But I didn’t like the idea of importing a BSD ZFS pool into a Linux system, so I wanted to build a new machine.

Hardware

For this build, I wanted to use an off-the-shelf server rather than something I had put together myself, as I wanted hardware that is designed to run 24/7. I also wanted hardware with an IPMI management interface.

I chose the “Dell PowerEdge 2950 MKII” as I had one laying around. The server I had came with the following specs:

  • 1x Intel Xeon E5310 Quad-Core 1.60GHz
  • 1x Empty 771 socket
  • 8GB of DDR2 666MHz fully buffered RAM
  • 4x 300GB 10k SAS drives (3.5″)
  • 2x 600GB 10k SAS drives (3.5″)
  • 2x 1Gbit network cards
  • Adaptec SCSI HBA
  • Some combined NIC and SCSI card
  • 2x PSUs

Upgrades

Drives

I obviously needed to upgrade the drives here, as 300GB drives were never going to allow me to migrate my data from MegaServer. But I discovered that SAS drives are significantly cheaper than SATA right now, so I decided to go with SAS. I chose 6x 4TB SAS drives to use in a ZFS pool. If I configure these as a single VDEV, this will give me 18TB of storage space; as two VDEVs, about 14TB. As it stands, MegaServer gives me 10TB of storage.
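The 18TB and 14TB figures above are what the OS reports in TiB once you account for vendor “decimal” terabytes; a quick back-of-the-envelope sketch (ignoring ZFS metadata and slop overhead):

```shell
TIB=1099511627776   # bytes in one TiB
DRIVE=4000000000000 # a vendor "4TB" drive in bytes

# One 6-disk RAID-Z1 VDEV: 5 data disks, 1 parity
echo "1x RAID-Z1 of 6 disks: $(( 5 * DRIVE / TIB )) TiB usable"
# Two 3-disk RAID-Z1 VDEVs: 2 data disks each, 2 parity total
echo "2x RAID-Z1 of 3 disks: $(( 4 * DRIVE / TIB )) TiB usable"
```

That prints roughly 18 TiB for the single VDEV and 14 TiB for the two-VDEV layout, matching the numbers above.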

Host bus adaptor (HBA)

The next issue I had to overcome is that the PowerEdge 2950 has a PERC 5/i RAID card installed. Unfortunately, this card has no mode that simply passes the drives through to the OS (HBA mode). During my testing, I set each of the 6 drives up as an individual single-drive RAID0 array, but this presented problems, as I could not just plug and play the drives to replace them. I had to reboot the server and create a new RAID0 array without disturbing the other five.

I reached out to “The Art of Server” and he recommended I try his Dell PERC 6/i that has been re-flashed to work as an LSI HBA card. This would allow me to pass my drives directly through to the OS, and as it came out of a Dell server, I also would not have to replace the SAS cables in the server, which would have been very expensive.

Memory (RAM)

8GB of RAM was never going to cut it; I needed much more. My current TrueNAS server, MegaServer, has 20GB, and that struggles. I decided to start with 32GB, with the option to upgrade to 64GB if I need to. During testing, I managed to keep 20GB free, but granted, that was using 500GB drives and not 4TB ones.

CPU

During testing, I kept the E5310 in the server. I was initially worried that 1.60GHz was not going to cut it, but I was pleasantly surprised that CPU usage did not exceed 50% during file access. The CPU load did increase when I was streaming a movie from Plex, however. I decided to take the risk and stick with this CPU, but I did order another one to fill the empty socket, meaning that this server will have 8 cores (no hyper-threading).

Boot drive

I didn’t spend much time looking into this. I got the cheapest SSD that I could find, about £20 on Amazon. It’s 120GB, plenty for a boot drive. I did try using two mirrored 32GB USB sticks, but this did not work at all; it was way too slow on the server’s old USB bus.

Configuring

Setting up the BIOS differs per machine and is up to user discretion, but here is what I did:

  • Set a setup password for security
  • Enabled setup security
  • Enabled the internal SATA controller
  • Enabled SATA port A (for the boot drive)
  • Disabled booting for anything that is not SATA A
  • Disabled PXE booting

After I had the BIOS set up, I removed all the data drives that I would later add to my ZFS pool and inserted the TrueNAS Scale boot USB. I ran through a standard install of TrueNAS Scale and, after it booted, for some reason I could not assign a static IP. I had to allow it to get a dynamic IP and then later change this in the web UI. Strange.

Once the OS was installed, I proceeded to configure my data pool. As mentioned above, I had 6x 4TB SAS drives to install, and I pondered over what level and configuration of ZFS I wanted to use. At first, I wanted to use RAID-Z1 with two VDEVs, which would give me about 14TB. Then I looked at one VDEV in RAID-Z2, and this gave pretty much the same result. The difference between the two is that in the two-VDEV scenario I could lose two drives, but they would have to be one in each VDEV; with Z2, I could lose any two.

After a few days of pondering, I decided that I had been running RAID-Z1 for years with no issues, and I kept reminding myself that RAID is not a backup, so I was happy to continue with RAID-Z1. Then I thought, “Have I spent all this money to only get an extra 4TB of usable space?” So I opted to put all the drives in one VDEV in RAID-Z1, giving me 18TB of usable space. There are some trade-offs with this; mainly, it seriously limits my expansion options. But 18TB is a crazy amount of space for me to use, so I am happy with the choice I made. I do need to keep a cold spare drive on hand in case a drive dies.
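For the curious, TrueNAS builds the pool through its web UI, but the layout I ended up with corresponds roughly to a single `zpool create` like this (the pool name and device paths here are made up for illustration):

```shell
# One RAID-Z1 VDEV across all six 4TB SAS drives.
# Using /dev/disk/by-id paths keeps the pool stable across reboots.
zpool create tank raidz1 \
    /dev/disk/by-id/scsi-drive1 /dev/disk/by-id/scsi-drive2 \
    /dev/disk/by-id/scsi-drive3 /dev/disk/by-id/scsi-drive4 \
    /dev/disk/by-id/scsi-drive5 /dev/disk/by-id/scsi-drive6

# Verify the layout and health of the new pool
zpool status tank
```

The two-VDEV alternative would simply have been two `raidz1` groups of three drives in the same `zpool create` command.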

Moving my data

Moving my data was something that I put quite a lot of thought into. At first, I wanted to just copy it over using my good old friend Beyond Compare, but then I remembered that TrueNAS has built-in replication over SSH, so I wanted to give this a go. I first set up all my datasets and then configured the replication. Unfortunately, I did not realize that I had set the replication to also replicate the properties of the datasets, so my plan to use de-duplication failed. Oh well; it would have been nice, but life goes on.
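Under the hood, TrueNAS replication is ZFS snapshots piped over SSH. A rough sketch of the equivalent manual commands (pool, dataset, and host names here are made up):

```shell
# Take a recursive snapshot of the source dataset on the old server
zfs snapshot -r oldpool/data@migrate

# Send it to the new server over SSH. The -R flag is what carries
# dataset properties across with the data -- the behaviour that
# tripped up my de-duplication plan. Receiving with -x can exclude
# a property so the destination keeps its own setting, e.g.:
zfs send -R oldpool/data@migrate | \
    ssh new-server zfs recv -F -x dedup tank/data
```

The TrueNAS UI exposes the properties behaviour as a checkbox in the replication task, which is the bit I missed.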

I have to say, I was very impressed with the replication process. It moved all my data with ease and with no issues, and it did not take that long either: just over 24 hours in all to move everything from the old server to the new one. I then did a quick check with Beyond Compare to confirm that it was all there. It was!

After a month of running it

I didn’t dismantle my old server right away, as I wanted to make sure that the new one was fully up and running and stable before moving on. I just shut down the old server and left it there, in case I needed to move back to it. I also kept regular backups of the new server. But I have to say, I have had no problems with it, and I now plan to properly decommission the old server.

I am loving the Docker containers in the new TrueNAS Scale system. So far I have set up Pi-hole, Nextcloud, HandBrake, MakeMKV, and Plex. The container system is very easy to use and powerful.

Conclusion

Overall, I am very happy with this build. It did not break my bank account, and I think I got very good results from it considering the age of the hardware. I may consider a CPU upgrade in the future, though, as I am limited to a single core in a virtual machine.


Hardware, Networking, Storage, Tech
November 8, 2022
Author: John Hart
