Supercharging your home network on the cheap

It's no secret that if you want to do multi-gig networking on the cheap, sites like eBay are the place to visit. Castoff enterprise gear can be had for pennies on the dollar, if you don't mind buying used equipment. Used it may be, but this stuff was absolutely top of the line a decade ago, and it will still impress you with its performance and stability in casual use cases.

My first rule for doing multi-gig on the cheap: Do not overpay!

The kinds of network cards I'll be mentioning in this article are often literally being thrown away as e-waste. Not because they're bad or anything like that; they cost a small fortune 8–10 years ago… but in the enterprise, nothing gets kept that long.

Here are two examples of extremely affordable 10-gigabit networking. Both cards use an Intel chipset… what does that mean? World-class stability and reliability, mature and robust drivers, and excellent support under both BSD-based operating systems as well as Linux. The two cards use different chipsets, but all you need to know for now is that both are reasonably solid, battle-tested options. What's the difference? The media they use.

Intel X540-T2
Intel 10GB NIC with SFP+

The first card is the X540-T2, and this is the dual RJ45 version. It readily takes twisted-pair Ethernet. Now, on the surface you're probably thinking "OK, that would be the one I want!" and you may be right. Let's get into it.

So yes, that first card will take normal Cat 5/6/whatever twisted-pair Ethernet cabling… the stuff you're already using at home to do gigabit. There is a catch, though; we'll get back to that. The second card, instead of having RJ45 jacks, takes SFP+ modules. These come in many different options and are typically used for fiber-optic networking. SFP and its variants can support everything from 1 gig all the way up to 400 or even 800 gig on modern network gear.

If you're like me, you're thinking, "Well, why would I want that? I don't want that!" (That's what I thought early on in this endeavor.)

Cards set up for SFP+ transceivers generally consume less energy and, as a result, don't get quite as hot as 10-gig gear that takes standard twisted-pair Ethernet. Notice that the X540 has a fan while the second card does not? That second card actually runs substantially cooler, even when using a transceiver that furnishes an RJ45 10GbE connection!

There is a catch, though. Fiber-optic modules can be found very cheap. You can also often find direct-attach cables (DACs), which are essentially two SFP modules joined by a wire… these are also an affordable and energy-efficient option. There is one reason you may not want to go with SFP-style interfaces, at least not on too much of the gear you pick up: if you're planning on running twisted pair anyway. Sure, you can buy RJ45 transceivers on eBay and Amazon, but that is an additional $25–$30 per port you'll need to invest, and boy do those suckers run HOT.

The information above covers use cases for home servers and NAS builds. It probably won't be as helpful on your desktop or gaming PC, though, and the reason is PCIe lane availability. Consumer platforms only have a limited number of CPU-attached PCIe lanes: basically just enough to give you 16 lanes for your graphics card slot and another 4 for the primary NVMe/M.2 slot. Everything else hangs off the chipset, and chances are that if you do have a second M.2 slot or additional x1, x4, or x8/x16 slots, the chipset is what drives them. Also, don't be fooled: some consumer boards can run two physical x16 slots at x8/x8, but if your graphics card is getting 16 lanes, you will not have more than 4 lanes left over for the NIC… and more than likely, you'll be working with just a single lane!

The Achilles' heel of those old enterprise castoff 10-gig cards is their age. They're probably going to be PCIe Gen 2, which is why they need 8 lanes for the two 10-gig interfaces. Will one work at x4? Sure. But not at x1… even if the card does work (it might!), the bandwidth just isn't there.

Your modern system will have fast PCIe, likely Gen 3, 4, or perhaps even 5… but if the peripheral you're dropping in (the NIC) only supports Gen 2, then that is the speed you need to account for when figuring bandwidth.

For my desktop, I had a secret weapon…

We don't want an x8 card if we're only going to give it a single lane… know what would be great? Something with a more modern interface. A single Gen 3 lane can darn near do 10 gig. I'm trying to keep this on a shoestring budget, though, and since my server uses SATA SSDs for the bulk storage, I only needed roughly 500 MB/sec to take nearly full advantage of what those disks can do.
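For the curious, the back-of-the-envelope math I was working from looks roughly like this (usable throughput per lane after encoding overhead; real-world numbers will vary a bit):

# Approximate usable PCIe bandwidth per lane:
#   Gen 2 x1: ~500 MB/s  (8b/10b encoding)
#   Gen 3 x1: ~985 MB/s  (128b/130b encoding)
# 10GbE at line rate is ~1,250 MB/s, so a single Gen 3 lane falls a little short
# of a full 10 gig, but it easily covers the ~500 MB/s my SATA SSDs can deliver.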

So what we want is a card with a Gen 3, single-lane interface. We also want to avoid total no-name solutions… stick to brands you associate with the IT world: Intel, Mellanox (now NVIDIA), Chelsio, and Aquantia are good ones to start with. Don't buy a Realtek 5- or 10-gig card, if you want my advice. You can get something much more reliable and performant for the same cost or less.

Aquantia 5 Gig NIC

For just $20, I was able to score this Aquantia 5-gigabit network card. It is a Gen 3 card, and it only uses a single lane anyway. Perfect! It also isn't a furnace like the 10-gig RJ45 cards are… another big bonus, since I like my workstation as quiet as possible.

Connecting it all together…

You'll need a switch that supports these faster standards. Lately, there are no-name switches with a half dozen or so 2.5-gig ports and a pair of 10-gig ports… there are tons of these on the market, and they are dirt cheap. What's the catch? Well, they're no-name, for one. And you'll need to accept that nearly all of them use SFP+ for their 10-gig ports. Fear not.

Cheap switch w/ 10Gig

For around $40 I got this "managed" switch. Why did I put that in quotes? Because this thing is kind of a joke… but what the heck, it works!

SFP+ transceiver

This is one of the two SFP+ transceivers I ordered. I got my second one off Amazon and paid $10 less for it. The eBay (Goodtop) one seems to run noticeably hotter! I'd recommend ordering modules from HiFiber instead. The HiFiber module I got even says right on it that it supports 1Gb, 2.5Gb, 5Gb and 10Gb… good to know, because a lot of 10-gig gear (especially older stuff) only supports two speeds: 10 gig and 1 gig. Got a 2.5- or 5-gig switch? Too bad, if you've got something like the X540-T2.

For the rest of your PCs

How about 2.5 gig? The cheap switches mostly have 2.5-gig ports, so I got a couple of cards. Again, avoid Realtek. Intel chipsets are better, but some can be buggy: avoid the i225 and stick with something like the i226 cards. Expect to pay $25–$30 for a card. Perhaps just skip it and go for 5 gig? Maybe… just make sure whatever you get can negotiate speeds other than 1 and 5 gig. (Example: you have a 5-gig NIC and a 2.5-gig switch, but you're stuck at 1 gig because your NIC can't negotiate at 2.5.)

Intel 2.5GB NIC
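On Linux, a quick way to confirm what speeds a card will actually advertise is ethtool; the interface name below is just a placeholder for whatever yours is called:

# Look for 2500baseT/Full (and 5000baseT/Full) under "Supported link modes"
ethtool enp5s0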

Performance: Desktop to Server (5 Gb -> 10 Gb)

iperf test

Excellent. Beats the pants off 1 gig! Something is going on where we're seeing a little more throughput in one direction than the other, but I'm not too worried about that. What I'm happy about is the substantial uplift over what I was getting with a 2.5-gig NIC in the same situation.
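If you want to run the same kind of test yourself, iperf3 is the usual tool; "backupbox" below is just a placeholder for your server's hostname or IP:

# On the server:
iperf3 -s

# On the desktop (plain run, then -R to reverse direction and test the other way):
iperf3 -c backupbox
iperf3 -c backupbox -R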

How about NFS performance? Here's a benchmark of an NVMe disk in my server, mounted on my workstation.

NFS benchmark

While it may not be 10 gigabit, this is nothing to sniff at. I'm very happy with the results, given the restriction of only being able to use an x1 PCIe card.
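For a rough sequential test of your own, plain old dd against the NFS mount gets you in the ballpark; the paths below are placeholders, and conv=fsync makes the write test wait until the data has actually been flushed:

# Write ~4 GB to the NFS share
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=4096 conv=fsync

# Read it back (the client-side cache will inflate this if the file was just written)
dd if=/mnt/nfs/testfile of=/dev/null bs=1M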

Modernizing a Barracuda Backup Appliance: Upgrades, FreeBSD + ZFS

My Barracuda “Backup Server 290” Journey

Back in October of last year, I got bit hard by the eBay bug. “Woah, that’s actually a pretty reasonable price… I’ll do SOMETHING with it!” — and just like that, I became the owner of a lightly used Barracuda “Backup Server 290.”

Barracuda 290 eBay listing

What is the BBS290, one might ask? Essentially, it’s a 1U rack-mount backup appliance. This one was running CentOS 7 with some custom interface stuff at both the VGA console as well as on a web server running on the box — basically a proprietary backup solution built from open-source software and a mix of consumer PC hardware, with some enterprise bits included.

I got mine for $40 USD, with the unusually low shipping price of $5. Shipping is usually the killer on these things; expect any 1U server to be listed with a shipping cost between $30 and $100.


What $45 Got Me

Honestly, not a bad little backup server. Funnily enough, that is exactly what I’m going to use it for: a target to back up my main server to.

As it arrived, it included:

• 1U Mini-ITX rackmount chassis
• 1U ATX power supply rated @ 400W, 80+ Bronze
• Western Digital enterprise-grade 2 TB 7,200 RPM SATA hard drive (Mfg 2022)
• Celeron N3150 on an OEM variant of the MSI N3150I ECO
• 1× 8GB DDR3 memory module

Not winning the lottery here, but with a few upgrades, this machine can fill a very real need in my setup. For this tier of hardware, I would not recommend paying more than $60–$70 total at the very most. That is up to you though. The platform (CPU, DDR3) isn’t worth a whole lot, and performance is underwhelming at best… but it is sufficient for what I wanted to do.

Low power draw, low heat, and the case and power supply are probably the best part of the “deal.”

The lightly used 2TB enterprise-grade drive was a nice bonus if you’ll actually use it for something. For instance, if 2TB was enough for your backup needs, then this box as-is is an excellent value. Most of us will want a bit more storage though.


Upgrades

  1. WD 8TB Enterprise Drive
    Replaced the original 2TB HDD. This is a CMR drive, 7,200 RPM — not SMR or 5,400 RPM (two big gotchas to look out for). I shucked it from an 8TB WD MyBook purchased locally for $150; it had only ~600 power-on hours. Shucking is far from foolproof, but I got very lucky to get a top-tier hard drive that had barely been used. It will live much longer being used in this server than in the fanless plastic heat trap it came in as a MyBook.
  2. G.Skill 8GB DDR3L RAM ×2 (16GB total)
    Helps with ZFS caching. Cost: $16.86. A basic setup could run with 8GB, but doubling helps ZFS performance. Additionally, using two memory modules allows the memory controller to operate in dual-channel mode, effectively doubling memory bandwidth. On an anemic CPU like the N3150, this can make a surprisingly substantial difference for I/O, especially when ZFS is handling many small files or metadata-heavy operations.
    16 GB RAM Kit
  3. Intel 480GB SATA SSD
    Data center–grade SSD costing about $25 on eBay. It keeps the OS and root filesystem off the spinning disk, and it can also be used to accelerate ZFS via a special vdev for small-file storage. You don't need to do this if your data is mostly large files, media, or ISOs, but for small files the performance boost is noticeable. If the special vdev disk dies, the pool dies; that's acceptable here, because this machine is strictly a backup target.
    Intel SSD
  4. Intel i226-V 2.5GbE NIC
    Cost: $30, combined with a PCIe x1 ribbon riser ($8.59) and some DIY shielding. This upgrade more than doubles network throughput over the onboard 1GbE Realtek NIC for very little money. Drivers are mature and stable on both BSD and Linux. For nighttime backups or casual use, the onboard NIC is fine; this is a small cost for a large convenience.
    Intel i226-V Network Interface

Total upgrade costs:

• RAM: $16.86
• SSD: $25
• NIC: $30
• PCIe riser: $8.59
• 8TB WD CMR HDD: $150

Grand total: $275.45 (including the original $45 for the machine itself)


Chassis and Cooling

The chassis originally had a lit Barracuda Networks logo and a cheap internal 40mm fan. I removed both and resprayed the case dark red for a fresher feel. The stock fan was noisy, and the PSU provides sufficient airflow, so I skipped adding a replacement.

I’ll keep an eye on temperatures. The CPU doesn’t require a fan at all. The 7,200 RPM disk gets slightly toasty, but it’s far better off here with airflow than in a MyBook enclosure with none.


OS Choice

I mostly run Linux, but I appreciate the technical merits of FreeBSD, especially for enterprise-grade storage and high-performance, low-latency applications. On FreeBSD, ZFS is a first-class citizen, unlike Linux where it’s often bolted on.

I initially experimented with XigmaNAS but wanted more control, so I went with FreeBSD 15.0-RELEASE.

Honestly, if you want to keep things simple, just go for XigmaNAS or TrueNAS Core. Both are solid FreeBSD-based storage appliance OSes which make ZFS much more approachable. Linux ZFS implementations like Ubuntu’s are fine, but FreeBSD is where it truly shines.


Installation

I wrote the 15.0-RELEASE image to a USB stick and booted it. Setup asks whether to install via Distribution Sets or Packages (Tech Preview); I used Distribution Sets.

• Disabled kernel debugging and lib32 support
• Selected the igc0 NIC (leaving re0 unused)
• Chose manual partitioning:
– 480GB SSD: MBR, 64GB partition for root / (UFS), SUJ off, TRIM on
– Swap: 2GB partition as freebsd-swap
– Remaining HDD space left unpartitioned for ZFS setup post-install

Enabled SSHD, NTPD, and powerd. Added a user in the wheel group. Other options left at defaults.


Post-Installation Storage Configuration

Check free space on the SSD:

gpart show ada0

This revealed ~383GB free for the ZFS special vdev:

gpart add -t freebsd -s 383G -a 4k ada0

gpart create -s BSD ada0s2

Create the main pool on the 8TB HDD:

zpool create -f tank /dev/ada1

Add the special vdev on the SSD for small files:

zpool add tank special /dev/ada0s2
zfs set special_small_blocks=128K tank
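At this point it's worth sanity-checking that the special vdev actually landed where you expect:

zpool status tank
zpool list -v tank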

Set mountpoint and ownership:

zfs set mountpoint=/mnt/tank tank
chown -R 1000:1000 /mnt/tank


Enabling and Setting Up NFS

Enable ZFS and NFS-related services:

sysrc zfs_enable="YES"
sysrc rpcbind_enable="YES"
sysrc nfs_server_enable="YES"
sysrc mountd_enable="YES"
sysrc rpc_lockd_enable="YES"
sysrc rpc_statd_enable="YES"

The zfs_enable=YES setting is important: without it, ZFS pools may not automatically import and mount at boot. This was the reason the pool initially failed to remount after a reboot.
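If you do get caught by this, the pool isn't gone; it just needs to be imported by hand, after which its datasets mount as usual:

zpool import tank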

Start services manually:

service rpcbind start
service mountd start
service nfsd start

Edit /etc/exports:

/mnt/tank -network 10.16.16.0 -mask 255.255.254.0 -alldirs -maproot=1000:1000

-network / -mask restricts access to your LAN
-alldirs allows mounting subdirectories
-maproot=1000:1000 maps all remote users to a local UID/GID

Apply the configuration:

service mountd restart
service nfsd restart
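To confirm the export is actually live, you can query mountd on the Barracuda itself:

showmount -e localhost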

This method alone works reliably. Using zfs set sharenfs is unnecessary here and can introduce confusion.


Syncing Data via NFS

Mount the NFS share on the main server at /mnt/cuda. On the Barracuda, make sure ownership and permissions on the exported path are sane:

chown -R 1000:1000 /mnt/tank
chmod -R 755 /mnt/tank
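For reference, the mount itself on the Linux side looks something like this; the IP address is just a placeholder for whatever the Barracuda is assigned:

mkdir -p /mnt/cuda
mount -t nfs 10.16.16.20:/mnt/tank /mnt/cuda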

Run rsync:

rsync -avh --info=progress2 --modify-window=1 /mnt/sda1/ /mnt/cuda/

-a preserves timestamps, permissions, symlinks, etc.
--info=progress2 shows real-time progress
--modify-window=1 handles timestamp differences between Linux and FreeBSD

Observations:

• The SSD-backed special vdev noticeably improved small-file performance
• Dual-channel memory helped I/O on this low-power CPU
• The 2.5GbE NIC provides a large convenience boost
• Transfer speeds are currently limited by the source system’s storage and workload characteristics


Real-World Testing

Copying a 4.1GB Debian ISO from the Barracuda to my desktop completed in roughly 10 seconds. Both machines and the switch are 2.5GbE capable. Renaming the file and pushing it back (desktop → Barracuda) took about 15 seconds.

Htop reported 100–200 MB/s in both cases, though reads from the Barracuda are clearly faster than writes.

Pings between the two machines show excellent latency and consistency:

100 packets transmitted, 100 received, 0% packet loss
rtt min/avg/max/mdev = 0.103/0.114/0.175/0.009 ms


Closing Thoughts

For now, all my personal goals for this project have been met. Eventually, I plan to implement scheduled wake-on-LAN (or something conceptually similar) so the box only powers on when backups are needed. I don’t need it running 24/7 — it’s here to quietly snag incremental backups in case something goes wrong elsewhere.
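As a rough sketch of what that could look like from the main server's side: the MAC address and schedule below are placeholders, wakeonlan is just one of several tools that can send the magic packet, and backup-to-cuda.sh is a hypothetical wrapper around the rsync job above:

# crontab on the main server: wake the Barracuda at 02:50,
# then start the backup at 03:00 once it has had time to boot
50 2 * * * /usr/bin/wakeonlan aa:bb:cc:dd:ee:ff
0 3 * * * /root/backup-to-cuda.sh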

For those new to FreeBSD, maintenance is fairly simple. Updates are handled with freebsd-update fetch install. After fetching, you’ll see a wall of text — press q, and the install will proceed.
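In practice, that's just the following; note that third-party packages installed with pkg are updated separately:

freebsd-update fetch
freebsd-update install

# Packages from the ports/pkg ecosystem:
pkg update
pkg upgrade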

That’s all for now.
