Supercharging your home network on the cheap

It’s no secret that if you want to do multi-gig networking on the cheap, sites like eBay are the place to visit. Castoff enterprise gear can be had for pennies on the dollar, if you don’t mind buying used. Used it may be, but this stuff was absolutely top of the line a decade ago, and its performance and stability will still impress for casual use cases.

My first rule for doing multi-gig on the cheap: Do not overpay!

The kinds of network cards I’ll be mentioning in this article are often literally thrown into e-waste. Not because they’re bad; they cost a small fortune 8 – 10 years ago… but in the enterprise, nothing gets kept that long.

Here are two examples of extremely affordable 10 Gb networking. Both cards use an Intel chipset… what does that mean? World-class stability and reliability, mature and robust drivers, and excellent support under BSD-based operating systems as well as Linux. These two cards use different chipsets, but all you need to know for now is that both are reasonably solid, battle-tested options. What’s the difference? The media they use.

Intel X540-T2 / Intel 10Gb NIC with SFP+

The first card is the X540-T2, and this is the dual RJ45 version. It readily takes twisted pair ethernet. Now, on the surface you’re probably thinking “OK, that would be the one I want!” and you may be right. Let’s get into it.

So yes, that first card will take normal Cat 5 / 6 / whatever twisted pair ethernet cabling… the stuff you’re already using at home to do gigabit. There is a catch though; we’ll get back to that. The second card, instead of having RJ45 jacks, takes SFP+ modules. These come in many different options and are typically used for fiber optic networking. SFP and its variants can support everything from 1 Gb all the way to 400 or even 800 Gb on modern network gear.

If you’re like me, you’re thinking: well, why would I want that? I don’t want that! (That was what I thought early on in this endeavor.)

Cards set up for SFP+ transceivers generally consume less energy and, as a result, don’t get as hot as 10 gig gear that takes standard twisted pair ethernet. Notice that the X540 has a fan while the second card does not? That second card actually runs substantially cooler, even when using a transceiver which furnishes an RJ45 10 Gb ethernet connection!

There is a catch though. Fiber optic modules can be found very cheap. You can also often find direct-attach cables (DACs), which are essentially two SFP modules joined by a wire… these are also an affordable and energy-efficient option. There is one reason why you may not want to go with SFP-style interfaces, at least not on too much of the gear you pick up… and that is if you’re planning on running twisted pair anyway. Sure, you can buy transceivers on eBay and Amazon, but that is an additional $25 – $30 per port you’ll need to invest, and boy do those suckers run HOT.

The information above covers use cases for home servers and NAS builds. It probably won’t be too helpful on your desktop or gaming PC though… and the reason is PCIe lane availability. Consumer platforms only have a limited number of PCIe lanes… basically just enough to give you 16 lanes for your graphics card slot, and another 4 for the primary NVMe/M.2 slot. Everything else hangs off the chipset, and chances are that if you do have a second M.2 slot or additional x1, x4, or x8/x16 slots, the chipset is what drives them. Also, don’t be fooled: some consumer boards with two physical x16 slots can run both at x8 bandwidth… but if your graphics card is getting 16 lanes, you will not have more than 4 lanes left over, and more than likely you’ll be working with just a single lane!

The Achilles’ heel of those old enterprise castoff 10 gig cards is their age. They’re probably going to be PCIe Gen 2, which is why they need 8 lanes for the two 10 gig interfaces. Will it work at x4? Sure. But not at x1… even if the card does work (it might!), the bandwidth just isn’t there.

Your modern system will have fast PCIe, likely Gen 3, 4, or perhaps even 5… but if the peripheral you’re dropping in (the NIC) only supports Gen 2, then that is what we need to account for when determining bandwidth.
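The back-of-the-napkin math here is simple. PCIe moves roughly 500 MB/sec per lane at Gen 2 and roughly 985 MB/sec per lane at Gen 3 (after encoding overhead), while a single 10 gig port needs about 1,250 MB/sec to run flat out:

Gen 2 x8 ≈ 4,000 MB/sec (plenty for two 10 gig ports)
Gen 2 x4 ≈ 2,000 MB/sec (fine for one port; two ports would have to share it)
Gen 2 x1 ≈ 500 MB/sec (caps a 10 gig link around 4 Gb)
Gen 3 x1 ≈ 985 MB/sec (just shy of a full 10 gig link)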

For my desktop, I had a secret weapon…

We don’t want an x8 card if we’re only going to give it a single lane… Know what would be great? Something with a more modern interface. Gen 3 at one lane can darn near do 10 gig. I’m trying to keep this on a shoestring budget though, and since my server uses SATA SSDs for the bulk storage, I only needed roughly 500 MB/sec to take nearly full advantage of what those disks can do.

So what we want is a card with a Gen 3, single-lane interface. We want to avoid total no-name solutions… stick to brands you associate with the IT world. Intel, Mellanox (now NVIDIA), Chelsio, and Aquantia are some good ones to start with. Don’t buy a Realtek 5 or 10 gig card, if you want my advice. You can get something much more reliable and performant for the same or less cost.

Aquantia 5 Gig NIC

For just $20, I was able to score this Aquantia 5 Gb network card. It is a Gen 3 card, and only an x1 card anyway. Perfect! It also isn’t a furnace like the 10 gig RJ45 cards are… another big bonus, since I like my workstation as quiet as possible.

Connecting it all together…

You’ll need a switch that supports these faster standards. Lately, there are some no-name switches with a half dozen or so 2.5 Gb ports and a pair of 10 Gb ports… there are tons of these on the market, and they are dirt cheap. What’s the catch? Well, they’re no-name for one. And you’ll need to accept that they’re all going to use SFP+ for their 10 Gb ports. Fear not.

Cheap switch w/ 10 gig

For around $40 I got this “managed” switch. Why the quotes? Because this thing is kind of a joke… but what the heck, it works!

SFP+ transceiver

This is one of the two SFP+ transceivers I ordered. I got my second one off Amazon and paid $10 less for it. The eBay (Goodtop) one seems to run noticeably hotter! I’d recommend ordering modules by HiFiber instead. The HiFiber module I got even says right on it that it supports 1 Gb, 2.5 Gb, 5 Gb, and 10 Gb… this is good to know, because there is a lot of 10 Gb gear (especially older stuff) which only supports two speeds: 10 Gb and 1 Gb. Got a 2.5 or 5 Gb switch? Too bad, if you’ve got something like the X540-T2.

For the rest of your PCs

How about 2.5 gig? The cheap switches mostly have 2.5 gig ports, so I got a couple of cards. Again, avoid Realtek. Intel chipsets are better, but some can be buggy: avoid the i225 and stick with something like the i226 cards. Expect to pay $25 – $30 for a card. Perhaps just skip it and go for 5 gig? Maybe… just make sure whatever you get can negotiate speeds other than 1 and 5 gig. (Example: you have a 5 Gb NIC and a 2.5 Gb switch, but you’re stuck at 1 Gb because your NIC can’t negotiate at 2.5 Gb…)
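An easy way to confirm what actually got negotiated, at least on Linux (the interface name here is just an example; check yours with ip link):

# replace enp5s0 with your interface name
ethtool enp5s0 | grep Speed

If that reports 1000Mb/s against a 2.5 gig switch, you’ve hit exactly the negotiation trap described above.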

Intel 2.5Gb NIC

Performance: Desktop to Server (5 Gb -> 10 Gb)

iperf test

Excellent. Beats the pants off 1 gig! Something is going on where we’re seeing a little more in one direction than the other, but I’m not too worried about that. What I’m happy with is the substantial uplift from what I was getting with a 2.5 Gb NIC in the same situation.
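For anyone wanting to run the same test, this kind of result comes from a plain TCP throughput test; I’m assuming iperf3 here, and the address is just an example:

# on the server
iperf3 -s

# on the desktop
iperf3 -c 10.16.16.20
# and again in the reverse direction
iperf3 -c 10.16.16.20 -R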

How about NFS performance? Benchmark of an NVME disk in my server, mounted on my workstation.

NFS benchmark

While it may not be 10 gigabit, this is nothing to sniff at. I’m very happy with the results, given the restriction of only being able to use an x1 PCIe card.

Modernizing a Barracuda Backup Appliance: Upgrades, FreeBSD + ZFS

My Barracuda “Backup Server 290” Journey

Back in October of last year, I got bit hard by the eBay bug. “Woah, that’s actually a pretty reasonable price… I’ll do SOMETHING with it!” — and just like that, I became the owner of a lightly used Barracuda “Backup Server 290.”

Barracuda 290 eBay listing

What is the BBS290, one might ask? Essentially, it’s a 1U rack-mount backup appliance. This one was running CentOS 7 with some custom interface stuff at both the VGA console as well as on a web server running on the box — basically a proprietary backup solution built from open-source software and a mix of consumer PC hardware, with some enterprise bits included.

I got mine for $40 USD, with the unusually low shipping price of $5. Shipping is usually the killer on these things; expect any 1U server to be listed with a shipping cost between $30 and $100.


What $45 Got Me

Honestly, not a bad little backup server. Funnily enough, that is exactly what I’m going to use it for: a target to back up my main server to.

As it arrived, it included:

• 1U Mini-ITX rackmount chassis
• 1U ATX power supply rated @ 400W, 80+ Bronze
• Western Digital enterprise-grade 2 TB 7,200 RPM SATA hard drive (Mfg 2022)
• Celeron N3150 on an OEM variant of the MSI N3150I ECO
• 1× 8GB DDR3 memory module

Not winning the lottery here, but with a few upgrades, this machine can fill a very real need in my setup. For this tier of hardware, I would not recommend paying more than $60–$70 total at the very most. That is up to you though. The platform (CPU, DDR3) isn’t worth a whole lot, and performance is underwhelming at best… but it is sufficient for what I wanted to do.

Low power draw, low heat, and the case and power supply are probably the best part of the “deal.”

The lightly used 2TB enterprise-grade drive was a nice bonus if you’ll actually use it for something. For instance, if 2TB was enough for your backup needs, then this box as-is is an excellent value. Most of us will want a bit more storage though.


Upgrades

  1. WD 8TB Enterprise Drive
    Replaced the original 2TB HDD. This is a CMR drive, 7,200 RPM — not SMR or 5,400 RPM (two big gotchas to look out for). I shucked it from an 8TB WD MyBook purchased locally for $150; it had only ~600 power-on hours. Shucking is far from foolproof, but I got very lucky to get a top-tier hard drive that had barely been used. It will live much longer being used in this server than in the fanless plastic heat trap it came in as a MyBook.
  2. G.Skill 8GB DDR3L RAM ×2 (16GB total)
    Helps with ZFS caching. Cost: $16.86. A basic setup could run with 8GB, but doubling helps ZFS performance. Additionally, using two memory modules allows the memory controller to operate in dual-channel mode, effectively doubling memory bandwidth. On an anemic CPU like the N3150, this can make a surprisingly substantial difference for I/O, especially when ZFS is handling many small files or metadata-heavy operations.
    16 GB RAM Kit
  3. Intel 480GB SATA SSD
    Data center–grade SSD costing about $25 on eBay. It allows the OS and root filesystem to live off the spinning disk and can also be used to accelerate ZFS performance via a special vdev for small file storage. You don’t need to do this if your data is mostly large files, media, or ISOs — but for small files, the performance boost is noticeable. If the special vdev disk dies, the pool dies — which is acceptable here, because this machine is strictly a backup target.
    Intel SSD
  4. Intel i226-V 2.5GbE NIC
    Cost: $30, combined with a PCIe x1 ribbon riser ($8.59) and some DIY shielding. This upgrade doubles network throughput over the onboard 1GbE Realtek NIC for very little money. Drivers are mature and stable on both BSD and Linux. For nighttime backups or casual use, the onboard NIC is fine; this is a small cost for a large convenience.
    Intel i226-V Network Interface

Total upgrade costs:

• RAM: $16.86
• SSD: $25
• NIC: $30
• PCIe riser: $8.59
• 8TB WD CMR HDD: $150

Grand total: $230.45 in upgrades, or $275.45 including the original $45 for the machine itself


Chassis and Cooling

The chassis originally had a lit Barracuda Networks logo and a cheap internal 40mm fan. I removed both and resprayed the case dark red for a fresher feel. The stock fan was noisy, and the PSU provides sufficient airflow, so I skipped adding a replacement.

I’ll keep an eye on temperatures. The CPU doesn’t require a fan at all. The 7,200 RPM disk gets slightly toasty, but it’s far better off here with airflow than in a MyBook enclosure with none.


OS Choice

I mostly run Linux, but I appreciate the technical merits of FreeBSD, especially for enterprise-grade storage and high-performance, low-latency applications. On FreeBSD, ZFS is a first-class citizen, unlike Linux where it’s often bolted on.

I initially experimented with XigmaNAS but wanted more control, so I went with FreeBSD 15.0-RELEASE.

Honestly, if you want to keep things simple, just go for XigmaNAS or TrueNAS Core. Both are solid FreeBSD-based storage appliance OSes which make ZFS much more approachable. Linux ZFS implementations like Ubuntu’s are fine, but FreeBSD is where it truly shines.


Installation

I wrote the 15.0-RELEASE image to a USB stick and booted it. Setup asks whether to install via Distribution Sets or Packages (Tech Preview); I used Distribution Sets.

• Disabled kernel debugging and lib32 support
• Selected the igc0 NIC (leaving re0 unused)
• Chose manual partitioning:
– 480GB SSD: MBR, 64GB partition for root / (UFS), SUJ off, TRIM on
– Swap: 2GB partition as freebsd-swap
– Remaining HDD space left unpartitioned for ZFS setup post-install

Enabled SSHD, NTPD, and powerd. Added a user in the wheel group. Other options left at defaults.


Post-Installation Storage Configuration

Check free space on the SSD:

gpart show ada0

This revealed ~383GB free for the ZFS special vdev:

gpart add -t freebsd -s 383G -a 4k ada0

gpart create -s BSD ada0s2

Create the main pool on the 8TB HDD:

zpool create -f tank /dev/ada1

Add the special vdev on the SSD for small files:

zpool add tank special /dev/ada0s2
zfs set special_small_blocks=128K tank
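To confirm the special vdev landed where expected, the pool status shows it grouped under its own “special” heading:

zpool status tank
zpool list -v tank

One caveat worth knowing: with ZFS’s default 128K recordsize, special_small_blocks=128K routes essentially all new writes to the SSD until it fills, after which allocations spill back to the HDD. If you only want genuinely small files on flash, a lower value (say, 32K) is the safer choice.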

Set mountpoint and ownership:

zfs set mountpoint=/mnt/tank tank
chown -R 1000:1000 /mnt/tank


Enabling and Setting Up NFS

Enable ZFS and NFS-related services:

sysrc zfs_enable="YES"
sysrc rpcbind_enable="YES"
sysrc nfs_server_enable="YES"
sysrc mountd_enable="YES"
sysrc rpc_lockd_enable="YES"
sysrc rpc_statd_enable="YES"

The zfs_enable=YES setting is important: without it, ZFS pools may not automatically import and mount at boot. This was the reason the pool initially failed to remount after a reboot.

Start services manually:

service rpcbind start
service mountd start
service nfsd start

Edit /etc/exports:

/mnt/tank -network 10.16.16.0 -mask 255.255.254.0 -alldirs -maproot=1000:1000

-network / -mask restricts access to your LAN
-alldirs allows mounting subdirectories
-maproot=1000:1000 maps all remote users to a local UID/GID

Apply the configuration:

service mountd restart
service nfsd restart
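Before mounting from the client, you can sanity-check that the export took effect (the address here is an example; use your Barracuda’s):

showmount -e 10.16.16.20

It should list /mnt/tank along with the network allowed to mount it.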

This method alone works reliably. Using zfs set sharenfs is unnecessary here and can introduce confusion.


Syncing Data via NFS

Mount the NFS share on the main server at /mnt/cuda.
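A minimal sketch of that client-side mount, assuming the Barracuda answers at 10.16.16.20 (substitute your own address):

mount -t nfs 10.16.16.20:/mnt/tank /mnt/cuda

Then ensure permissions on the export (these run on the Barracuda):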

chown -R 1000:1000 /mnt/tank
chmod -R 755 /mnt/tank

Run rsync:

rsync -avh --info=progress2 --modify-window=1 /mnt/sda1/ /mnt/cuda/

-a preserves timestamps, permissions, symlinks, etc.
--info=progress2 shows real-time progress
--modify-window=1 handles timestamp differences between Linux and FreeBSD
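If you want this to run unattended, a nightly crontab entry on the main server is the simplest route. A sketch, reusing the same paths (adjust the schedule and log location to taste):

# run the backup at 3:00 AM every night
0 3 * * * rsync -a --modify-window=1 /mnt/sda1/ /mnt/cuda/ >> /var/log/cuda-backup.log 2>&1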

Observations:

• The SSD-backed special vdev noticeably improved small-file performance
• Dual-channel memory helped I/O on this low-power CPU
• The 2.5GbE NIC provides a large convenience boost
• Transfer speeds are currently limited by the source system’s storage and workload characteristics


Real-World Testing

Copying a 4.1GB Debian ISO from the Barracuda to my desktop completed in roughly 10 seconds. Both machines and the switch are 2.5GbE capable. Renaming the file and pushing it back (desktop → Barracuda) took about 15 seconds.

htop reported 100–200 MB/s in both cases, though reads from the Barracuda are clearly faster than writes.

Pings between the two machines show excellent latency and consistency:

100 packets transmitted, 100 received, 0% packet loss
rtt min/avg/max/mdev = 0.103/0.114/0.175/0.009 ms


Closing Thoughts

For now, all my personal goals for this project have been met. Eventually, I plan to implement scheduled wake-on-LAN (or something conceptually similar) so the box only powers on when backups are needed. I don’t need it running 24/7 — it’s here to quietly snag incremental backups in case something goes wrong elsewhere.

For those new to FreeBSD, maintenance is fairly simple. Updates are handled with freebsd-update fetch install. After fetching, you’ll see a wall of text — press q, and the install will proceed.
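In practice the routine looks something like this; freebsd-update covers the base system, while pkg handles third-party packages:

freebsd-update fetch
freebsd-update install
pkg update && pkg upgrade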

That’s all for now.

Installing Debian onto Mirrored rootfs (Soft RAID 1)

SSD Mirror
Introduction

I’ve been in the process of upgrading my server… going from the modest i5-7500T all the way up to a Ryzen 5 5500. Since I started acquiring the first couple of items for the new build, I had the goal of booting this machine from a mirrored pair of SSDs… this way, even if one has some kind of catastrophic failure, I can still boot from the second drive and replace the failed one when convenient. Now, I must state the obligatory “RAID IS NOT A BACKUP!”… if and when something gets screwed up, it will screw up both copies instantaneously. So you’re really only protected from an actual disk failure. Frequent backups are still something one must do to protect against all other scenarios.

So, why are we here?

Glad you asked! Some Linux distributions have evolved to be a bit more ‘helpful’ in scenarios like this… for example, while it isn’t as easy as it perhaps could be, Fedora had absolutely no problem doing what I wanted on the first try. No fuss. It just worked.

Debian, on the other hand… I’ll admit I re-installed 3 or 4 times. Once I realized the problem, it was obvious… but if you’re reading this, you’re probably wondering: what is the problem!? Why won’t this junk work??

What worked for me

So, what we need to understand here are a couple of limitations. I’m not totally sure if they’re limits of EFI, GRUB, or Debian’s specific configuration, but nonetheless, this is what I’ve learned.

You can’t put the ESP/EFI partition on your Linux mdraid. What you want to do instead is slice up both your disks like normal… in my case, I did:

— 500 MB EFI
— 477 GB RAID PARTITION
— 2 GB RAID PARTITION

Do both disks exactly the same. The key here is: set up that first partition as a normal EFI partition… and do it on BOTH disks. Once both disks are sliced up, go into the Linux Software RAID option… create an MD device. Pick your two devices for RAID 1… first the large (in my case 477 GB) partitions… then run through again, this time pairing up the smaller ones for swap. Finish the MD setup.

Back on the partitioner’s main screen, you’ll now see two additional devices above your physical disks. Double-click them… setting the first one up with ext4, xfs, whatever you’d like… and a mountpoint of / for rootfs. Next, do the swap.

In each disk, ensure that the ESP/EFI partition has the bootable flag checked and the EFI System partition type has been used (not RAID, fat32, etc).

Finish the install… it should go without a hitch. Now, we’re in pretty good shape here. Everything on this Debian system will be mirrored to both disks, meaning each write is simultaneously sent to each disk. One can die, and nothing will be lost, but we still need to resolve EFI. To get our second EFI partition populated, read on…

On the initial boot

Go ahead and run lsblk and see which disk you’ve booted from. It will likely be sda, but could very well be sdb. Whichever disk we *did not* boot from, we want to mount that disk’s EFI partition.
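A quick way to check is to ask which device is mounted at /boot/efi; the disk it lives on is the one you booted from:

findmnt /boot/efi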

Make a directory /mnt/efi2 and mount that partition there… then copy everything from /boot/efi into the mirror disk’s EFI partition.

(as root)
mkdir -p /mnt/efi2
mount /dev/sdb1 /mnt/efi2
cp -a /boot/efi/* /mnt/efi2/

Now, we’ll also install grub to that disk…

grub-install /dev/sdb
update-grub
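One note: on UEFI systems, grub-install largely ignores the device argument, so the cp above is what actually populates the second ESP. If you’d rather have grub-install write it directly, something along these lines should also work (a sketch, assuming an amd64 UEFI install):

grub-install --target=x86_64-efi --efi-directory=/mnt/efi2
update-grub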

Reboot… And try booting into your other SSD. It should work!

FreeBSD 15.0 RELEASE has landed!

Beastie

FreeBSD 15.0: Notable Improvements for Desktop and Laptop Users

FreeBSD 15.0 introduces a range of updates that strengthen the system’s usability on desktops, laptops, and general-purpose machines. Several areas that matter most to daily users—networking, graphics, and desktop environments—see meaningful development in this release.

A key update is expanded WiFi support. FreeBSD 15.0 adds drivers for Realtek’s rtw88 and rtw89 chipsets, used in many current laptops. Intel iwlwifi support has also been refined, and the installation media now includes a dedicated WiFi firmware package, making it easier for a wider range of wireless adapters to function immediately after installation.

Graphics hardware support also advances. By incorporating newer Linux DRM driver code, FreeBSD improves compatibility and performance on modern Intel and AMD GPUs. This benefits both X11 and Wayland sessions, with smoother acceleration and more consistent behavior across display setups.

Desktop environments gain from this foundation. KDE Plasma, GNOME, Xfce and others continue to be available through packages, and improved hardware support helps these environments run more reliably. Work on a more desktop-friendly installer is ongoing and aims to simplify initial setup in future releases.

The system as a whole also receives updates. Optimized libc routines bring performance improvements on amd64, and various device drivers—covering networking, audio, PCI, and storage—have been updated for better compatibility and stability.

Taken together, these changes make FreeBSD 15.0 a solid release for users running the system on everyday hardware, offering broader support and a smoother experience across a wide range of setups.

Grab it now!
https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/15.0/

Devuan 6.0 “Excalibur” Released – Your Systemd-Free Debian 13 System

init freedom

Less than three months after the official release of Debian 13 “Trixie,” the Devuan project has officially launched Devuan 6.0 “Excalibur” on November 2nd, 2025.

Excalibur brings all the benefits of Debian 13’s updated packages, modern kernels, and long-term support while staying true to Devuan’s systemd-free philosophy. The release ensures that alternative init systems like sysvinit and runit integrate smoothly, and existing Devuan users can plan upgrades with confidence.

For the Devuan community, this release represents a stable, up-to-date option for both new installations and older hardware users who want the reliability of Debian without systemd. If you’ve been waiting to move to a fresh, modern, yet systemd-free environment, Excalibur is ready to download and install.

Announcement: https://dev1galaxy.org/viewtopic.php?id=7507

Learn more and download: https://www.devuan.org/os/releases

Firefox Scrolling Inverted??

First time this has happened to me, but running Firefox Nightly (which came on NetBSD 11 BETA) I noticed my TrackPoint / middle-mouse scrolling was reversed. I think they might call this “natural scrolling”… anyway, to fix it simply go to about:config in the address bar and search for mousewheel.default.delta_multiplier_y — change it from 100 to -100 and presto, normal scrolling behavior.

OpenBSD 7.8 Released Today, w/ Pi 5 Hardware Support!

OpenBSD 7.8 is another careful step forward that strengthens daily usability across laptops, desktops, and ARM64 systems. While this release isn’t radically new, the OpenBSD team continues to refine and expand their legendary system in all the right places.

The most visible change is Raspberry Pi 5 support. OpenBSD now boots cleanly on the Pi 5 with working SDHC storage, Ethernet, and Wi-Fi power management through new RP1 and sdhc drivers. That takes the board from experimental to genuinely usable. Additional ARM64 updates improve clock, PWM, and RTC support on newer SoCs, broadening the list of hardware that “just works.”

Power management on laptops sees steady progress. AMD systems handle S0ix suspend and resume more reliably, and the amdgpu driver now sleeps and wakes properly under S3. Laptops with GPIO-based lid sensors can suspend and resume cleanly, and hibernation reliability improves with better pre-allocation during boot. Small changes, but together they make OpenBSD behave more predictably on modern notebooks.

Networking performance benefits from new multicore TCP and IPv6 input handling, allowing up to eight threads to process traffic in parallel. Several core system calls, such as close() and listen(), were unlocked from global network locks, reducing contention on multi-CPU systems.

Graphics support advances with a DRM update based on Linux 6.12.50, improving amdgpu reliability and adding Qualcomm display controller support. Xorg remains the standard display server, while Wayland continues to function through XWayland and wlroots compositors for those who prefer a modern stack. In ports, GNOME 46 and KDE Plasma 6 are available, keeping desktop environments current alongside updated Firefox and Chromium builds.

The built-in hypervisor gains AMD SEV-ES support for encrypted guests, and the installer adds further safeguards and clearer defaults. Security hardening continues quietly across the base system, with more software adopting pledge and unveil.

OpenBSD 7.8 doesn’t chase trends, but it delivers a more capable, consistent, and secure system across a wider range of hardware. Whether on a modern laptop or a Raspberry Pi 5, this release shows the project’s continued focus on quality and correctness—hallmarks that keep OpenBSD in a class of its own.

https://www.openbsd.org/78.html

Trying out FreeBSD 15.0 BETA 1 on ThinkPad T500

Screenshot
FreeBSD 15 running MATE Desktop

I for one am definitely looking forward to FreeBSD 15 RELEASE! 14.3 brought strong improvements, and things can only get better. Going to be putting it on my X1 Carbon Gen 3 soon, but for now I figured I’d try it on a spare machine. Nice to see it got going with hardly any effort on this 15+ year old machine! Just had to do a bit of manual X.Org config tweaking…

For a Core 2 Duo with 4 GB RAM in 2025, it runs surprisingly well. I’m posting from this machine right now 🙂

New home networking content is on the way!

eBay Orders
Ignore the iPhone case, I ordered that for a friend!

As some of you will notice, yes, there are two SFF boxes and three NICs…

I need to decide if I’m building a 10 Gb router, or more of a 2.5/1 Gb pfSense box just to have a better internet router and firewall. The lil Wyse box will be fantastic as a router, I already know; those Gemini Lake chips are amazingly powerful for what they are. Also very low power draw, and they hardly make any heat whatsoever. The SSDs? They just seemed like a good deal.

Here are the SFF machines. Obviously the first one is more “SFF” than the second… that’s OK though, I needed something with real PCIe slots and a real power supply to run 10 Gb network card(s).


More to come as these things arrive!

 

Resetting CMOS Password on ThinkPad T420

I recently picked up a used ThinkPad T420. While I could boot it up, use it, install another OS and all that, there were some settings locked out.

There is apparently an option for a regular CMOS password, and a “supervisor” password. Hitting enter got me in with limited access, but I couldn’t do things like turning hardware virtualization on or off… among other things.

If you’re like me, maybe you’re thinking: hey, just unplug the coin cell for a few minutes!

Well, that doesn’t work. Fortunately, there is an easy enough hack. Remove the screw for the RAM door on the bottom of the machine, then use a credit card or blade to nudge the keyboard up from the palm rest. Carefully keep the keyboard connected, but place it sideways out of the way. We need to short a couple of pins below where the CMOS coin cell battery connects on the main board.

General Area
We need to be in this general area. Excuse the flashlight!

Now, let’s zoom in on the actual area where we need to short two points… tweezers will work well for this purpose.

Pad area

Now, inside that area where we have the orange box… we need to short the upper-left corner to the middle-right (center row) pad. This should make things totally clear:

So, this is what to do… keep either the AC supply or battery attached to the machine and boot it up. When you boot it up, hit the blue ThinkVantage button and QUICKLY use your tweezers to short those two points together for a second. If successful, you’ll see the following message.

Success screen

“Data access error” sounds bad, right? Well, in this case, such an error indicates success. Yeah, and you don’t stop ’cause it’s 1-8-7 on an undercover TSOP! Well, SPI, but that wouldn’t rhyme…

Once you get into the BIOS (press F1), be sure to disable all passwords or set them to blank and then save.

 

That’s it! Worked perfectly on my ThinkPad T420. I found this method via a YouTube video, though his pictures were not so clear. Hopefully this will help my fellow ThinkPad enthusiasts!
