Ubuntu 26.04 LTS Released Yesterday — What’s New?

Ubuntu 26.04 LTS “Resolute Raccoon” was released April 23, 2026. It is supported until May 2031 as a standard LTS, with longer coverage available through Ubuntu Pro.

This is not just another GNOME wallpaper release. 26.04 moves Ubuntu onto Linux 7.0, GNOME 50, a Wayland-only GNOME session, newer virtualization tooling, newer OpenSSH, Dracut by default, APT 3, systemd 259, and a lot of hardware enablement work.


The big shifts (quick version)

  • Kernel 7.0 → new hardware baseline
  • GNOME 50 → faster, cleaner desktop
  • Wayland-only GNOME → Xorg is no longer an option there
  • Stronger virtualization stack → actually useful for real workloads
  • Modern GPU + CPU support → Intel Xe, Arc, newer AMD, better NVIDIA

Desktop changes (what you’ll actually notice)

Wayland is no longer optional in GNOME. There is no Xorg GNOME session anymore. X11 apps still work via XWayland, and other desktops can still use Xorg.

GNOME 50 is a real upgrade:

  • better responsiveness
  • lower CPU/memory use in core components
  • improved fractional scaling, VRR, HDR
  • better remote desktop

Default apps got cleaned up:

  • Papers (PDF) replaces Evince
  • Loupe replaces Eye of GNOME
  • Ptyxis replaces GNOME Terminal
  • Resources replaces System Monitor
  • Showtime replaces Totem

Less legacy, more consistency.


Hardware / kernel

Linux 7.0 brings:

  • Intel Core Ultra (new gens) support
  • Intel Xe2/Xe3 graphics
  • new Intel Arc GPUs (consumer + pro)
  • improved AMD/Intel video acceleration (VA-API on by default)

Power management is better too:

  • improved power-profiles-daemon (especially AMD laptops)
  • NVIDIA Dynamic Boost enabled where supported

 


Server / virtualization (quietly one of the best parts)

  • New HWE virtualization stack (qemu/libvirt/etc.)
  • AMD SEV-SNP + Intel TDX
  • better NUMA + PCI affinity
  • improved virtio / multiqueue
  • NVMe support in libvirt
  • NVIDIA MIG support

This is a very solid base for KVM hosts.

 


The uutils (Rust coreutils) situation

Now the controversial bit.

Ubuntu 26.04 experiments with uutils (Rust-based coreutils) in some parts of the system. This is not a full replacement for GNU coreutils across the board, but it is present enough to matter.

What’s the concern?

  • uutils is not 100% behavior-compatible with GNU coreutils
  • edge cases and scripting differences do exist
  • subtle breakage is possible, especially in older or non-trivial scripts

This is not theoretical — this is exactly the kind of change that can bite admins who assume “ls/cp/mv behave exactly the same everywhere.”

Reality check

  • Most normal users won’t notice
  • Most simple scripts will work fine
  • Advanced scripts, weird flags, or strict POSIX/GNU assumptions may break

What you should do

If you care about consistency:

  • explicitly depend on GNU coreutils in scripts when it matters
  • test anything non-trivial before rolling into production
  • consider pinning or reinstalling GNU coreutils behavior if needed
  • avoid assuming behavior based on older Ubuntu/Debian systems
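If you want to know which implementation a given box actually has, the --version banner is the quickest tell. A minimal sketch (the exact uutils banner text is an assumption; check your own output):

```shell
# Print which coreutils flavor provides cp on this system.
flavor=$(cp --version 2>/dev/null | head -n1)
case "$flavor" in
    *"GNU coreutils"*) echo "cp is GNU coreutils" ;;
    *uutils*)          echo "cp looks like uutils - test your scripts" ;;
    *)                 echo "cp is: ${flavor:-unknown}" ;;
esac
```

The same check works for any of the utilities (mv, ls, date, and so on).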

If you’re running servers, especially automation-heavy ones, this is worth paying attention to.


Other admin-facing changes

  • OpenSSH 10.2 (older crypto removed / tightened defaults)
  • Chrony replaces systemd-timesyncd
  • Dracut replaces initramfs-tools (initramfs-tools still available)
  • APT 3, apt-key fully gone
  • systemd 259 (SysV compatibility nearing end-of-life)

LTS support status

  • 26.04 → supported to May 2031
  • 24.04 → supported to May 2029
  • 22.04 → supported to May 2027
  • 20.04 → now ESM only

Bottom line

26.04 is a baseline shift release, not just an incremental one:

  • Wayland is now reality
  • GNOME is modernized
  • kernel + hardware support jumped forward
  • virtualization is significantly better
  • and yes — there are some experimental changes (like uutils) that deserve caution

Release Notes

Ubuntu 26.04 LTS Desktop

Ubuntu 26.04 LTS Server

Turning down the noise on your server’s logs with Fail2Ban

Access Denied!

If you’re running services exposed to the public internet, you’ll obviously need a way to remotely administer the machine. By far, the most common method is SSH, typically via OpenSSH. It is used on the vast majority of BSD and Linux systems.

One of the best improvements you can make to your SSH security model is to simply disallow password authentication entirely. A password can be guessed, leaked, or brute-forced. While an SSH private key could also be leaked or stolen, it is still widely considered to be the more secure and preferred option.

That said, there are situations where you may want or need to allow login from arbitrary machines. In those cases, requiring a private key may not be practical, since you would need to carry it with you everywhere, which isn’t always possible or convenient.

Basic SSH Hardening

First things first: do not allow anyone to log in as root using a password.

Really, you should not allow root login over SSH at all. But if you must allow it, then ensure it is key-only authentication.

If your server is exposed to the public internet, and especially if it listens on the default port 22, you can absolutely expect it to be hammered by bots and automated attacks. These will attempt logins constantly, probing for weak credentials.

They will also crawl your web services, checking endpoints and attempting to fingerprint your system. From that fingerprint, they can infer what vulnerabilities might apply. This is not hypothetical — expect hundreds or thousands of attempts per day.

Enter Fail2Ban

This is where Fail2Ban comes in.

Fail2Ban monitors logs for suspicious activity and reacts automatically. You define patterns (for example, repeated failed SSH logins), and when those patterns are matched, Fail2Ban takes action.

The most common setup is simple:

If an IP fails to authenticate 3 times, ban it at the firewall level.

Fail2Ban does this by inserting firewall rules (via iptables, nftables, or UFW depending on your system). Once triggered, traffic from the offending IP is dropped immediately.

This drastically reduces the effectiveness of brute-force attacks. Instead of allowing unlimited guesses, attackers get cut off almost instantly and must switch IPs to continue.

More Advanced Uses

You can go further if you want.

For example:

  • Ban entire subnets instead of single IPs
  • Aggressively block repeat offenders
  • Target other services (nginx, mail, etc.)

You can even block large geographic regions if you map IP ranges, though that requires more care and maintenance.

Whitelisting (Very Important)

The opposite is also possible — and recommended.

If you have a static IP at home, work, or another server, you should whitelist it. This ensures you never accidentally lock yourself out.

Even if you are using key-based authentication, having a whitelist adds an extra layer of safety and peace of mind.

Debian Setup (Minimal and Clean)

Install Fail2Ban:

sudo apt update
sudo apt install -y fail2ban

Create a local configuration file (do not edit defaults):

sudo nano /etc/fail2ban/jail.local

Paste the following:

[DEFAULT]

# NEVER ban these IPs (add your own here)
ignoreip = 100.100.100.100 200.200.200.200

# Ban settings
bantime = -1
findtime = 10m
maxretry = 3

# Backend auto works fine on Debian
backend = auto

# Ban action: ufw if you use UFW, otherwise nftables-multiport
banaction = ufw
#banaction = nftables-multiport

[sshd]

enabled = true
port = ssh
logpath = %(sshd_log)s

 

If you are NOT using UFW, set banaction = nftables-multiport in jail.local instead; nftables is the default firewall backend on modern Debian.

Enable and start the service:

sudo systemctl enable fail2ban
sudo systemctl restart fail2ban

If using UFW, make sure it is enabled and allows SSH:

sudo ufw allow ssh
sudo ufw enable

Verify everything is working:

sudo fail2ban-client status
sudo fail2ban-client status sshd

What this configuration does:

  • 3 failed SSH attempts result in a ban
  • Ban is permanent (bantime = -1)
  • Your IPs are never banned
  • Firewall rules are handled via UFW (or iptables if you chose that route)
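One caveat with bantime = -1: sooner or later you will permanently ban an address you actually wanted. Removing one is a single command (the IP here is just an example):

```shell
# Remove a banned address from the sshd jail
sudo fail2ban-client set sshd unbanip 203.0.113.7
```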

Final Thoughts

Fail2Ban is not a silver bullet, but it is an extremely effective first line of defense. It turns constant background noise from the internet into something manageable and largely harmless.

Combined with disabling password authentication, using SSH keys, and disabling root login, you end up with a setup that is simple, clean, and very difficult to attack in practice.

Old Enterprise Drives… Good option? Let’s do some power testing

Recently I did a little trade deal and ended up with eight HGST 3 TB SAS drives. They’re old, very old. 2012 old — yikes! But I think they’ve been sitting a few years, so maybe they don’t have crazy hours on them.

Stack of drives

These aren’t bad drives, especially in today’s market with the insane price hikes. Each disk is 3 TB, 3.5″ and 7,200 RPM. Benchmarked individually I was getting a consistent read speed of 150 MB/sec. Not bad!

The elephant in the room, though: how much power do these suckers pull? I set up an old machine to bench-test that. Here is the data I got; power figures are total system power at the wall socket, measured with a Kill-A-Watt meter.

The drives were cheap… next to nothing. Basically with the trade I did, each one cost me less than $10. The HBA was $18 shipped and the SAS breakout cable was $13 shipped. All readily available on eBay.

Power Figures

Baseline without HBA: 30W
Baseline w/ HBA installed: 37 W
1 SAS disk: 44 W (+7 W)
2 SAS disks: 55 W (+11 W)
3 SAS disks: 65 W (+10 W)
4 SAS disks: 75 W (+10 W)
4 SAS + 1 SATA 3.5″ disk: 81 W (+6 W for SATA)


Observations:
Incremental power per SAS disk: ~ +10 W idle
SATA disk only adds ~+6 W idle

Test Bed System Specs:
Intel Core i5 4570 Haswell Quad Core
2x 4 GB DDR3 RAM — 8 GB Total
8 GB SATA DOM / SSD for the OS
LSI Logic SAS2308 Fusion-MPT SAS-2
Xubuntu 16.04 — doesn’t matter much here, but included for completeness
750W No-Name Power Supply

 

NOTE:
All readings are steady-state idle; initial spin-up and seek currents are not included. Power scaling is roughly linear with the number of disks, which suggests the per-disk figures are consistent.

Let’s look at some early performance figures…

Now… that doesn’t look super impressive. But, if we tweak for larger test size…

That’s more like it! That is with four disks in a RAID 0 stripe. This isn’t how you’d normally use them, but I’m mostly curious what this old hardware can do, and RAID 0 shows the best case.

Decent performance… if they used less power, I’d say this would be a very attractive way to add cheap storage:

8 drives: $124 for 24 TB RAW (1 HBA, 8 drives, 2 cables)
4 drives: $70 for 12 TB RAW (1 HBA, 4 drives, 1 cable)

FreeBSD 15: Creating an NFS share with a USB disk

Here is how I recently went through the process of setting up a Raspberry Pi 4 running FreeBSD 15 to share a 1 TB USB hard drive to my local network via NFS.

Before we get to this point you’ll need to download the Pi aarch64 SD card image from freebsd.org. Decompress the image (xz --decompress) and write it to a microSD card with dd or your favorite imaging tool.

Boot the Pi up, change root’s password, make a normal user, and add them to wheel.

ntpdate -u pool.ntp.org will set your clock, and then you can install pkg or update the system with freebsd-update fetch install.

Now, onto the main point of this… we’re going to wipe a USB HDD, format it with UFS2, and share it on our LAN via NFS.

Beastie with Pi in hand

Step 1 Identify your USB disk

camcontrol devlist

Find your USB disk, e.g., /dev/da0. Be sure it is correct.


Step 2 Wipe and partition the USB disk

gpart destroy -F /dev/da0
gpart create -s gpt /dev/da0
gpart add -t freebsd-ufs /dev/da0

This creates /dev/da0p1.


Step 3 Format the partition as UFS

newfs -U /dev/da0p1

Optional label:

newfs -U -L datadisk /dev/da0p1


Step 4 Create a mount point and mount the disk

mkdir -p /export/data
mount /dev/da0p1 /export/data
df -h /export/data


Step 5 Make it mount automatically at boot

Edit /etc/fstab and add:

/dev/da0p1 /export/data ufs rw 2 2

Mounting without reboot:

mkdir -p /export/data
chown ben:ben /export/data     # Your name here!
mount /export/data


Step 6 Set up NFS exports

Edit /etc/exports and add:

/export/data -network 192.168.1.0 -mask 255.255.255.0 -maproot=root -alldirs


Step 7 Start NFS services

Enable at boot, run these commands:

sysrc rpcbind_enable=YES
sysrc nfs_server_enable=YES
sysrc mountd_enable=YES

That will make changes to /etc/rc.conf for you! Now we run:

service rpcbind start
service mountd start
service nfsd start


Step 8 Verify the export

showmount -e

You should see:

/export/data 192.168.1.0


Step 9 Mount the NFS share on your client PC

sudo mkdir -p /mnt/bsdpi
sudo chown ben:ben /mnt/bsdpi
sudo mount -t nfs bsdpi.lan:/export/data /mnt/bsdpi

Optionally you can put this in your client’s fstab; just do it in a way that won’t hang the boot if the server is offline!
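On a systemd-based Linux client, one reasonable way to do that in /etc/fstab is nofail plus an automount, so a missing server never blocks boot (hostname and mount point from the steps above; the option set is just one sensible choice, not the only one):

```
bsdpi.lan:/export/data  /mnt/bsdpi  nfs  nofail,x-systemd.automount,_netdev  0  0
```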

SanDisk Industrial 8GB Micro SD card, good option for Pis?

SanDisk 8GB Benchmark

I recently purchased a 3 pack of these off eBay for $30 shipped. They came today… I’m actually pretty happy with the performance I’m seeing here.

These should make good OS / Boot disks for Raspberry Pis and other SBCs. I needed a couple and well, I’m cheap. Industrial should be a good thing here, but I guess we’ll see down the road. I’d definitely put my money on one of these versus a no-name card.

Things like system and application logs and other frequent write operations can lead to the early death of an SD card. The “industrial” branding on these microSD cards usually refers more to environmental durability than to optimization for heavy write workloads (as I understand it, anyway). That means they’re rated to withstand wider temperature ranges, humidity, and vibration: conditions you might see in industrial machines, automotive systems, or outdoor electronics.

$5 CPU Activity LED Indicator for your Server made with an RP2040 (Pi Pico)

Because who doesn’t love a little das-blinkenlights??

Pi Pico CPU Meter
Zip-tied to a cable management stick-on, on the front of my server

So this project is incredibly simple. You only need three things:
1. Raspberry Pi Pico (RP2040) — of course, you could use pretty much any microcontroller you want
2. Some LEDs. Mine were from a super cheap set which had a hundred or so? No specs on them, but they’re red.
3. One resistor for each LED you’re going to install. On this build I used 470 ohm resistors, I’m pretty sure.

Pi Pico bottom side
The best part is, you don’t need any kind of custom PCB. I did this just by soldering directly on the Pi Pico board itself. Now, you’ll need to be careful to get decent looking LED spacing… but it is more than possible if you are patient.

Silly-putty holding the LEDs
Silly-putty to the rescue!

You’ll definitely need some way to hold the LEDs in place or you’ll be fighting them the entire time. Blu-Tack would probably be ideal here. I didn’t have that, but I did have an old egg of Silly Putty, which worked out better than expected.

Back side of the Pico
This is what the whole thing looks like fully assembled. Basically, the resistor just gets soldered to the leg of each LED which is NOT on the GPIO. The way I’ve done it here was to tie the ground side of each LED through a resistor, and then they all fold into a backbone, each folded onto the next, down the line, and finally tie over to a ground pad on the Pico.

Here is a video of it in action: https://ben.lostgeek.net/files/demo.mov

Basically, you’ll need two pieces of code. One which gets flashed to the RP2040 and handles the actual work of pulling the GPIO lines high/low when we get data from the computer.

How do we get the data to the Pico? To keep things as simple as possible, we’re just using the Pico’s USB serial. This makes the device show up in Linux as a normal serial port, and makes it dead simple to interface with in software.

This is where the second piece of code comes into play. It is a daemon of sorts which runs on the machine whose CPU activity we want to see. Basically just a small amount of code to read from /proc, see individual CPU core usage, do some math… and if a core is above a certain threshold, we register it as active and tell the Pico over serial.
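A minimal shell sketch of that idea (the 20 % threshold is my arbitrary pick, and the real daemon would write the result to the Pico’s serial port rather than printing it):

```shell
#!/bin/sh
# Sketch: sample per-core busy/total jiffies from /proc/stat twice,
# ~30 ms apart, and build one '0'/'1' character per core.
THRESH=20   # percent busy to count a core as "active" (an assumption; tune it)

sample() {
    # one line per core: "busy total" (idle = idle + iowait columns)
    awk '/^cpu[0-9]/ { t = 0; for (i = 2; i <= NF; i++) t += $i
                       print t - ($5 + $6), t }' /proc/stat
}

t1=$(mktemp); t2=$(mktemp)
sample > "$t1"
sleep 0.03                      # the ~30 ms refresh period from the post
sample > "$t2"

# Pair the two samples line-by-line and threshold each core's delta
mask=$(paste "$t1" "$t2" | awk -v t="$THRESH" '{
    db = $3 - $1; dt = $4 - $2
    v = (dt > 0 && 100 * db / dt > t) ? 1 : 0
    printf "%d", v }')
rm -f "$t1" "$t2"

echo "$mask"   # the real daemon would send this string over serial instead
```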

My new server build has a 6 core, 12 thread Ryzen 5500 processor and so naturally I thought… Hey, wouldn’t it be cool if I had a little activity LED for each core? Well then, having one for each thread would be even cooler!

For a really nice, crisp, “activity LED” kind of genuine feel I’ve found that you want things updating pretty fast. My experience has been that a refresh period of about 30ms achieves that effect quite well.

If we try and poll the system too often, then our daemon which sends data to the Pico will start to use noticeable CPU time… Not very much by any means, but personally I’m happy to have the daemon chewing up no more than ~1 % CPU. With the 30 ms update rate, it is only consuming 0.7 % of one thread. In other words, we’re wasting a quite negligible 0.044 % of the total machine’s compute power to run our little light panel.

And who knows, I’m not a coder… someone could probably make this way more efficient. Let me know if you have much better code for this 😉

You can get the code here: https://ben.lostgeek.net/files/blinken

Usual disclaimer: AI was used in the writing of the code for this.

Unreal Tournament 2004 now FREE! Linux support included!

UT2004

UT2004 is now free and easily installable on Linux, Mac, and Windows thanks to OldUnreal! This is even officially endorsed by Epic, so it’s totally legit.

I just ran through the Linux install in less than 2 minutes. The download was very fast, no issues encountered.

You can use the install script here: https://raw.githubusercontent.com/OldUnreal/FullGameInstallers/master/Linux/install-ut2004.sh

Just mark it executable, and run with ./install-ut2004.sh -d /path/to/install (wherever you want)
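If it helps, the whole dance from my install looked roughly like this (the install path is just an example; put it wherever you want):

```shell
# Grab the installer from OldUnreal, mark it executable, and run it
wget https://raw.githubusercontent.com/OldUnreal/FullGameInstallers/master/Linux/install-ut2004.sh
chmod +x install-ut2004.sh
./install-ut2004.sh -d "$HOME/Games/ut2004"
```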

UT99 (Game of the Year Edition, aka GOTY), the original Unreal Tournament, is also available and has been for a little while.

https://github.com/OldUnreal/FullGameInstallers/tree/master/Linux

This is very exciting, and incredibly cool to see. I bought the Editor’s Choice box set back in the day and this was one of the first big box games I owned with native Linux support.

Linux Can Tell You All About Your SFP Modules

SFP+ Module

To start things off, I’d just finished switching everything over to the new Ryzen 5500–based server I built. Originally this box was using an Intel X540-T2 NIC, which has dual 10Gb RJ45 (10GBase-T) ports. If you’ve ever run 10 gig over twisted pair, you already know those things run hot. Really hot.

I had another NIC kicking around that uses SFP+, and figured if I could find some cheap fiber transceivers it might be a better way to link this machine up to my switch — especially from a heat and power perspective.

I ended up grabbing a pair of Avago-branded SFP+ transceivers and a fiber patch cable for $12.95 shipped on fleabay. Hard to argue with that. They’ve been working great so far, and they run MUCH cooler than the 10GBase-T setup. Like… not even close.

Out of curiosity, I plugged my cheap-o 4x 2.5Gb / 2x 10Gb switch into a Kill-A-Watt to see what was happening. One of the fiber SFP+ modules adds a little over a watt. The 10GBase-T SFP+ module I have? More like 3–4 watts, and that’s just sitting there at idle. Multiply that across ports and uptime and it adds up fast. No wonder 10GBase-T gear runs warm.

Anyway, here’s something neat I didn’t know before today: Linux can tell you all about your installed SFP modules. And not just basic info — actual live diagnostics.

In my case:

sudo ethtool -m enp3s0f0 — See the output of this @ the end of this post.

That command dumps the module’s EEPROM and diagnostic data. You get vendor info, part number, serial number, connector type, supported link modes, wavelength, and cable distance ratings. But you also get live telemetry.

Things like:
Module temperature
Supply voltage
Laser bias current
TX optical power
RX optical power
Alarm and warning thresholds

Which means you can answer questions like:

Is my transceiver overheating?
Is the RX light level getting too low (dirty fiber, bad patch cable, failing optic)?
Is the laser bias current unusually high?
Is anything drifting toward its warning thresholds?
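If you just want the live readings without the full EEPROM dump, a grep over the same output works fine (interface name is from my box; the exact field labels can vary by driver and module):

```shell
# Show just the live DOM telemetry lines from the module dump
sudo ethtool -m enp3s0f0 | grep -iE 'temperature|voltage|bias|power'
```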

I know very little about fiber compared to twisted pair, but this was pretty eye-opening. From that single command I learned my modules use LC connectors, I’m running 10GBase-SR with an 850 nm wavelength laser, and the link is rated for up to 300 meters on OM3, 80 meters on OM2, and 30 meters on OM1. In other words, short-range multimode optics — not single-mode.

And I can see that mine are currently sitting around 35°C, well under the 80°C warning threshold, with healthy TX/RX power levels and no alarms triggered.

That’s honestly pretty awesome.

I always assumed optics were black boxes. Turns out you can actually get a lot of data from them!

Preventing accidental shutdowns when SSH’d into your server…

Because people are stupid, and we’re all stupid sometimes…

If you’re like me, you probably SSH into a server occasionally and, tabbing back to it after some time, forget that it isn’t the local machine’s console. This can really shoot you in the foot! Here are some ways to mitigate accidental shutdowns, usable on any always-on server. See the editor’s note below! I have a new recommendation since this was written.

NOTE: The systemd mask method would interrupt safe shutdown from say, your UPS telling the machine the battery is about to die. This is bad, so don’t create that situation. Unsafe shutdowns can lead to data loss!

EDITOR’S NOTE: I wasn’t aware of it when I wrote this post, but there is a package in most distros called molly-guard which exists for the explicit purpose of stopping accidental shutdowns and reboots over SSH connections.


Method #1 — Via the sudoers file. Never log into a server as root to monkey around. If you need root, escalate, do your task, and drop back out as soon as it’s done; this in and of itself is why sudo and doas are better options.

Here is how I can ensure that while logged in as “ben”, I won’t accidentally fall victim to my own sudo poweroff or sudo reboot stupidity.

visudo

This is on Debian. Use whereis <command-name> to find the actual paths on your specific system. This won’t protect me if I’m logged in as root, but it will if I’m logged in as ben. That is good enough for me 99 % of the time.
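For reference, the rule I added looks something like this (a sketch: the username and paths are examples, so use whatever whereis reports on your system; note that sudoers “!” negations are a guardrail, not real security, since they can be worked around):

```
Cmnd_Alias POWER = /usr/sbin/shutdown, /usr/sbin/poweroff, /usr/sbin/reboot, /usr/sbin/halt
ben ALL=(ALL:ALL) ALL, !POWER
```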

Method #2 — The .bashrc of the user you normally use. This is not really effective, because if you prepend sudo to the command, it is going to run anyway!! It will, however, at least remind you when you’ve tried to do something dumb and turn off the wrong machine. And yes, $SSH_CONNECTION is an environment variable you can use.

.bashrc
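A sketch of what that can look like in ~/.bashrc (the function names shadow the real commands; sudo or a full path sails right past this, so it’s a reminder, not a control):

```shell
# Refuse power commands when the shell session came in over SSH.
_guard_power() {
    if [ -n "$SSH_CONNECTION" ]; then
        echo "$1: refusing - this looks like an SSH session!" >&2
        return 1
    fi
    command "$@"
}
poweroff() { _guard_power poweroff "$@"; }
reboot()   { _guard_power reboot   "$@"; }
shutdown() { _guard_power shutdown "$@"; }
halt()     { _guard_power halt     "$@"; }
```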

Method #3 — This is the only method that will truly stop the machine from shutting down. If you’re using systemd, you can mask the relevant units and the system will not run them until you unmask them.

systemd

Probably not the best idea, because if, say, the power goes out, your machine won’t let the UPS trigger a safe shutdown. For now, I’ll move forward with the first two methods used together.
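For completeness, the masking itself (and the undo) is just this; it needs root, and remember it blocks every shutdown path, UPS daemons included:

```shell
# Block poweroff/reboot/halt system-wide until unmasked
sudo systemctl mask poweroff.target reboot.target halt.target

# Restore normal behavior later
sudo systemctl unmask poweroff.target reboot.target halt.target
```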

Moral of the story? Don’t do normal, casual userspace work on the server! Just spin up a VM. Old habits die hard, but this is definitely one I need to kill.

 

Writing a Better DD Wrapper GUI

Back in September last year, I was working on some kind of a wrapper for dd… Just for my own personal needs, nothing too fancy.

I got a little more ambitious, and was happy enough with the results to share it with anyone interested. Use at your own risk, of course.

What is it? It is a GUI front-end to dd, dependent on the GNU version of dd. Written in Python, using the Qt toolkit.

GitHub: https://github.com/HarderLemonade/ddwrap/

Screenshot of DDWrap

© 2025 LostGeek.NET - All Rights Reserved. Powered by ClassicPress, NGINX, Debian GNU/Linux.