After a brief stint at distributed computing early in the pandemic, I came back first to Folding@Home, then BOINC, with the following goals:

  • Use some spare computing power to help with worthwhile research.
  • Not drastically increase my power usage.
  • Mainly run projects when my computer would be on anyway, not keep a full desktop power supply running flat out 24/7.
  • Avoid damaging my primary system, and especially not have to replace a fried CPU or GPU in a hurry during the ongoing chip shortage! (I’ve had heating problems with graphics-intensive games on this box.)

Folding@Home only seemed worth running on the GPU, and the tasks took long enough that it was only worth doing if I kept the computer on, which ran up against my goals for power usage, uptime, and overheating risk. And their ARM version had dropped 32-bit support, so I couldn’t put it on the Raspberry Pi either. Well, not without installing a new OS and setting everything up again.

I tossed BOINC on an old Android phone (via F-Droid) to start with, using Science United as a manager to automatically choose projects based on areas of research instead of having to dig into each project one at a time. After a week or so, that seemed to be working out pretty well, so I looked into expanding.


  1. Put Folding@Home on my desktop.
  2. It’s using too much power.
  3. Can I put it on my Raspberry Pi 3B?
  4. The software is 64-bit. The OS on there right now is 32-bit.
  5. Specs show the 3B has a 64-bit processor.
  6. /proc/cpuinfo shows it has a 32-bit processor.
  7. Specs show it should have a BCM2837.
  8. /proc/cpuinfo shows it has a BCM2835.
  9. Magnifying glass shows BCM2837 stamped on the chip.

A close-up view of a circuit board with Raspberry Pi 3 written on it and a Broadcom chip partially hidden by plastic spacers.

WTF?

It turns out all Raspberry Pi models report BCM2835 in /proc/cpuinfo?!?!?
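
Specifically, the Hardware field reflects the BCM2835 peripheral platform the kernel uses for every Pi SoC, so it can’t tell the models apart; the device tree can. If you want to double-check your own board, something like this should show the mismatch (the comments are roughly what a 32-bit Raspberry Pi OS reports, not exact output):

    # A 32-bit OS reports a 32-bit machine type, even on 64-bit silicon
    uname -m                      # armv7l on 32-bit Raspberry Pi OS

    # On a 32-bit OS, every model shows up as BCM2835 here
    grep Hardware /proc/cpuinfo

    # The device tree is what actually identifies the board
    cat /proc/device-tree/model   # e.g. "Raspberry Pi 3 Model B Rev 1.2"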

I decided to put BOINC on an old phone instead. I don’t feel like installing a new OS on the Pi. *sigh*

At first I thought this was related to Windows losing drives on wake. It started happening around the same time, it also involved waking up from sleep, and the CD/DVD drive was disappearing in Windows along with the vanishing hard drive.

But while moving the cables fixed that problem, it didn’t fix this one.

It was only mildly annoying, especially compared to regularly losing access to a large chunk of local storage, so I figured I’d come back to it later.

Other people are seeing this too, and it turns out to be a recent bug in the Linux kernel. At least with Fedora’s rapid kernel updates I probably won’t have to wait too long between when the patch lands and when it hits my desktop. It’s been years since I compiled my own kernel, and I don’t feel like starting that up again now!
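
In the meantime, checking whether a newer kernel has arrived is just a matter of comparing what’s running against what Fedora is offering (nothing fancy, just the stock dnf tooling):

    # Kernel currently booted
    uname -r

    # Has a newer kernel build been pushed to the repos yet?
    dnf check-update kernel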

My main desktop PC dual-boots Windows 10 and Fedora Linux. I have an SSD drive for each OS, and recently added an HDD for larger shared storage. It’s worked out pretty well except for a recurring problem: Sometimes the shared drive just disappears from Windows after I wake it up from sleep mode.

I don’t mean Windows just unmounts the filesystem. I mean Windows stops seeing the hardware at all.

When that happens, it sometimes reconnects after a few minutes…and sometimes doesn’t. When it doesn’t, the drive isn’t just invisible in Windows; it also doesn’t get cleaned up properly on reboot, so Linux will only mount it read-only the next time I boot into it, until I get back into Windows and shut it down cleanly.
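
For what it’s worth, that read-only fallback is standard ntfs-3g behavior when a Windows volume is left marked dirty or hibernated. Assuming the shared drive is NTFS (the usual choice for a disk shared with Windows) and using a made-up device name, the dirty flag can be cleared from the Linux side once you’re sure Windows isn’t hibernated or fast-started:

    # Placeholder device name for the shared data partition.
    # ntfs-3g falls back to read-only (or refuses a read-write mount)
    # while the volume is marked dirty or hibernated by Windows.
    sudo mount /dev/sdb1 /mnt/shared

    # Clear the dirty flag so the next mount can be read-write.
    # Only safe if Windows really isn't hibernated or fast-started.
    sudo ntfsfix -d /dev/sdb1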

Time to get to the bottom of it. Most of what I found online boiled down to:

  • Update the SATA controller driver.
  • Update the motherboard firmware.
  • Make sure the cable connection is solid.
  • Move the cable to another connector.
  • Replace the cable.
  • Get a better drive; [brand the OP mentioned] is terrible.
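
Since the box dual-boots anyway, the Linux kernel log is a cheap cross-check on whether the SATA link itself is flaky before buying new cables or drives (device numbering varies; look for repeated link resets or SError lines):

    # Kernel messages from the current boot, filtered to ATA events
    journalctl -k -b | grep -E 'ata[0-9]+'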


I ran into a weird problem testing some websites on a local NginX installation on my Mac, where it was serving different sites to Firefox and Chrome for the same URL.

I’d put the server names into /etc/hosts pointing to 127.0.0.1, and I’d set up NginX with multiple server {} blocks, each with a different server_name. But while Chrome would load the individual sites for one.example.com and two.example.com, Firefox would always get the content from one.example.com.
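
Simplified (certificates and the actual site configuration left out), the setup looked something like this:

    # /etc/hosts: both names point at the IPv4 loopback address
    127.0.0.1   one.example.com
    127.0.0.1   two.example.com

    # nginx: one.example.com listened on IPv4 and IPv6...
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name  one.example.com;
    }

    # ...while two.example.com only listened on IPv4
    server {
        listen 443 ssl;
        server_name  two.example.com;
    }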

A little more testing confirmed that all Chromium-based web browsers (I tried Edge, Opera, Brave and Vivaldi) were getting the correct sites, but Firefox and Safari were both getting the wrong server’s content.

When I compared the server blocks, I noticed that one.example.com was listening on both IPv4 and IPv6, but two.example.com was listening only on IPv4. I added the second listen directive, reloaded nginx, and voila! It worked in Firefox and Safari!

    listen 443 ssl;
    listen [::]:443 ssl;
    server_name  two.example.com;

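(The reload itself is the usual check-then-reload: nginx -t to validate the config, then nginx -s reload. With a Homebrew install, brew services restart nginx works too.)

    sudo nginx -t          # validate the configuration
    sudo nginx -s reload   # apply it without stopping the server
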
So apparently, even though I’d only pointed two.example.com to 127.0.0.1 (IPv4), Firefox and Safari were connecting to ::1 (IPv6) instead. And since NginX had only bound one.example.com to that address, that’s the site it loaded. It’s not clear whether Firefox and Safari are both doing something weird and Chrome isn’t, or whether they’re both using the macOS system resolver and Chromium is doing its own thing.

TL;DR: If you listen on IPv6 in one localhost server {} block, listen on it in all of them!