And the voices. “Billy…”

“You fucked the whole thing up.”

“Billy, your time is up.”

“Your time… is up.”

  • 1 Post
  • 35 Comments
Joined 1 year ago
Cake day: January 9th, 2024






  • Hey, that’s wonderful! Good to hear. Yeah I would just throw away the memory and do a certain amount of double-checking of what’s on your disk, as some of it may have been corrupted during the time the broken memory was in there. But yeah if you can run and do stuff without errors after taking out the bad stick then that sounds like progress.
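
    One way to do that double-check of the disk, assuming a Debian/Ubuntu-style system (debsums is just one option for this, not something you have to use):

    # debsums verifies installed package files against the checksums shipped in each package
    sudo apt install debsums
    # -c lists files whose contents no longer match; those packages are candidates
    # for "sudo apt install --reinstall <package>"
    sudo debsums -c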



  • Sounds like serious hardware problems (bad memory sounds highly likely if that’s what memtest is telling you). Replace the faulty hardware before changing out any software, and before the badness “spreads”; you may already have corrupted a certain amount of the data / installed software on your disk by writing back data after the bad memory corrupted it, if you’ve been running on the broken hardware for that long.


  • I wish Debian had better support for software that wants to do its own package management.

    They do it a little bit with python, but for most things it’s either “stay within the wonderful Debian package management but then find out that the node thing you want to do is functionally impossible” or “abandon apt for a mishmashed patchwork of randomly-placed and haphazardly-secured independently downloaded little mini-repos for Node, python, maybe some Docker containers, Composer, snap, some stuff that wants you to just wget a shell script and pipe it to sudo sh, and God help you, Nvidia drivers. At least libc6 is secure though.”

    I wish that there was a big multiarch-style push to acknowledge that lots of things want to do their own little package management now, and that’s okay, and somehow bring it into the fold (again their pyenv handling seems like a pretty good example of how it can be done in a mutually-working way) so it’s harmonious with the packaging system instead of existing as something of an opponent to it. Maybe this already exists, but if it does I’m not aware of it.


  • I don’t see why it wouldn’t. I think for Gentoo, you want to check if you need any security updates with:

    # refresh the Portage tree so the advisory list is current
    emerge --sync
    # app-portage/gentoolkit provides the glsa-check tool
    emerge gentoolkit
    # list the security advisories (GLSAs) that affect the installed system
    glsa-check -l affected
    

    (Edit: Also, as a general rule – don’t type stuff as root just because I or some other random person on the internet tells you to; check the man page or docs to make sure it’s going to do something that you want it to do first.)


  • All Linux systems are very likely vulnerable to this if they’re not patched with the fix. Patched systems will not be vulnerable. That’s true for Debian and Ubuntu, as it is for any Linux system. The commands I gave determine whether or not you’re patched on a Debian or Ubuntu system.

    What distro are you running? I can give you commands like that for any Linux system to determine whether or not you’re patched.
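
    As a rough first check that works on most glibc-based distros (it won’t account for distro backports of the fix, so treat it only as a hint, not a definitive answer):

    # print the installed glibc version; distros often backport fixes without
    # bumping this number, so check your distro's advisory for the real story
    ldd --version | head -n 1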


  • Easiest answer:

    sudo apt update
    sudo apt upgrade
    

    If it upgrades some stuff, you were vulnerable, but you no longer are. If nothing upgrades, then you were already all good.

    If you’re doing that regularly, then your core system will generally be patched, fixing almost all exploits in your core system, including this one. If not, you’re vulnerable to this exploit and likely a whole bunch of other stuff.

    Edit: That’s the simplest answer but if you’re curious you can do a double-check for this particular vulnerability with apt changelog libc6 - generally speaking you won’t see recent changes, but if a package has been recently updated you’ll see a recent fix. So e.g. for this, I see the top change in the changelog is the fix from a couple weeks back:

    glibc (2.36-9+deb12u4) bookworm-security; urgency=medium
    
      * debian/patches/any/local-CVE-2023-6246.patch: Fix a heap buffer overflow
        in __vsyslog_internal (CVE-2023-6246).
      * debian/patches/any/local-CVE-2023-6779.patch: Fix an off-by-one heap
        buffer overflow in __vsyslog_internal (CVE-2023-6779).
      * debian/patches/any/local-CVE-2023-6780.patch: Fix an integer overflow in
        __vsyslog_internal (CVE-2023-6780).
      * debian/patches/any/local-qsort-memory-corruption.patch: Fix a memory
        corruption in qsort() when using nontransitive comparison functions.
    
     -- Aurelien Jarno <aurel32@debian.org>  Tue, 23 Jan 2024 21:57:06 +0100
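
    If you want to cross-check the installed version against that changelog entry, a quick way (still a Debian/Ubuntu sketch) is:

    # the "Installed" line should be at least the version at the top of the
    # changelog (2.36-9+deb12u4 in my case)
    apt-cache policy libc6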
    


  • Using a restricted account with an un-passphrased key is probably by far the easiest way. You could also use rsyncd, but you’ll have to fool with a whole bunch of stuff; the work involved will probably be a superset of just setting up a restricted account for the rsync process to use for rsync-over-ssh.

    Edit: I had totally missed that the issue was the passphrase of the key, not the password.
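
    One way to set that up, as a rough sketch (the paths are placeholders, and pinning the key with rrsync is just one example of a restriction; rrsync ships with rsync, though where it’s installed varies by distro):

    # on the client: a key with no passphrase, used only for this backup job
    ssh-keygen -t ed25519 -N '' -f ~/.ssh/backup_key

    # on the server, in the restricted account's ~/.ssh/authorized_keys, pin the
    # key to read-only rsync access under one directory:
    #   command="/usr/bin/rrsync -ro /data",restrict ssh-ed25519 AAAA... backup-key

    # on the client, remote paths are then relative to /data:
    rsync -a -e 'ssh -i ~/.ssh/backup_key' restricteduser@host:/ /local/backup/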


  • $ while true; do echo Hello, I updated the header; sleep 5; done &
    [1] 1631507
    $ Hello, I updated the header
    sleep 30; echo Sleep is done.
    Hello, I updated the header
    Hello, I updated the header
    Hello, I updated the header
    Hello, I updated the header
    Hello, I updated the header
    Hello, I updated the header
    Hello, I updated the header
    Sleep is done.
    Hello, I updated the header
    $ kill %1
    [1]+  Terminated              while true; do
        echo Hello, I updated the header; sleep 5;
    done
    $
    

    Edit: I’m fairly confident now that you’re just thinking the loop will stop when you run oogabooga, but that’s not how it works. That up above is how it works; the loop keeps going during the sleep, with both of them going on the same terminal, and then after the sleep process terminates I kill the loop, but for the whole 30 seconds before that they were both running. It’ll be the same with oogabooga. This is the situation you’re asking about, yes?




  • I mean, yeah, at this point letting it finish regardless seems like the right play. You could Ctrl-Z and then do little experiments and then resume it if you feel confident mucking around with that and you’re curious.

    You can estimate the current speed pretty accurately with something like “df; sleep 30; df” and then do the math.

    It’s useful to mess around with tar, because it will try to saturate its pipes without waiting, so even if that saturation on its own doesn’t fix anything, you can start to eliminate possibilities for where the issue might be. You know for sure it won’t wait for anything from the other end before continuing to do its reads. “time dd if=/dev/zero of=file” or similar commands can also determine the speed of individual parts of the pipeline.
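
    A rough version of that dd test (the path and size are placeholders; fdatasync/direct keep the page cache from skewing the numbers):

    # write test: ~4 GB of zeroes, flushed to disk before dd reports a speed
    dd if=/dev/zero of=/mnt/target/ddtest bs=1M count=4096 conv=fdatasync
    # read test: bypass the cache so you're really timing the disk
    dd if=/mnt/target/ddtest of=/dev/null bs=1M iflag=direct
    rm /mnt/target/ddtest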

    (Edit: If you’re doing the dd test make sure you write or read a ton of data, to make sure you’re dealing with the physical disk and not the memory cache)

    Best of luck


  • Almost certainly, the bottleneck is one or both of:

    1. The platters can’t simply spin at full speed reading a sequential stream of bytes from one and writing it to another - they periodically have to seek around to different places, stitching the file’s byte stream together from discontiguous chunks or reading and writing metadata. Seek latency of the platters will overshadow any tiny delays incurred by memory or the CPU.

    2. The algorithm is doing something in a fashion that causes delays (e.g. reading each file individually and waiting until it can sort out if it needs to send anything for that file before starting I/O operations for the next).

    Idk if you can do anything about #1 but in similar situations I’ve had good mileage preventing #2 with “tar cj /somewhere | ssh me@host 'cat | tar xj'” (roughly speaking, you obviously may have to adjust things to make it actually work, and on very fast networks maybe it’s better to skip the -j, but that’s the rough idea).
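
    Spelled out a little more concretely, it might look something like this (host, paths, and whether to keep the -j are all things to adjust):

    # stream the source tree over ssh and unpack it on the far side; tar keeps
    # reading ahead instead of pausing per-file
    tar -C /somewhere -cjf - . | ssh me@host 'tar -C /destination -xjf -'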

    Edit: Oh, I misread, is this local? I saw rsync and just thought it was a network transfer. What kind of speeds are you getting? Does doing “tar c /original | tar x” or something like that work any faster?
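
    If it is local, the equivalent pipe would be roughly (paths are placeholders; compression usually isn’t worth it disk-to-disk):

    # read one tree and unpack it into the other, keeping both ends streaming
    tar -C /original -cf - . | tar -C /copy -xf -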