• 0 Posts
  • 20 Comments
Joined 2 months ago
Cake day: July 3rd, 2025

    1. It partitions the same things into separate locations. One library is here, another one there, some older version somewhere else: which one should this binary load? Where should I point the -L to? Of course, compiling things completely from scratch is unmaintainable anyway (that’s why PKGBUILD was another big point: it’s easy to create your own AUR packages that get pacman-level maintainability; see the sketch after this list), but sometimes you want to check whether that new patch solves your issue.
    2. If the distro does not care, packages will end up with different prefixes. I can see some use for /opt, but it should be my decision whether I want something installed in /opt/bin or /usr/local/bin. In distros that did not enforce where things go, it was all over the place. But to be fair, to me even the bin/sbin separation is BS.
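
    To make the PKGBUILD point concrete, here is a minimal sketch of packaging a tool yourself with a patch applied; every name, URL and file in it is invented, and real checksums would replace the SKIPs:

```bash
# Hypothetical PKGBUILD: sometool, its URL and fix-crash.patch are made up.
# makepkg turns this into a normal package that pacman installs and tracks,
# so the patched build never leaks into /usr/local.
pkgname=sometool
pkgver=1.2.3
pkgrel=1
pkgdesc="Example tool built from source with a local patch applied"
arch=('x86_64')
url="https://example.com/sometool"
license=('MIT')
makedepends=('gcc' 'make')
source=("https://example.com/sometool-${pkgver}.tar.gz"
        "fix-crash.patch")      # the patch you want to try out
sha256sums=('SKIP' 'SKIP')      # use real checksums outside a quick test

prepare() {
  cd "sometool-${pkgver}"
  patch -p1 < "$srcdir/fix-crash.patch"
}

build() {
  cd "sometool-${pkgver}"
  make
}

package() {
  cd "sometool-${pkgver}"
  make DESTDIR="$pkgdir" PREFIX=/usr install
}
```

    Build and install it with makepkg -si; from then on pacman knows every file the package owns.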

  • Unlike Linux, these BSDs have a clear separation of the OS from installed packages. OS files and data are stored in places like /bin and /etc, while user-installed packages get installed to /usr/local/bin and /usr/local/etc.

    What do you consider the OS? Is Firefox part of the OS? Is an office suite part of the OS?

    On FreeBSD, the freebsd-update command is used for upgrading the OS and the pkg command is used for managing user packages. On OpenBSD, the syspatch command is used for upgrading the OS and the pkg_* commands are used for managing user packages.
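
    For reference, the two tracks look roughly like this (illustrative; exact flags depend on the release):

```bash
# FreeBSD: the base system and packages are updated by separate tools
freebsd-update fetch install   # OS
pkg upgrade                    # user packages

# OpenBSD equivalent
syspatch                       # OS security patches
pkg_add -u                     # user packages
```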

    Personally, ditching the /usr/local mess was one of the selling points of Arch for me, but in a way you could achieve the same split in Arch: create a secondary pacman config with RootDir set to /usr/local and alias pacman --config /etc/pacman_local.conf as pkg_pacman (sketched below).
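
    A sketch of that setup, not a tested recipe; the paths and the config filename are assumptions carried over from the comment above:

```bash
# Create a second pacman config rooted at /usr/local (paths are assumptions):
sudo tee /etc/pacman_local.conf >/dev/null <<'EOF'
[options]
RootDir      = /usr/local
DBPath       = /usr/local/var/lib/pacman/
CacheDir     = /usr/local/var/cache/pacman/pkg/
LogFile      = /usr/local/var/log/pacman.log
Architecture = auto

[core]
Include = /etc/pacman.d/mirrorlist
EOF

# Give the separate root its own database directory:
sudo mkdir -p /usr/local/var/lib/pacman

# The alias, e.g. in ~/.bashrc:
alias pkg_pacman='sudo pacman --config /etc/pacman_local.conf'

# Packages installed this way land under /usr/local with their own database,
# mimicking the BSD split between base system and user packages:
pkg_pacman -Sy some-package
```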

  • Ah. Yeah, nuke it from orbit. Since this was a RAT, it had local execution powers, and the attackers knew exactly which distro they were targeting; they could have used some security vulnerability to get root and, in the worst case, even replace the kernel. Hopefully not microcode insertion, so the hardware could be OK.

    But then, it wasn’t an attack on an existing package. So the question is how many people actually downloaded those.


  • It was the AUR. The way the AUR works is that a PKGBUILD file tells makepkg how to build a package from scratch. It can be written so that nothing gets compiled and only a precompiled binary is downloaded (like from GitHub releases); see the sketch below. So it was not a package in the purely Arch sense. With those PKGBUILDs pulled from the AUR, the malicious binaries only sit on their GitHub, or wherever they were hosted, and are not reachable via alternative package managers (pacman, the official one, doesn’t offer the AUR at all).
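
    For illustration, a hypothetical sketch of such a binary-only PKGBUILD; every name and URL here is invented:

```bash
# "-bin" style PKGBUILD: nothing is compiled; a prebuilt binary is fetched
# and copied into the package as-is.
pkgname=sometool-bin
pkgver=1.0.0
pkgrel=1
pkgdesc="Example PKGBUILD that only repackages a prebuilt binary"
arch=('x86_64')
url="https://example.com/sometool"
license=('custom')
# source= can point anywhere, e.g. a GitHub release asset; this is exactly
# where a malicious PKGBUILD can swap in a trojaned download:
source=("https://example.com/releases/download/v${pkgver}/sometool-linux-x86_64")
sha256sums=('SKIP')   # 'SKIP' disables verification: a red flag when reviewing

package() {
  # no build() step at all: the downloaded file goes straight into the package
  install -Dm755 "$srcdir/sometool-linux-x86_64" "$pkgdir/usr/bin/sometool"
}
```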



  • no graphics card whatsoever

    The computer can play H.265 and equivalent without trouble, provided the video file is no higher than 1080p.

    The computer can play AV1 files no higher than 1080p only if I shut every other application down. If, for example, I run a browser and an AV1 file with either mpv or VLC, the system shuts down.

    Can I put all that memory to use and avoid overloading the CPU?

    Most of the answers seem to focus on the main problem, but your question got me thinking.
    Since you are not getting shutdowns with lower-quality videos, maybe you could use RAM to play these.
    Set up a tmpfs. Before you start all the other things, use ffmpeg to recode the video to something without any compression, maybe telling it not to work too fast (for example, limiting it to a single thread), and put the result on that tmpfs. Playing this new file should then be less demanding. The key is that, unlike a player, ffmpeg would not be forced to deliver 30 fps of decoded video in real time. Something along these lines:
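
    A sketch, not a tested recipe; sizes, paths and filenames are assumptions, and note that raw 1080p video needs roughly 90 MB per second of footage at 30 fps, so only short clips fit in RAM:

```bash
# 1. Mount a tmpfs sized to the free RAM:
sudo mkdir -p /mnt/ram
sudo mount -t tmpfs -o size=12G tmpfs /mnt/ram

# 2. Decode the AV1 file once, ahead of time, on a single thread so it never
#    spikes the CPU; unlike a player, ffmpeg may run slower than real time:
ffmpeg -threads 1 -i video-av1.mkv \
       -c:v rawvideo -c:a pcm_s16le \
       /mnt/ram/decoded.nut

# 3. Play the uncompressed copy; decoding it is nearly free:
mpv /mnt/ram/decoded.nut
```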

    Although… are you sure all this RAM is actually fine? Maybe it shuts down on more demanding videos because with those the RAM usage rises into the faulty region?
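
    One quick way to check that suspicion from a running system (this assumes the memtester package is installed; a full offline memtest86+ run from the boot menu is more thorough):

```bash
# Lock and test 12 GiB of RAM with two passes; adjust the size to your free memory.
sudo memtester 12G 2
```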