• Chewy@discuss.tchncs.de · 16 points · 1 year ago

    The company that’s been funding bcachefs for the past 6 years has, unfortunately, been hit by a business downturn - they’ve been affected by the strikes in the media production industry. As such, I’m now having to look for new funding.

    Hopefully they find a new company to fund the development of bcachefs. Btrfs has major funding from Facebook and others, so hopefully there’ll be interest in bcachefs, since it has some interesting features over btrfs (namely caching and configurable data placement).

  • blashork [she/her]@hexbear.net · +11/−3 · 1 year ago

    Finally, I’ve been waiting forever for this. btrfs is a mess and zfs is stuck in Oracle jail forever. Finally we can have good CoW on Linux without stupid hoops.

      • eeleech@lemm.ee · +7/−1 · 1 year ago

        RAID 5/6 is somewhat broken, and some people might consider the lack of built-in encryption or of cache-disk support to be problems. For some reason it seems popular to blame it for data loss.

        That being said, it is my favorite filesystem and I have never had problems with data loss, but I do use ECC RAM on my desktop, as is strongly recommended if you use btrfs or zfs (another potential downside).

        • DaPorkchop_@lemmy.ml (OP) · 11 points · edited · 1 year ago

          The recommendation for ECC memory is simply because you can’t be totally sure stuff won’t go corrupt with only the safety measures of a checksummed CoW filesystem; if data can silently go corrupt in memory, it can still go bad before it gets written out to disk, or while sitting in the read cache. I wouldn’t really say that’s a downside of those filesystems; rather, it’s simply a requirement if you really care about preventing data corruption. Even without ECC memory they’re still far less susceptible to data loss than conventional filesystems.
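
          As a toy illustration of why (plain Python, nothing bcachefs- or ZFS-specific): if a bit flips in non-ECC RAM before the filesystem computes its checksum, the checksum covers the already-corrupted buffer, so the corruption is never detected afterwards.

              import zlib

              data = bytearray(b"important family photo bytes")

              # Simulate a bit flip in non-ECC RAM *before* the filesystem
              # ever sees the buffer.
              data[5] ^= 0x01

              # The filesystem checksums whatever is in memory at write time...
              stored_csum = zlib.crc32(data)

              # ...so on a later read the (still corrupted) data verifies fine.
              assert zlib.crc32(data) == stored_csum
              print("checksum matches; the corruption went undetected")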

        • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one · 6 points · 1 year ago

          I’ve been using BTRFS for years without issue, albeit with the standard RAID modes and not 5/6.

          It saved my ass when a power supply issue popped up, causing my array’s hard drives to randomly drop out when reading/writing data. Managed to recover all data just fine, although it did take a while.

        • I’ve been using btrfs for years, and I’d swear I’ve had fewer problems with it than ext4. I’ve never experienced any sort of data loss as a result of the fs.

          I’m really interested to play with bcachefs; evolution and competition are a great thing, and it’d be nice to have a reliable RAID5 built in. While I normally prefer Unix-philosophy tooling, needing layers of different tools to get an FS working is an exception that has caused me trouble in the past, so I’m all for a batteries-included solution.

          The proof of the pudding, for me, will be how easy or hard it is to administer. Messing with the fs tooling is something I do only rarely, so ease of use has a lot of value to me. This is why I don’t prefer ZFS; the btrfs tooling seems more intuitive.

        • Infiltrated_ad8271@kbin.social · +1/−2 · 1 year ago

          The arguments about data loss today are simply ridiculous and fallacious. They made sense in the beginning (or for raid5/6 until recently), but those who lost data were solely at fault for ignoring the warnings; yet instead of taking responsibility for their actions, they joined the horde of haters.

      • DaPorkchop_@lemmy.ml (OP) · +3/−1 · 1 year ago

        Functionally it’s pretty solid (I use it everywhere, from portable drives to my NAS, and have yet to hit any breaking issues), but I’ve seen a number of complaints from devs over the years about how hopelessly convoluted and messy the code is.

        • Atemu@lemmy.ml · 8 points · 1 year ago

          I’ve yet to see someone state this outside of Reddit and I doubt those were devs.

          • ProtonBadger@kbin.social · 5 points · 1 year ago

            Yeah, these days Btrfs is solid and well proven for many use cases, but its old reputation will probably never go away, at least on Reddit. Interestingly, bcachefs has a great reputation despite not being in Linux yet, still having a way to go, and having only a single developer, which is a big problem; I think Linus worries about that too.

            If it lives up to everything Kent Overstreet says about it, it will be a great filesystem and I’ll be happy to use it; until then I’m doing good with Btrfs. On my PC I’ll probably never notice any difference between the two.

    • d3Xt3r@lemmy.nz (mod) · 16 points · 1 year ago

      You’re right, there isn’t any special effort put towards wear leveling, but the bcache FAQ (NOT bcachefs mind you, but the same should be applicable) mentions this:

      “I thought SSDs wore out quickly if you did regular writes to them?”

      For older SSDs, that was true. Newer SSDs will recognize that a given block is getting heavy writes and will actually swap a heavily written block with a more lightly written block (moving the data transparently and using internal pointers to keep track of the move). This is called “wear leveling” and its use can take a drive whose individual blocks might have tens of thousands of writes before failure and produce an SSD that can support up to millions of writes in a given location by moving data around underneath. Also, keep in mind that unlike (most) standard filesystems that treat SSDs as random access devices that can take any number of writes of any size, bcache understands the write issues in SSDs and tunes its write algorithms to minimize the number of erasures needed. As a side note, what we think of as “write” performance problems on SSDs are largely “erase” performance problems.
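
      To make the wear-leveling idea concrete, here’s a deliberately crude toy model in Python (not how any real SSD firmware works, just the concept the FAQ describes): the drive keeps remapping a “hot” logical block to whichever physical block has the fewest erases, so no single location absorbs all the writes.

          # Toy wear-leveling model: logical block 0 is rewritten constantly,
          # but the "firmware" keeps moving it to the least-worn physical block,
          # so erase counts stay roughly even across the whole device.
          NUM_PHYS_BLOCKS = 8
          erase_counts = [0] * NUM_PHYS_BLOCKS
          mapping = {0: 0}  # logical block -> physical block

          for _ in range(10_000):
              target = min(range(NUM_PHYS_BLOCKS), key=lambda b: erase_counts[b])
              mapping[0] = target        # transparently remap the hot block
              erase_counts[target] += 1

          print(erase_counts)  # roughly 1250 erases each, not 10000 on one block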

    • blashork [she/her]@hexbear.net · 3 points · 1 year ago

      bcache is inherently designed to be an SSD cache that sits in front of slower, bigger disks. Bcachefs is an extension of this into its own filesystem; IIRC the words of the bcache creator were: ‘we’ve implemented 80% of a filesystem here, might as well go the rest of the way’. So how much it thrashes a disk depends on what position you give it in the architecture. The caching SSDs are going to be used heavily, taking advantage of their fast random access to absorb all the random accesses, while sequential operations generally go to the slower disk that’s set as the backing device. The backing disks will tend to be accessed less.

      So yeah, it depends on what kind of disk it is, its position in the bcache setup, and what caching options you enable. If you want to look into it further, bcache is fs-agnostic, so if you can find tests of bcache in front of classic Linux filesystems like ext4 and XFS that include hardware-degradation info, you’ll probably see similar usage and hardware wear with actual bcachefs.
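
      If it helps, here’s a very rough Python sketch of the routing idea described above (purely illustrative; the real heuristics live in the kernel, and the bcache tunable for this is called sequential_cutoff, 4 MiB by default if I remember right):

          # Toy version of "random I/O hits the cache SSD, big sequential I/O
          # bypasses it and goes straight to the backing HDD".
          SEQUENTIAL_CUTOFF = 4 * 1024 * 1024  # bytes (assumed default)

          def route_io(request_bytes: int, is_sequential: bool) -> str:
              """Decide which device an I/O lands on in this toy model."""
              if is_sequential and request_bytes >= SEQUENTIAL_CUTOFF:
                  return "backing HDD (bypass cache)"
              return "cache SSD (absorb random I/O)"

          print(route_io(128 * 1024, is_sequential=False))       # cache SSD
          print(route_io(64 * 1024 * 1024, is_sequential=True))  # backing HDD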

    • DaPorkchop_@lemmy.ml (OP) · 1 point · 1 year ago

      Yeah, although the neat part is that you can configure how much replication it uses on a per-file basis: for example, you can set your personal photos to be replicated three times, but have a tmp directory with no replication at all on the same filesystem.
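
      The way I understand it, those per-file options go through the normal extended-attribute interface in the bcachefs. namespace. A minimal sketch in Python, assuming the bcachefs.data_replicas attribute name from the docs and made-up paths (double-check the exact spelling against your bcachefs-tools version):

          import os

          # Keep three copies of everything under the photos directory...
          # (attribute name assumed from the bcachefs docs; paths are examples)
          os.setxattr("/mnt/pool/photos", "bcachefs.data_replicas", b"3")

          # ...but only a single copy (no replication) for scratch data
          # on the same filesystem.
          os.setxattr("/mnt/pool/tmp", "bcachefs.data_replicas", b"1")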

        • DaPorkchop_@lemmy.ml (OP) · 1 point · 1 year ago

          What exactly are you referring to? It seems pretty competitive with both ZFS and btrfs in terms of supported features. It also has a lot of unique stuff, like being able to set drives/redundancy level/parity level/cache policy (among other things) per directory or per file, which I don’t think any of the other mainstream CoW filesystems can do.