Curious what you’ve got installed on it. What do you use a lot but took a while to find? What do you recommend?

  • lodronsi@beehaw.org · 7 points · 2 years ago

    I’ve got a Synology 918+ with 16TB in raid 10.

    Of the Synology software, I regularly use: Photos (a photo backup and organization tool), Drive (a private “cloud” sync like Dropbox), the contacts and calendar services, and Surveillance Station, their security camera monitor/recorder. Via Docker, I also run DokuWiki, Gitea, draw.io, MinIO, Postgres, FreshRSS, Firefly III, Calibre, and a few others. Like others, Time Machine backups of laptops and backups of non-Apple hardware use a lot of the space.

    I also have my older Synology 213 running still just as a place to backup important stuff from the primary.

  • karce@beehaw.org · 6 points · 2 years ago (edited)

    I’ve got a ‘NAS’ setup on my desktop computer/server. I use it for almost everything. It runs VMs and games and self-hosted servers, etc, etc. It is Arch Linux but does it all. Plex/Sonarr/Radarr/QBittorrent.

    24 TB of HDD in raid 10.

    I haven’t found a good reason to keep a separate computer/server. It pretty much just always complicates the setup. If I need more separation, a VM is usually a better answer in most cases as far as I can see.

  • DM_Gold@beehaw.org · 6 points · 2 years ago

    I’d like to build a NAS. Does anyone have a simple guide I could follow? I do have experience building my personal computers. I could search online for a guide, but a lot of the time small communities like this will have the end-all be-all guide that isn’t well known.

    • Parsnip8904@beehaw.org · 3 points · 2 years ago

      I don’t have one off hand but a NAS at homelab level is not that different from a server.

      I have had success with getting a second hand server with a moderately powerful processor (old i5 maybe?), a good 1/10Gb network card (which can be set up with bonding if you have multiple ports), and lots of SATA ports or a raid card (need PCI slots for the cards as well).

      I would go with an even lower-power processor for power savings if that’s a thing. ECC RAM would be great, especially for ZFS/btrfs/XFS.
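
      For the multi-port bonding mentioned above, a minimal netplan sketch might look like the following, assuming an Ubuntu-style OS with two ports named eno1/eno2 (the interface names and the LACP mode are assumptions, and LACP needs a switch that supports it):

      ```yaml
      # /etc/netplan/01-bond.yaml - sketch only; interface names are assumptions
      network:
        version: 2
        ethernets:
          eno1:
            dhcp4: false
          eno2:
            dhcp4: false
        bonds:
          bond0:
            interfaces: [eno1, eno2]
            parameters:
              mode: 802.3ad   # LACP - falls back to balance-alb if the switch can't do it
            dhcp4: true
      ```

      Apply with `sudo netplan apply` and check the bond state in /proc/net/bonding/bond0.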

  • Drew@sopuli.xyz · 5 points · 2 years ago

    I’ve just been using an old laptop with jellyfin, radarr, sonarr and transmission.

  • pAULIE42o@beehaw.org · 5 points · 2 years ago

    There’s plenty of replies with options for decent, current NAS setups - so I’ll reply with my 1st NAS instead…

    You could start with a Pi-NAS to save a lot of $$coin$$… start with a Raspberry Pi 4 8GB; it has gigabit ethernet, so it meets that baseline… since you’ll be running over the USB-3 bus regardless, you can get away with buying cheap USB drives; there are many brands, but Western Digitals are pretty cheap… they go up to like 40TB nowadays, but 4TB drives are only $100 or so… I went with two 8TB drives. It’s better, IMO, to go with the larger 3.5" versions because they come with external power supplies. I found that with the smaller 2.5" drives, the Pi could only power one of them over USB…

    I used no RAID, as you have to jump through a few extra hoops to get RAID set up over drives on the USB-3 bus… backups were done through my Proxmox PBS server - but we’re not here for the safe backup talk, right?
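
    For the curious, those extra hoops amount to roughly this mdadm sketch (the device names /dev/sda and /dev/sdb are assumptions - check lsblk first, and note that USB enumeration order can change between boots, which is part of why it’s fiddly):

    ```shell
    # Sketch: software RAID1 over two USB drives on a Pi (Debian/Raspberry Pi OS style)
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    sudo mkfs.ext4 /dev/md0

    # Persist the array definition so it assembles at boot
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u
    ```

    After that, /dev/md0 mounts like any other disk and can be shared out by OpenMediaVault.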

    All this was running OpenMediaVault, which is pretty decent NAS software. It has support for all the connection types you’d want - and believe it or not, I also ran Plex in Docker and got decent results; while I wasn’t able to do any transcoding, wireless playback worked quickly enough for me - and I could even watch movies remotely…

    I mention this setup b/c a 16TB Pi-NAS can be had for $300, all in… you can see speeds of 100MB/s but I found 40-50MB/s was an average because of WiFi or other bottlenecks.

    It’s cool to have options when building a NAS; I’ve since moved my NAS to a Proxmox VM on my Dell PowerEdge server, but the Pi-NAS ran without fail for four years…

  • ntldr@beehaw.org · 5 points · 2 years ago

    Desktop PC running Proxmox with a bunch of VMs, mostly focused around hosting Plex but some other stuff as well. Below are some of my VMs. All are running Ubuntu Server, btw.

    • HDDs get passed into this VM, which uses mergerfs to pool them all together. Then I’m running an NFS server to share the drives with the other VMs that need access.
    • Torrent client, sonarr, radarr, etc., to automatically acquire content.
    • Plex VM
    • Gaming servers (hosts Minecraft, Valheim, etc. servers)
    • Externally exposed nginx instance, hosts sites such as Overseerr.
    • Internally exposed nginx instance, allows for HTTPS access to all internal services (sonarr, radarr, flood, etc.).
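
    The mergerfs-plus-NFS arrangement in the first bullet can be sketched with an fstab entry and an export line (the mount points and the 10.0.0.0/24 subnet are assumptions):

    ```
    # /etc/fstab - pool /mnt/disk1..disk3 into /mnt/pool via mergerfs
    # ("mfs" places new files on the branch with the most free space)
    /mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0

    # /etc/exports - share the pooled mount with the other VMs over NFS
    /mnt/pool  10.0.0.0/24(rw,sync,no_subtree_check)
    ```

    A nice property of mergerfs is that each underlying disk stays a plain filesystem, so a single drive failure only loses the files on that drive.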
  • dfyxA · 5 points · 2 years ago

    Custom low power build:

    • Case: some old 2U Supermicro case with 6 HDD bays that got thrown out at work
    • Mainboard: ASRock J4105M Micro ATX
    • CPU: Intel Celeron J4105
    • RAM: 8 GB DDR4 (CPU doesn’t support more)
    • RAID controller: LSI 9212-4i
    • System SSDs: 2x 128 GB Intenso 2.5" SATA SSD (mounted into the first two bays with 3D-printed 2.5" to 3.5" adapters)
    • Data HDDs: 2x Seagate Ironwolf 4TB, 2x Seagate Exos X16 14TB, combined into an 18 TB ZFS pool
    • PSU: PicoPSU

    My main goal was to build a 4 HDD NAS that can run at very low power and without active cooling most of the time (because it sits under my desk) but can spin up fans if needed.

    On the software side I run Ubuntu 22.04, Docker and Jellyfin as a media server. The J4105’s integrated Intel UHD graphics handle hardware video encoding.

  • jeansburger@beehaw.org · 5 points · 2 years ago

    Currently running an R710 in RAID6 with 32TB usable, but between the data on Plex and backups of things in the rack I’m low on space.

    I’m looking at getting 8 Odroid HC4s and some refurbished 20TB drives to build a GlusterFS cluster that will host all of my VM disks and backups. At least with that I’ll have 80-120TB depending on how much fault tolerance I want. Because they have two HDD slots, I can double my storage when it gets low and just add more boards to expand the array when I’m tight for space again.
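
    As a sketch, a distributed-replicated Gluster volume across eight HC4s might be created like this (the hostnames hc4-1 through hc4-8 and the brick path are assumptions; replica 2 mirrors bricks in pairs, which lands at the ~80TB-usable end of the estimate with 20TB drives):

    ```shell
    # Run once from any node, after peering the others:
    #   gluster peer probe hc4-2   (... through hc4-8)
    gluster volume create vmstore replica 2 \
      hc4-1:/bricks/b1 hc4-2:/bricks/b1 \
      hc4-3:/bricks/b1 hc4-4:/bricks/b1 \
      hc4-5:/bricks/b1 hc4-6:/bricks/b1 \
      hc4-7:/bricks/b1 hc4-8:/bricks/b1
    gluster volume start vmstore
    ```

    Growing the volume later is a matter of `gluster volume add-brick` with new pairs, which matches the “just add more boards” plan.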

    • nodiet@feddit.de · 4 points · 2 years ago

      I don’t have any experience with the Odroid HC4, but I used to have an N2, and while I am sympathetic towards Odroid I can’t help but feel their software/firmware support is lacking. I always had issues with the GPU driver, and there was either a hardware or firmware fault with the USB controller which led to random access errors.

      • jeansburger@beehaw.org · 3 points · 2 years ago

        Oh, I’m not going to use the trash OS Odroid supplies. I’m going to use Armbian, which is much more stable and has better support for the tooling I want to use.

        • OzoneThePirate@sopuli.xyz · 2 points · 2 years ago

          Thank you for saying that, I’ve been struggling with my HC4 using the Odroid-supplied OS for a while and need to start fresh. Definitely going down this path this time. Cheers!

  • unfazedbeaver@lemmy.one · 5 points · 2 years ago

    Computer with Ubuntu Server, with a Ryzen APU (3400g), 16GB DDR4 RAM, and 2 x 4TB WD Red CMR Drives.

    Use it as a media server for Jellyfin, and also as a file server using NFS. Works super awesome and I wish I had done this sooner

  • Kaldo@beehaw.org · 5 points · 2 years ago

    It’s something I’ve always wanted to set up for my personal files, docs, media, etc., but I get dissuaded once I see Synology costs, hard drive requirements, RAID setup options, and just the general power draw / heat & noise generation. Looking forward to answers here; I’d be very happy to get off cloud storage, but not if maintaining and setting it up is a second job.

  • SoftestVoid@beehaw.org · 5 points · 2 years ago (edited)

    I’ve got a HP DL360 Gen9 running Ubuntu Server LTS and ZFS on Linux with 8× 1.2TB 10k disks, and an external enclosure (connected by external SAS) with 8× 2TB (3.5" SATA) disks. The 1.2TB disks are in a ZFS raid10-style array (striped mirrors) which has all our personal and shared documents, photos, etc. The 2TB disks are in a raidz2 and store larger files.
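
    As a sketch, the two pools could be created along these lines (the pool and disk names are placeholders - in practice /dev/disk/by-id paths are safer than sdX names):

    ```shell
    # Striped mirrors ("raid10") over the 8x 1.2TB 10k disks:
    zpool create fastpool \
      mirror sda sdb  mirror sdc sdd \
      mirror sde sdf  mirror sdg sdh

    # Double-parity raidz2 (ZFS's RAID6 analogue) over the 8x 2TB disks:
    zpool create bulkpool raidz2 sdi sdj sdk sdl sdm sdn sdo sdp
    ```

    Mirrors give better random I/O for the documents and photos; raidz2 trades speed for capacity on the bulk storage.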

    It uses a stupid amount of power though (mainly the 10k disks) so it’s going to be replaced this year with something newer, not sure what that will look like yet.

  • dollop_of_cream@beehaw.org · 5 points · 2 years ago

    I’m using a Synology setup. I thought I’d grab an off-the-shelf option as I have a habit of going down rabbit holes with DIY projects. It’s working well, doing a one-way mirror of my local storage with nightly backups from the NAS to a cloud server.

    • UselesslyBrisk@infosec.pub · 5 points · 2 years ago (edited)

      I use Synology. I’ve done FreeNAS, OpenFiler, even just straight ZFS/Linux/SMB/iSCSI on Ubuntu and others. Synology works well and is quite easy to set up. I let the NAS do file storage and tie other computers to it (namely SFF Dell machines) to do the other stuff, like Pi-hole or Plex. Storage is shared from the NAS via CIFS/SMB or iSCSI.

      Synology also has one of the best backup tools for home use, IMHO, with Active Backup for Business. It can do VMware, Windows, Mac, Linux, etc. I actually have an older second NAS for that alone, but you can do it all in one easily.

  • Swintoodles@beehaw.org · 4 points · 2 years ago

    I built a massively overkill NAS with the intention of turning it into a full-blown home server. That fizzled out after a while (partially because the setup I went with didn’t have GPU power options on the server PSUs, and finagling an ATX PSU in there was too sketchy for me), so now it’s a power hog that just holds files. I just turn it on to use the files, then flip it back off to save on its ridiculous idle power costs.

    In hindsight I’d have gone with a lighter motherboard/CPU combo and kept the server-grade stuff for a separate unit. The NAS doesn’t need more than a beefy NIC and a SAS drive controller, and those are only x8 PCIe slots at most.

    Also, I use TrueNAS Scale; it’s more work to set up than Unraid, but the ZFS architecture seemed too good to ignore.

    • Parsnip8904@beehaw.org · 3 points · 2 years ago

      A GPU isn’t really necessary for a home server unless you want to do lots of transcoding. I have a power-hungry server that runs a VM offering Samba and NFS shares as well as a bunch of other VMs, LXC containers and Docker containers, with a full *arr stack, Plex, Jellyfin, a JupyterLab instance, Pi-hole and a bunch of other stuff.

      • Swintoodles@beehaw.org · 3 points · 2 years ago

        I was trying to do some fancy stuff like GPU passthrough to make the ultimate all-in-one unit that could hold 2 or 3 GPUs and have several VMs running games independently, or at least the option to spin one up for a friend if they came over. I’m probably not quite sophisticated enough to pull that off anyway, and the use case was too uncommon to bother with after unga-bungaing a power distribution board after a hard day of work.

        • Parsnip8904@beehaw.org · 1 point · 2 years ago

          Ah now I get it. You’ll probably need an expensive PSU to make that work. I’m sure there would be some option though in the server segment for people building GPU clusters.

          • Swintoodles@beehaw.org · 1 point · 2 years ago

            Yeah, I was trying to go all the way when I should have compartmentalized it a bit and just had two computers instead of one superbeast. The server PSUs aren’t super expensive, relatively speaking; 1U hot-swap 1200W PSUs with 94% efficiency are like $100. The problem was that the power distribution board I had didn’t have GPU power connectors, only CPU power connectors, and tired me wasn’t going to accept no for an answer and thus let out the magic smoke. I got lucky: the distribution board seems to be the intended failure point in these things, so the expensive motherboard and components got by unscathed (I think; I never used the GPU, and it was just some cheap eBay thing). Still a fairly costly mistake that I should have avoided, but I was tired that night and wanted something to just work out.

            • Parsnip8904@beehaw.org · 1 point · 2 years ago

              That’s quite interesting. I would have thought that they were more expensive than that. I’ve been there too. You’re doing a bunch of stuff, tired and just want it to somehow work. What have you been doing with the build after that, if you don’t mind me asking?

              • Swintoodles@beehaw.org · 3 points · 2 years ago

                Was going to make it a central computer handling the computing for several members of the family. Was hoping to get a basic laptop that could hook into the unit and play games/program on a virtual machine with graphics far above what the laptop could have handled, plus the aforementioned spin-up of more machines for friends. Craft Computing had a lot of fun computing setups I wanted to learn from and emulate. I would have also had the standard suite of video services and general tomfoolery. Maybe dip into crypto mining with idle time later on. Lots of ideas that somewhat fizzled out.

                • Parsnip8904@beehaw.org · 2 points · 2 years ago

                  That sounds really interesting. I have some VMs set up in a similar way for family members, though they’re very low power. They’re mostly used to ease the transition from Windows to Linux. I hope you get to do it again sometime :)