User:CounterPillow/NAS-Board

Note: This is not about an official PINE64 product, whether in development or not. It's merely a proposal by one community user.

This page details my ideal small NAS board, based on things the SoC could actually do.

Consider this to only matter once Rockchip finally releases the firmware sources.

Justification

Why Not Just Use An SBC?

The SBCs PINE64 currently offers don't have:

  • PCIe 3.0 for full NVMe bandwidth
  • 2× SATA
  • a standard form factor that fits in PC/NAS cases
  • ATX power input

They do have USB 3, which isn't needed on a NAS; since USB 3 and SATA share a SerDes lane on the RK356x, that lane can instead be freed up for a second SATA connector.

With the suggested specs, you can easily build a 2×HDD 1×NVMe SSD NAS that uses the SSD for cache.

Why Not Just Use An Off-The-Shelf NAS?

Off-the-shelf options are either bad value with closed software (pre-built NAS barebones from e.g. QNAP or Synology) or way too big and expensive a setup (x86 boards with Xeons or Ryzens for ECC).

Most people just need something that can do RAID1/10 with SATA, and don't need a lot of CPU power.

Suggested Specs

SoC

RK3568B2, because it has:

  • PCIe 3.0
  • Another SerDes lane
  • ECC support

RK3566 would NOT make for a good NAS board compared to just using an SBC and should not be used.

As a reminder, RK356x has 4× Cortex-A55 (in-order) with crypto and NEON SIMD extensions.
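On a NAS those extensions mostly matter for disk-encryption and checksum throughput. A minimal sketch of checking that they are actually exposed to userspace, assuming a 64-bit Arm Linux system (the feature names are the generic ones the kernel reports in /proc/cpuinfo, nothing board-specific):

 #!/usr/bin/env python3
 """Check that the ARMv8 crypto/SIMD extensions the RK356x advertises
 are visible to userspace (relevant for dm-crypt/AES and NEON throughput)."""
 
 WANTED = {"aes", "pmull", "sha1", "sha2", "asimd"}  # asimd is NEON on AArch64
 
 def cpu_features() -> set[str]:
     with open("/proc/cpuinfo") as f:
         for line in f:
             if line.startswith("Features"):
                 return set(line.split(":", 1)[1].split())
     return set()
 
 if __name__ == "__main__":
     found = cpu_features()
     for feat in sorted(WANTED):
         print(f"{feat}: {'yes' if feat in found else 'MISSING'}")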

Memory

8 GB, ideally ECC, and only that as an option; needlessly splitting the supply across multiple variants for small cost savings to the consumer isn't worth it in my opinion.

Form Factor

Mini-ITX form factor, with ATX power input.

The PMIC reset line should be held until the ATX power-good (PWR_OK) line indicates power is ready, to prevent boot failures.

I/O

  • 1× PCIe 3.0 ×2 open-ended slot
    • for M.2 adapter card or SATA controller card
  • 1× PCIe 2.1 ×1 open-ended slot
    • for network controller card
  • on-board USB-to-serial converter
    • USB Type B (the large one) connector, so it's clear it's a device role
    • converter powered from USB power, not board power
    • makes it easier for people to use serial
  • 2× SATA III
  • 1× Gigabit Ethernet
  • 2× standard 4-pin 12V PWM fan header, see the Noctua PWM fan white paper
    • use actual level-shifted PWM pins for the PWM signal, not just a single GPIO toggled full on/full off
    • level-shift the tacho signal to GPIO pins so we can use it in Linux to measure actual RPM
    • alternatively, don't use the SoC PWM at all and use an I2C fan controller chip like the EMC2302 (a sketch of driving the fans from Linux either way follows after this list)
  • 1× micro-SD card slot
  • 1× eMMC module connector
  • 1× HDMI
  • 2× USB 2.0 Host
    • for keyboard and mouse
  • 1× header for power and reset button
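
Whether the fans end up on the SoC PWM with a GPIO tacho (pwm-fan driver) or on an EMC2302 (emc2305 driver), Linux exposes them through the standard hwmon sysfs interface, so fan control from userspace looks the same either way. A minimal sketch, assuming a kernel with the relevant driver bound; the attribute names are the generic hwmon ones, not anything board-specific:

 #!/usr/bin/env python3
 """Read fan RPM and set fan duty through the generic Linux hwmon sysfs ABI.
 Works the same for pwm-fan (SoC PWM + GPIO tacho) and I2C fan controllers
 such as the EMC2302 (emc2305 driver)."""
 from pathlib import Path
 
 def find_fan_hwmon() -> Path:
     """Return the first hwmon device that exposes a fan tacho reading."""
     for dev in Path("/sys/class/hwmon").glob("hwmon*"):
         if (dev / "fan1_input").exists():
             return dev
     raise RuntimeError("no hwmon device with a fan1_input attribute found")
 
 def read_rpm(dev: Path) -> int:
     return int((dev / "fan1_input").read_text())
 
 def set_duty(dev: Path, duty: int) -> None:
     """duty is 0-255 per the hwmon ABI; needs root, and manual mode if supported."""
     enable = dev / "pwm1_enable"
     if enable.exists():
         enable.write_text("1")  # 1 = manual PWM control
     (dev / "pwm1").write_text(str(max(0, min(255, duty))))
 
 if __name__ == "__main__":
     dev = find_fan_hwmon()
     print(f"{dev.name}: fan1 spinning at {read_rpm(dev)} RPM")
     set_duty(dev, 128)  # roughly 50% duty; requires root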

Misc

  • SPI flash on board
  • Jumper to disable SPI flash
  • CR2032 holder for RTC battery
  • CE/FCC certification
  • <$200 price point

Software

Hire someone (jmcneil?) to make an EDKII port (with memory training code for the on-board memory) that can reside on SPI flash and boot kernels from eMMC, SD or SATA.

Have a working Debian image and a working Ubuntu image at launch (should be easy enough with EDKII, provided SoC support is in those distros' kernels by then).

Example Builds

Here are some example configurations the board could be used for.

Many Disks

In this configuration, the PCIe 3.0 x2 slot is used up by a 5× SATA controller card. This gives us a total of 7× SATA. The PCIe 2.1 x1 slot is used by a network card.

  • 1× Mini-ITX compatible case with enough 3.5" slots, e.g. Fractal Node 804 (~$100) with 8 slots for 3.5" drives
  • 1× generic PCIe 3.0 to 5× SATA-III expansion card, JMicron JMB585 based (~$60)
  • 5× 3.5" hard disk drive, e.g. 18TB SeaGate Exos X18 (~$310 a piece, $1550 total)
  • 2× 2.5" SATA SSD drive (for RAID1'd read/write cache), e.g. Crucial MX500 1TB (~$85 a piece, $170 total)
    • used through either bcache or lvmcache (see the sketch at the end of this page)
    • ideally connected to the on-board SATA ports for best performance
  • 1× RTL8125B based 2.5G Ethernet card (~$40)
  • 7× SATA cables (~$5 a piece, $35 total)
  • 1× Power Supply, e.g. SeaSonic Prime Fanless PX 500W (~$160)

Total cost (approximately): $2115 (without board), of which $1720 is storage and can easily be downgraded to a cheaper option. A server with approximately equal amounts of storage at Hetzner would cost ~$192 per month, so it takes roughly a year until the cost is fully amortised, assuming the CPU power/RAM of an x86 storage server isn't needed. A Synology NAS (DS1821+) would cost >$2700 once its disk bays are populated the same way.

Btrfs calculator says 45TB of usable space if we run in RAID10.[1] Assuming a board price of $200, we get a price of about $51 per terabyte of redundant cached storage. If we go for all HDDs instead, we get 63TB of usable space instead, at $44 per terabyte of redundant (non-cached) storage.
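
A quick back-of-the-envelope check of those numbers, using the estimated prices from the parts list above and the assumed (not real) $200 board price:

 #!/usr/bin/env python3
 """Re-derive the cost figures for the "Many Disks" build from the estimated
 prices above. Nothing here is authoritative, it just shows where the
 ~$51/TB and ~$44/TB numbers come from."""
 
 BOARD = 200  # assumed target price
 PARTS = {
     "Fractal Node 804 case": 100,
     "JMB585 5x SATA card": 60,
     "5x 18TB Exos X18": 5 * 310,
     "2x 1TB MX500 cache SSDs": 2 * 85,
     "RTL8125B 2.5G NIC": 40,
     "7x SATA cables": 35,
     "Prime Fanless PX 500W PSU": 160,
 }
 
 total = BOARD + sum(PARTS.values())
 print(f"total incl. board: ${total}")                        # $2315
 
 hetzner_monthly = 192  # comparable storage server rental
 print(f"months to amortise: {total / hetzner_monthly:.1f}")  # ~12 months
 
 # 5 HDDs in btrfs RAID10 with SSD cache: ~45 TB usable per the calculator.
 print(f"$/TB, cached RAID10: {total / 45:.0f}")              # ~$51
 
 # All-HDD variant: swap the cache SSDs for two more 18 TB drives (63 TB usable).
 all_hdd = total - 2 * 85 + 2 * 310
 print(f"$/TB, all-HDD RAID10: {all_hdd / 63:.0f}")           # ~$44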

Few Disks, But Fast Read Cache

In this configuration, the PCIe 3.0 x2 slot is used for a fast NVMe read cache SSD, and the two native SATA ports are used for a RAID1 configuration. The cache drive is not redundant, so it's used as read cache only.

  • 1× literally any compatible case with two 3.5" slots (<$100)
  • 1× power supply, e.g. be quiet! Pure Power 11 (400W) (~$60)
  • 1× PCIe 3.0 M.2 NVMe SSD for read cache, e.g. Kingston NV1 1TB (~$80)
  • 1× PCIe to M.2 adapter ($6 on the PINE64 store)
  • 2× 3.5" hard disk drive, e.g. 18TB SeaGate Exos X18 (~$310 a piece, $620 total)
  • 1× RTL8125B based 2.5G Ethernet card (~$40)
  • 2× SATA cables (~$5 a piece, $10 total)

Total cost (approximately): $916 (without board), of which $700 is storage. This gives us 18 TB of usable storage that can be accessed over 2.5G networking (plus the 1GigE native on the board) with a very fast read cache. Assuming $200 for the board, we get $62 per terabyte of redundant, fast read-cached storage. Synology has nothing comparable; their 2-bay option (DS720+) is ~$440 without drives and has much less RAM (2 GB), less networking, and no cache drive.
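
To illustrate how the non-redundant cache drive could be wired up in software, here is a hedged lvmcache sketch: a RAID1 logical volume across the two HDDs on the native SATA ports, with the NVMe drive attached as a writethrough cache pool (safe to lose, so it effectively acts as a read cache). Device names, sizes and the volume group name are illustrative placeholders; bcache would work just as well, and the same approach with --cachemode writeback and a mirrored cache pool fits the "Many Disks" build above.

 #!/usr/bin/env python3
 """Hedged sketch: mirrored data LV on the two SATA HDDs, NVMe SSD as a
 writethrough lvmcache pool. Device names, sizes and the VG name are
 illustrative placeholders; adjust to the actual hardware."""
 import subprocess
 
 HDDS = ["/dev/sda", "/dev/sdb"]  # the two on-board SATA ports
 NVME = "/dev/nvme0n1"            # SSD in the PCIe 3.0 x2 slot via the M.2 adapter
 VG = "nas"                       # placeholder volume group name
 
 def lvm(*args: str) -> None:
     print("+", " ".join(args))
     subprocess.run(args, check=True)
 
 if __name__ == "__main__":
     lvm("pvcreate", *HDDS, NVME)
     lvm("vgcreate", VG, *HDDS, NVME)
     # Mirrored (RAID1) data LV across the two HDDs.
     lvm("lvcreate", "--type", "raid1", "-m", "1", "-L", "16T",
         "-n", "data", VG, *HDDS)
     # Cache pool living only on the NVMe drive.
     lvm("lvcreate", "--type", "cache-pool", "-L", "800G",
         "-n", "cache", VG, NVME)
     # Attach it in writethrough mode so losing the SSD loses no data.
     lvm("lvconvert", "--type", "cache", "--cachepool", f"{VG}/cache",
         "--cachemode", "writethrough", "-y", f"{VG}/data")
     # The cached LV then gets a filesystem and is mounted as usual.
     lvm("mkfs.ext4", f"/dev/{VG}/data")

If the SSD ever needs to come out, lvconvert --uncache detaches the cache pool; since it runs in writethrough mode, no data lives only on the SSD.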