User:CounterPillow/NAS-Board

From PINE64

Latest revision as of 13:40, 21 October 2022

Note: This is not about an official PINE64 product, whether in development or not. It is merely a proposal by one community user.

This page details my ideal small NAS, based on things the SoC could actually do.

Consider this to only matter once Rockchip finally releases the firmware sources.

Justification

Why Not Just Use An SBC?

The SBCs PINE64 offers currently don't have:

  • PCIe 3.0 for full NVMe bandwidth
  • 2× SATA
  • standard form factor that fits in PC/NAS cases
  • ATX power input

They do have USB 3, which is not needed on a NAS and can be freed up for a second SATA connector. USB 3.0 functionality can still be retained, as the RK3568 has USB 3.0 on the OTG port, which can be set to host mode.

With the suggested specs, you can easily build a 2×HDD 1×NVMe SSD NAS that uses the SSD for cache.

Why Not Just Use An Off-The-Shelf NAS?

Either bad value (pre-built NAS barebones from e.g. QNAP or Synology) with closed software, or a setup that is far too big and expensive (x86 boards with Xeons or Ryzen for ECC).

Most people just need something that can do RAID1/10 with SATA, and don't need a lot of CPU power.

Suggested Specs

SoC

RK3568B2, because it has:

  • PCIe 3.0
  • Another serdes
  • ECC support

RK3566 would NOT make for a good NAS board compared to just using an SBC and should not be used.

As a reminder, RK356x has 4× Cortex-A55 (in-order) with crypto and NEON SIMD extensions.

Memory

8 GB, ideally ECC, and only that as an option; needlessly splitting the supply for small cost savings to the consumer isn't worth it in my opinion.

Form Factor

Mini-ITX form factor, with ATX power input.

PMIC reset line held until the ATX power-good (PWR_OK) line indicates power is stable, to prevent boot failures.

I/O

  • 1× PCIe 3.0 ×2 open-ended slot
    • for M.2 adapter card or SATA controller card
  • 1× PCIe 2.1 ×1 open-ended slot
    • for network controller card
  • on-board USB-to-serial converter
    • USB Type B (the large one) connector, so it's clear it's a device role
    • converter powered from USB power, not board power
    • makes it easier for people to use serial
  • 2× SATA III
  • 1× Gigabit Ethernet
  • 2× Standard 4-pin 12V PWM fan headers, see the Noctua PWM fan white paper (https://noctua.at/pub/media/wysiwyg/Noctua_PWM_specifications_white_paper.pdf)
    • use actual level-shifted PWM pins for PWM signal, not just a single GPIO full on/full off
    • level-shift the tacho signal to GPIO pins, we can use it in Linux to measure actual RPM
    • alternatively, don't use the SoC PWM at all and use an I²C fan driver chip like the EMC2302
  • 1× micro-SD card slot
  • 1× eMMC module connector
  • 1× HDMI
  • 2× USB 2.0 Host
    • for keyboard and mouse
  • 1× header for power and reset button
  • 1× USB 3.0 Host from the OTG port
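
A sketch of how the PWM and tacho wiring above would surface in Linux: the mainline pwm-fan driver registers an hwmon device named "pwmfan", with pwm1 (duty cycle, 0 to 255) and, when a tacho line is wired up, fan1_input (measured RPM). The HWMON_ROOT override is purely an assumption added for testability; on real hardware the functions walk /sys/class/hwmon.

```shell
fan_rpm() {  # print the tacho-derived RPM of the first pwm-fan device
  local d
  for d in "${HWMON_ROOT:-/sys/class/hwmon}"/hwmon*; do
    [ -f "$d/name" ] || continue
    if [ "$(cat "$d/name")" = "pwmfan" ]; then
      cat "$d/fan1_input"   # RPM measured from the level-shifted tacho line
      return 0
    fi
  done
  return 1                  # no pwm-fan device found
}

set_fan_duty() {  # $1 = duty cycle, 0 (off) to 255 (full speed)
  local d
  for d in "${HWMON_ROOT:-/sys/class/hwmon}"/hwmon*; do
    [ -f "$d/name" ] || continue
    [ "$(cat "$d/name")" = "pwmfan" ] && echo "$1" > "$d/pwm1"
  done
  return 0
}
```

An I²C controller like the EMC2302 should appear through the same hwmon ABI (via the kernel's emc2305 driver), so userspace tooling like this would stay the same either way.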

Misc

  • SPI flash on board
  • Jumper to disable SPI flash
  • CR2032 holder for RTC battery
  • CE/FCC certification
  • <$200 price point

Software

Hire someone (jmcneil?) to make an EDKII port (with memory training code for memory on the board) that can reside on SPI flash and boot kernels from eMMC, SD or SATA.

Have a working Debian image and a working Ubuntu image on launch (should be easy enough with EDKII, if SoC support is in the kernels of those distros by then).

Example Builds

Here are some example configurations the board could be used for. In all of these builds, we assume the kernel + initramfs is stored on a small eMMC module.

Many Disks

In this configuration, the PCIe 3.0 ×2 slot is occupied by a 5× SATA controller card, giving a total of 7× SATA ports. The PCIe 2.1 ×1 slot is used by a network card.

  • 1× Mini-ITX compatible case with enough 3.5" slots, e.g. Fractal Node 804 (~$100) with 8 slots for 3.5" drives
  • 1× generic PCIe 3.0 to 5× SATA-III expansion card, JMicron JMB585 based (~$60)
  • 5× 3.5" hard disk drive, e.g. 18TB SeaGate Exos X18 (~$310 a piece, $1550 total)
  • 2× 2.5" SATA SSD drive (for RAID1'd read/write cache), e.g. Crucial MX500 1TB (~$85 a piece, $170 total)
    • used through either bcache or lvm cache
    • best connected to on-board SATA for best performance
  • 1× RTL8125B-based 2.5G Ethernet card (~$40)
  • 7× SATA cables (~$5 a piece, $35 total)
  • 1× Power Supply, e.g. SeaSonic Prime Fanless PX 500W (~$160)
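
The bcache/lvmcache pairing above could be wired up like the following lvmcache "cachevol" sketch (syntax per lvmcache(7)). Everything here is hypothetical: the device names and the md layering are placeholders, the cache size is a guess, and the commands are destructive and need root on real hardware.

```shell
# Hypothetical devices, destroys data; sketch only:
#   /dev/md0: the HDDs (e.g. an md array of the 5 drives behind the JMB585)
#   /dev/md1: the two on-board SATA SSDs mirrored (the RAID1'd cache)

pvcreate /dev/md0 /dev/md1
vgcreate nas /dev/md0 /dev/md1

# Bulk data LV on the HDD array, cache LV on the SSD mirror (size to taste).
lvcreate -n data -l 100%PVS nas /dev/md0
lvcreate -n cache -L 800G nas /dev/md1

# Attach the SSD LV as a dm-cache "cachevol" in front of the data LV.
lvconvert --type cache --cachevol nas/cache nas/data

mkfs.ext4 /dev/nas/data
```

The bcache route would instead format the SSD mirror with make-bcache -C and the HDD array with make-bcache -B, then attach the two.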

Total cost (approximately): $2115 (without board), of which $1720 is storage that can easily be downgraded to a cheaper option. A server with approximately equal storage at Hetzner would cost ~$192 per month, so the build is fully amortised after roughly a year, assuming the CPU power/RAM of an x86 storage server isn't needed. A Synology NAS (DS1821+) would cost >$2700 once its disk bays are populated the same way.

The btrfs space calculator (https://carfax.org.uk/btrfs-usage/) gives 45 TB of usable space if we run RAID10. Assuming a board price of $200, that works out to about $51 per terabyte of redundant, cached storage. If we go for all HDDs instead, we get 63 TB of usable space, at $44 per terabyte of redundant (non-cached) storage.
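
The capacity and price-per-terabyte figures above are simple arithmetic; a quick shell sanity check (all dollar figures are the rough estimates from the parts list, plus the assumed $200 board, and integer division truncates the cents):

```shell
# Many Disks build plus an assumed $200 board price.
total=$((2115 + 200))

# btrfs RAID10 keeps two copies of everything, so usable space is half of raw.
usable=$((18 * 5 / 2))         # 5x 18 TB HDDs -> 45 TB usable
echo "cached build: ${usable} TB, \$$((total / usable))/TB"        # 45 TB, $51/TB

# All-HDD variant: drop the two cache SSDs ($170), add two more HDDs ($620).
all_hdd=$((total - 170 + 620))
all_usable=$((18 * 7 / 2))     # 7x 18 TB -> 63 TB usable
echo "all-HDD build: ${all_usable} TB, \$$((all_hdd / all_usable))/TB"  # 63 TB, $43/TB (~$44)

# Months until the one-off cost matches Hetzner's ~$192/month storage server.
echo "amortised vs Hetzner after $((total / 192)) months"          # 12 months
```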

Few Disks, But Fast Read Cache

In this configuration, the PCIe 3.0 ×2 slot is used for a fast NVMe read cache SSD, and the two native SATA ports are used for a RAID1 configuration. The cache drive is not redundant, so it's used as a read cache only.

  • 1× literally any compatible case with two 3.5" slots (<$100)
  • 1× power supply, e.g. be quiet! Pure Power 11 (400W) (~$60)
  • 1× PCIe 3.0 M.2 NVMe SSD for read cache, e.g. Kingston NV1 1TB (~$80)
  • 1× PCIe to M.2 adapter ($6 on the PINE64 store)
  • 2× 3.5" hard disk drive, e.g. 18TB SeaGate Exos X18 (~$310 a piece, $620 total)
  • 1× RTL8125B-based 2.5G Ethernet card (~$40)
  • 2× SATA cables (~$5 a piece, $10 total)

Total cost (approximately): $916 (without board), of which $700 is storage. This gives us 18 TB of usable storage, accessible over 2.5G networking (plus the board's native 1GbE), with a very fast read cache. Assuming $200 for the board, we get $62 per terabyte of redundant, fast read-cached storage. Synology has nothing comparable; their 2-bay option (DS720+) is ~$440 without drives and has much less RAM (2 GB), less networking and no cache drive.

1U Mini Server

This configuration is very similar to the "Few Disks, But Fast Read Cache" build, but fits in a 1U rackmount case. This is great for situations where you need storage on the edge in a small 19" rack (e.g. surveillance camera recordings, local caches/mirrors, ...), or even for housing your server in a shared rack.

  • 1× SuperMicro SuperChassis 512L-200B (https://www.supermicro.com/en/products/chassis/1U/512/SC512L-200B) (~$135)
    • Modify the I/O shield, e.g. replace it with a 3D-printed one
  • 1× PCIe 3.0 M.2 NVMe SSD for read cache, e.g. Kingston NV1 1TB (~$80)
  • 1× PCIe to M.2 adapter ($6 on the PINE64 store)
  • 1× PCIe 90 degree riser card or riser cable for M.2 adapter (~$20?)
  • 2× 3.5" hard disk drive, e.g. 18TB SeaGate Exos X18 (~$310 a piece, $620 total)

Total cost (approximately): $861 (without board), of which $700 is storage. This gives us fast random access thanks to the read cache, combined with 18 TB of redundant storage, accessible over the SoC's built-in Gigabit Ethernet controller.