The SOPine Clusterboard offers a way to set up a compact, low-power cluster for testing. It can hold up to 7 SOPine modules.
Although there is a PINE_A64-LTS/SOPine page already, this page will describe both the Clusterboard and the SOPine modules.
There is a user guide on the forum, but sadly its images have disappeared.
While the Clusterboard is an open-source hardware project, please note that this does not mean the project is "OSH" compliant.
These are the specifications as given in the project introduction (lightly spellchecked):
- Standard mITX Form-Factor (167mm x 170mm)
- Built-In Unmanaged Gigabit Ethernet Switch - RTL8370N
- 7x RTL8211E Gigabit Ethernet ports, connected to the switch
- eMMC module Slot
- 7x USB 2.0 (one for each module)
- GPIO pins exposed for each module including UART
- NIC LEDs for each SOPine Module
- 2x 1.5V “AA” size Battery Holder for Real Time Clock Port (RTC)
- +5V 15A power supply with 6.3mm OD/3.0mm ID barrel type DC Jack
- ATX Power Supply Header
|A||6.3mm OD/3.0mm ID barrel type DC Jack for +5V 15A 75W power supply|
|B||Gigabit Ethernet Port|
|D||2x 1.5V “AA” size Battery Holder for Real Time Clock Port (RTC)|
|E||eMMC connector (only for 1st module)|
|F||SOPine Module slot|
|G||USB2.0 USB-A plug|
|H||MicroUSB-B USB2.0 plug|
|J||20-pin expansion connector|
|K||Lithium Battery 3-pin JST connector|
|L||RTL8211E Ethernet Port, with 2 status LEDs|
|M||Unmanaged Gigabit Ethernet Switch - RTL8370N|
|N||ATX Power Connector|
|O||ATX PS_ON 2-pin JST connector|
|P||HDD 5V Power (optional)|
|Q||Resistor (optional, for use with the alternative powering options)|
Parts F to L are duplicated for all 7 modules.
20 Pin Connector
|B||2GB DDR3 RAM|
|E||mSD Card Slot|
|F (not annotated yet)||Power LED|
- To operate this board you will need a power supply; Pine advises a "5V 15A power supply with 6.3mm OD/3.0mm ID barrel type DC Jack", which is also available in the store (EU/US versions). There are other ways to power this board, but they are not described here yet.
- The board works best when it is protected by a (mITX) case and has some airflow provided by a fan.
- Each SOPine module can benefit from cooling, both from a case fan and from heatsinks on the individual modules; at least the A64 could use some cooling.
- The first slot can take an eMMC module, available in the store in 16GB/32GB/64GB/128GB sizes. The modules can also be used as a USB stick with a USB adapter. (The eMMC is also readable with the Hardkernel eMMC-to-microSD converter.)
- 2x AA batteries, to allow the SOPine nodes to retain the RTC (Real Time Clock) time and date information when the power is disconnected.
To install this cluster it is important to know which module has which IP address, so you can be sure you connect to the right board, especially the module that has access to the eMMC.
You can plug in each module individually and give it a separate hostname; after that is taken care of, you will know which module is used for what. It is also possible to manually edit each image's hosts/hostname files before first boot.
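One way to keep the seven nodes straight is to pre-generate matching hosts entries for all of them. A minimal sketch, assuming static addresses 192.168.1.51-57 and a "sopine" hostname prefix (both are assumptions; adjust to your own network plan):

```shell
# Sketch: print /etc/hosts entries for the 7 SOPine slots.
# The 192.168.1.5x range and the "sopine" prefix are assumptions.
gen_hosts() {
    i=1
    while [ "$i" -le 7 ]; do
        printf '192.168.1.%d\tsopine%d\n' $((50 + i)) "$i"
        i=$((i + 1))
    done
}
gen_hosts
```

Appending these lines to each image's /etc/hosts before first boot lets you address the nodes by name regardless of what DHCP hands out.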
|The board has no hotplug functionality, so make sure you only plug/unplug the modules while the power is disconnected from the clusterboard.|
|As an unmanaged switch is used, there is no VLAN support.|
To use the serial console during boot, connect the pins to UART0 on the GPIO header and connect at baud 115200:
- Pin 6: GND
- Pin 7: RXD
- Pin 8: TXD
The pinouts are available in the forum.
|Do not connect the GND pin until the power is on, as the serial connection can provide power and prevent the board from booting|
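With a 3.3V USB-to-serial adapter wired to those pins, any terminal program at 115200 baud will work. A sketch using picocom; the /dev/ttyUSB0 device name is an assumption, so check dmesg after plugging in the adapter:

```shell
# Open the SOPine serial console at 115200 baud, 8N1 (picocom's default).
# /dev/ttyUSB0 is an assumption; your adapter may enumerate differently.
picocom -b 115200 /dev/ttyUSB0
# Alternative with GNU screen:
#   screen /dev/ttyUSB0 115200
```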
To get the cluster running, start off with a basic Armbian SOPine install on the first module, or directly on all the modules. Armbian offers Debian- and Ubuntu-based images for download.
There is an issue recognizing the network that requires a change to the base image, described here, and a PXE issue; if you have a good description, please add it here. The network issue should have been resolved in Armbian builds after December 2020 - as described here - but confirmation from an affected user is needed.
There are a number of possible basic installation methods.
- Full install on each module's mSD card.
- eMMC install on the first module.
- PXE boot for all modules, from the first module, or an external host.
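For the per-module mSD route, writing the Armbian image is a standard dd procedure. A sketch, where the image file name and /dev/sdX are placeholders; always verify the device with lsblk first, as dd will overwrite whatever it is pointed at:

```shell
# Write an Armbian SOPine image to a microSD card.
# IMAGE and /dev/sdX are placeholders, not real names from this page.
IMAGE=Armbian_sopine.img.xz
xzcat "$IMAGE" | sudo dd of=/dev/sdX bs=4M conv=fsync status=progress
sync
```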
Frequently asked questions
Q: Are the individual MAC addresses linked to the NIC or to the module? Probably the NIC, which would make it easy to keep a list for future reuse, but this is unconfirmed. The RTL8211E is actually not a NIC but only the PHY (physical-layer transceiver), which suggests the NIC is part of the SoC and the MAC address belongs to the module. Anyone can test this by swapping two modules and checking whether the effect is visible on the interface LEDs.
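To check this empirically, record each interface's MAC address before and after swapping two modules. A minimal sketch that lists every interface's address from sysfs on a Linux node:

```shell
# Print "interface<TAB>MAC" for every network interface the kernel knows.
list_macs() {
    for dev in /sys/class/net/*; do
        [ -e "$dev/address" ] || continue
        printf '%s\t%s\n' "$(basename "$dev")" "$(cat "$dev/address")"
    done
}
list_macs
```

If the addresses follow the modules after a swap, the MAC belongs to the module rather than to the slot.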
Schematics and other
- Clusterboard version 2.2 Schematic Capture source file
- Clusterboard version 2.2 Schematic Capture PDF file
- Clusterboard version 2.2 PCB Job source file
- Clusterboard version 2.2 PCB Gerber file
- Clusterboard version 2.2 PCB Layout PDF file
- Clusterboard 20pins header definition
- Clusterboard 3D drawing in Fusion360
- Clusterboard PDF drawing
The current version is 2.3; there was at least a 2.2 version. If there is more info, please add it here.
- https://www.pine64.org/clusterboard/ Clusterboard Introduction
- https://www.pine64.org/sopine/ SOPine Introduction
- https://pine64.com/product/clusterboard-with-7-sopine-compute-module-slots/ Store page for Clusterboard
- https://pine64.com/product/sopine-a64-compute-module/ Store page for SOPine module
- https://www.pine64.org/2020/02/03/fosdem-2020-and-hardware-announcements/ Mention of "Clusterboard with 4 SOEdge and 3 SOPine modules"
- https://www.pine64.org/2019/08/05/august-update-london-meetup-pinetab-news-soedge-and-more/ SOEdge Introduction