MilliwaysStack

We want to run an OpenStack experiment

The grander idea

We want to try out an installation of OpenStack to give people around Milliways experience with running it, and with running things on it.

From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.

MVP

The MVP would be:

  • Kubernetes / docker
  • object storage
  • file systems
  • Networking
  • Virtual machines
  • Firewalling
  • Databases - MariaDB / PostgreSQL
  • Something something Redis, I guess
  • container registry

e-MVP

The extended MVP would be:

  • functional Monitoring & alerting
  • autoscaling
  • integration into the Milliways identity & access management (Authentik)
  • logging & alerting

The software stack explained

OpenStack is a cloud framework that offers services comparable to AWS / Azure / GCP.

Most documentation is available for Ubuntu & Red Hat. In the longer term an installation under NixOS might be feasible.
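
For orientation, the Ubuntu route in the official install guide mostly boils down to pulling the services from the package archive. A minimal sketch, assuming an Ubuntu controller following the manual install guide (not a definitive recipe):

  # client tooling for talking to the APIs
  sudo apt install python3-openstackclient
  # identity service (Keystone) is the first OpenStack service the guide sets up;
  # the other services (Glance, Nova, Neutron, ...) follow the same pattern
  sudo apt install keystone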

Asset List

Rack

  • 47U
  • 950mm external depth
    • 915mm internal depth

Switches

  • 2 x Dell PowerConnect 7048R-RA
  • 1 x Cisco 3560e

Servers

  • 1 Dell PowerEdge R710 server as storage
    • 6 x 3,5" bays
      • 4 x 3,5" drive sleds/brackts
        • 6 x 3,5" drive sleds/brackets
      • 2 x 3,5" drive blanks
    • Drives
      • We have more drives than bays, but not enough drives for a nice or ideal configuration. As such, the Dell storage situation is likely temporary until the RMA'd Seagate drives return and we can figure out whether to add more 12TB or 10TB drives.
        • 2* Seagate Exos X16 12TB
          • Passes SMART short test
          • Fails SMART long test
          • RMA'd to Seagate
        • 1* Seagate Exos X16 10TB
          • Passes SMART short test
          • Fails SMART long test
          • RMA'd to Seagate
        • 4* WD Red 4TB
        • 4* WD Green 3TB
    • no rails
  • 10 x HPE ProLiant DL380 Gen 8
    • 2 x E5-2620 v3 2,4GHz
    • 384GB RAM
    • iLO4
      • It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as an activation key for the iLO Advanced license?
    • without hard drives, but has 2,5" bays
      • no drive sleds/brackets available, only blanks
    • 9 x slide rails

Shopping List

It's ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don't have our generic basics in order. While we'd prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!
  • Generic Basics
    • PDU
      • Temporary: 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlets.
      • Perfect: managed rack-mountable PDU with CEE red 16A/20A 400V input to C13/C14 + C19/C20 outlets.
      • Alternatively: a "normal" server rack PDU (still strongly prefer managed) + a 16A/20A 400V -> 16A 230V transformer.
    • Network Cables
      • [Color]
        • [Type],[Amount],[Length]
    • Power Cables
      • [Type],[Amount],[Length]
    • Screws, Nuts, Bolts
      • Assorted M2,M2.5,M3 Screws
    • PCI Risers
      • Single NVMe adapters
  • Dell
    • 2* Drive sleds
    • New RAID Card that supports passthrough
    • 2* SFF-8087 -> SFF-8087 Mini SAS Cable
    • Drives
      • 12T ?
  • HP1
    • 1* PCI riser to 4*NVMe adapter
    • 1* 1TB NVMe
  • HP2
    • 1* PCI riser to 4*NVMe adapter
    • 1* 1TB NVMe

Documentation

NB: this is quick 'n' dirty as I go along.
In the short-term future I'd much rather replace this ad hoc documentation with something like NetBox.

Network

  • Supernet 10.42.0.0/16
    • Vlan 42
      • Interconnect
      • 10.42.0.0/30
        • Gateway 10.42.0.1
        • Milliways Core 10.42.0.2
    • Vlan 5
      • Mgmt \ OOB
      • 10.42.1.0/24
        • Milliways Core 10.42.1.1
        • Dell iDRAC 10.42.1.5
        • Dell RAID Controller 10.42.1.6
        • HP 1 iLO 10.42.1.7
    • Vlan 10
      • Prod
      • 10.42.10.0/24
        • Milliways Core 10.42.10.1
        • Dell 10.42.10.2
        • HP 1 10.42.10.3
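
A minimal sketch of how a node could come up on the Prod VLAN, assuming plain Linux with iproute2, a tagged uplink, and an interface called eno1 (the interface name is made up), using HP 1's addresses from the plan above:

  # create the VLAN 10 subinterface on the (assumed) uplink eno1
  ip link add link eno1 name eno1.10 type vlan id 10
  # HP 1's Prod address, default route via the Milliways Core
  ip addr add 10.42.10.3/24 dev eno1.10
  ip link set eno1.10 up
  ip route add default via 10.42.10.1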

Cable Mgmt

As there are some early ambitions to physically take this environment to events, perhaps we should seriously think about making our lives easier by color-coding connectivity now. While this will help us connect everything again at $event when we're sleep-deprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than just a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.

This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. Example: you'll notice 0 thought was put into fiber or not ;)

  • RED
    • Mgmt \ OOB
      • iDRACs, iLOs, RAID Cards, etc
  • GREEN
    • Storage Prod
      • At least the Dell, maybe HPs if we get into flash storage
  • BLUE
    • Compute Prod
      • Likely overwhelmingly the HPs
  • YELLOW
    • Interconnect
      • Connectivity to $outside, between switches, whatever

Naming Convention

We need names!
Can't keep calling these "Dell", "HP1", "HP2" etc.
Calling them by their S/Ns is also super boring and cumbersome; "Oh yea, we need to set up 5V6S064".
We could even opt for dual names. Internally, when logged in to $shell, the names could be functional, e.g. "milliways-control-node-1", so it's clear what you're doing; externally, the asset tag could be a Hitchhiker's Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes marketing way more relatable when asking for donations; "Ya, we're looking for extra storage for Überwald" sounds much better than "Ya, we're looking for extra storage for 5V6S064 or milliways-control-node-1".
Naturally, once we get NetBox going, we can map the asset names to the actual server name and potentially its serial so we don't get confused internally (if we want to use serials; there's something to be said for not using serials here).
  • Functional
    • milliways-control-node-1
    • milliways-control-node-2
    • control-node-1
    • compute-node-1
    • flash-storage-1
  • Marketing
    • HGttG characters
      • Arthur
      • Ford
      • Zaphod
    • Discworld locations
      • Ankh-Morpork
      • Überwald
      • Lancre

OpenStack

Following the installation guide's recommendation, passwords are created with openssl rand -hex 10 and saved in a password store.
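
For example, one entry per service secret; a sketch assuming pass(1) as the password store (the entry names here are made up):

  # 10 random bytes -> 20 hex characters, piped straight into the store
  openssl rand -hex 10 | pass insert -m openstack/keystone-admin
  openssl rand -hex 10 | pass insert -m openstack/mariadb-root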

Controller

communications