<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.milliways.info/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Obsidian</id>
	<title>Milliways - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.milliways.info/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Obsidian"/>
	<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Special:Contributions/Obsidian"/>
	<updated>2026-05-03T10:58:42Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.5</generator>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7102</id>
		<title>EMF2026Gear</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7102"/>
		<updated>2026-04-23T15:35:49Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Source Locally */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;br /&gt;
Take a look at what we brought to [[Weeze_Inventory|EMFCamp 2024]]&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
Gear arrives with [[User:obsidian| Obsidian]] on Tuesday 14th July. ETA 15:00.&lt;br /&gt;
Gear leaves with [[User:obsidian| Obsidian]] on Tuesday 21st July. ETD 13:00.&lt;br /&gt;
&lt;br /&gt;
==Transport==&lt;br /&gt;
Obsidian transports all gear from DE and UK storage in a single [https://www.vandeburgwal.nl/huur-tandemas-gesloten-3-meter.-extra-breed-en-hoog trailer].&lt;br /&gt;
Dimensions of the trailer are 306x154x180.&lt;br /&gt;
This is probably oversized for our gear, but the length is necessary for the dome struts, and nothing else makes as much sense value-wise from the various Dutch rental places.&lt;br /&gt;
&lt;br /&gt;
==UK Storage==&lt;br /&gt;
*Dome&lt;br /&gt;
*Power&lt;br /&gt;
*???&lt;br /&gt;
&lt;br /&gt;
==DE Storage==&lt;br /&gt;
*Rummary Tent&lt;br /&gt;
*Freeside Shelter&lt;br /&gt;
*[[Emfcamp 2026#Milliways_Requestable_Event_Shelters| Eventshelters]]&lt;br /&gt;
*Equipment for Dome&lt;br /&gt;
**Lights&lt;br /&gt;
***1 Clamp per light&lt;br /&gt;
***1 Safety wire loop per light&lt;br /&gt;
***1 Power cable per light&lt;br /&gt;
***1 DMX cable per light&lt;br /&gt;
**Speaker for chilldome (basic I/O nothing special)&lt;br /&gt;
**Camonets&lt;br /&gt;
***Tiewraps&lt;br /&gt;
**Bedsheets?&lt;br /&gt;
*Power&lt;br /&gt;
*Kitchen&lt;br /&gt;
**Oven&lt;br /&gt;
**Pots &amp;amp; Pans&lt;br /&gt;
**Cooking utensils?&lt;br /&gt;
**Induction Stove&lt;br /&gt;
**Coffeemaker&lt;br /&gt;
**???&lt;br /&gt;
&lt;br /&gt;
==Source Locally==&lt;br /&gt;
*Sink Table for kitchen&lt;br /&gt;
** Depends on water situation, sent reminder to EMF on 22/04&lt;br /&gt;
*Stage for Chill Dome&lt;br /&gt;
** Requested info from local suppliers on 23/04&lt;br /&gt;
*Speaker for chilldome (basic I/O nothing special)&lt;br /&gt;
** Requested info from local suppliers on 23/04&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Travelers&amp;diff=7101</id>
		<title>EMF2026Travelers</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Travelers&amp;diff=7101"/>
		<updated>2026-04-21T07:28:30Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;br /&gt;
&lt;br /&gt;
EU creatures not yet used to post-Brexit rules: remember to apply for an [https://www.gov.uk/eta ETA].&lt;br /&gt;
&lt;br /&gt;
==Has a Ticket==&lt;br /&gt;
&lt;br /&gt;
# boreq&lt;br /&gt;
# cqc&lt;br /&gt;
# obsidian&lt;br /&gt;
# OakKitten&lt;br /&gt;
# mc.fly&lt;br /&gt;
# cookingroffa&lt;br /&gt;
# Emerson&lt;br /&gt;
# Junglerot&lt;br /&gt;
# ChewyMoose&lt;br /&gt;
# dpk&lt;br /&gt;
# jobepunkt&lt;br /&gt;
# josie&lt;br /&gt;
# n0k0&lt;br /&gt;
# jelly&lt;br /&gt;
# foxboron&lt;br /&gt;
# zornem&lt;br /&gt;
# mara&lt;br /&gt;
# augeas&lt;br /&gt;
# meg&lt;br /&gt;
# Hackeriet/Norwegians &lt;br /&gt;
# Hackeriet/Norwegians +1&lt;br /&gt;
# Hackeriet/Norwegians +2&lt;br /&gt;
# Hackeriet/Norwegians +3&lt;br /&gt;
# Hackeriet/Norwegians +4&lt;br /&gt;
# Hackeriet/Norwegians +5&lt;br /&gt;
# Hackeriet/Norwegians +6&lt;br /&gt;
# Hackeriet/Norwegians +7&lt;br /&gt;
# Hackeriet/Norwegians +8&lt;br /&gt;
# wasamasa&lt;br /&gt;
# wasamasa +1&lt;br /&gt;
# lineargraph&lt;br /&gt;
# fellmoon&lt;br /&gt;
# fellmoon +1&lt;br /&gt;
# coderobe&lt;br /&gt;
# ax&lt;br /&gt;
# nyx&lt;br /&gt;
# Purple&lt;br /&gt;
# emy&lt;br /&gt;
# bloo&lt;br /&gt;
# leon&lt;br /&gt;
# merlos&lt;br /&gt;
# merlos +1&lt;br /&gt;
&lt;br /&gt;
==Wants a Ticket==&lt;br /&gt;
# w1bble&lt;br /&gt;
# josie/cqc +1&lt;br /&gt;
# josie/cqc +1&lt;br /&gt;
# punycode (hase)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7100</id>
		<title>EMF2026Gear</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7100"/>
		<updated>2026-04-21T07:26:43Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* DE Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;br /&gt;
Take a look at what we brought to [[Weeze_Inventory|EMFCamp 2024]]&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
Gear arrives with [[User:obsidian| Obsidian]] on Tuesday 14th July. ETA 15:00.&lt;br /&gt;
Gear leaves with [[User:obsidian| Obsidian]] on Tuesday 21st July. ETD 13:00.&lt;br /&gt;
&lt;br /&gt;
==Transport==&lt;br /&gt;
Obsidian transports all gear from DE and UK storage in a single [https://www.vandeburgwal.nl/huur-tandemas-gesloten-3-meter.-extra-breed-en-hoog trailer].&lt;br /&gt;
Dimensions of the trailer are 306x154x180.&lt;br /&gt;
This is probably oversized for our gear, but the length is necessary for the dome struts, and nothing else makes as much sense value-wise from the various Dutch rental places.&lt;br /&gt;
&lt;br /&gt;
==UK Storage==&lt;br /&gt;
*Dome&lt;br /&gt;
*Power&lt;br /&gt;
*???&lt;br /&gt;
&lt;br /&gt;
==DE Storage==&lt;br /&gt;
*Rummary Tent&lt;br /&gt;
*Freeside Shelter&lt;br /&gt;
*[[Emfcamp 2026#Milliways_Requestable_Event_Shelters| Eventshelters]]&lt;br /&gt;
*Equipment for Dome&lt;br /&gt;
**Lights&lt;br /&gt;
***1 Clamp per light&lt;br /&gt;
***1 Safety wire loop per light&lt;br /&gt;
***1 Power cable per light&lt;br /&gt;
***1 DMX cable per light&lt;br /&gt;
**Speaker for chilldome (basic I/O nothing special)&lt;br /&gt;
**Camonets&lt;br /&gt;
***Tiewraps&lt;br /&gt;
**Bedsheets?&lt;br /&gt;
*Power&lt;br /&gt;
*Kitchen&lt;br /&gt;
**Oven&lt;br /&gt;
**Pots &amp;amp; Pans&lt;br /&gt;
**Cooking utensils?&lt;br /&gt;
**Induction Stove&lt;br /&gt;
**Coffeemaker&lt;br /&gt;
**???&lt;br /&gt;
&lt;br /&gt;
==Source Locally==&lt;br /&gt;
*Sink Table for kitchen&lt;br /&gt;
*Stage for Chill Dome&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7099</id>
		<title>EMF2026Gear</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7099"/>
		<updated>2026-04-21T07:26:17Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;br /&gt;
Take a look at what we brought to [[Weeze_Inventory|EMFCamp 2024]]&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
Gear arrives with [[User:obsidian| Obsidian]] on Tuesday 14th July. ETA 15:00.&lt;br /&gt;
Gear leaves with [[User:obsidian| Obsidian]] on Tuesday 21st July. ETD 13:00.&lt;br /&gt;
&lt;br /&gt;
==Transport==&lt;br /&gt;
Obsidian transports all gear from DE and UK storage in a single [https://www.vandeburgwal.nl/huur-tandemas-gesloten-3-meter.-extra-breed-en-hoog trailer].&lt;br /&gt;
Dimensions of the trailer are 306x154x180.&lt;br /&gt;
This is probably oversized for our gear, but the length is necessary for the dome struts, and nothing else makes as much sense value-wise from the various Dutch rental places.&lt;br /&gt;
&lt;br /&gt;
==UK Storage==&lt;br /&gt;
*Dome&lt;br /&gt;
*Power&lt;br /&gt;
*???&lt;br /&gt;
&lt;br /&gt;
==DE Storage==&lt;br /&gt;
*Rummary Tent&lt;br /&gt;
*Freeside Shelter&lt;br /&gt;
*[[Emfcamp 2026#Milliways_Requestable_Event_Shelters| Eventshelters]]&lt;br /&gt;
*Dome&lt;br /&gt;
**Lights&lt;br /&gt;
***1 Clamp per light&lt;br /&gt;
***1 Safety wire loop per light&lt;br /&gt;
***1 Power cable per light&lt;br /&gt;
***1 DMX cable per light&lt;br /&gt;
**Speaker for chilldome (basic I/O nothing special)&lt;br /&gt;
**Camonets&lt;br /&gt;
***Tiewraps&lt;br /&gt;
**Bedsheets?&lt;br /&gt;
*Power&lt;br /&gt;
*Kitchen&lt;br /&gt;
**Oven&lt;br /&gt;
**Pots &amp;amp; Pans&lt;br /&gt;
**Cooking utensils?&lt;br /&gt;
**Induction Stove&lt;br /&gt;
**Coffeemaker&lt;br /&gt;
**???&lt;br /&gt;
&lt;br /&gt;
==Source Locally==&lt;br /&gt;
*Sink Table for kitchen&lt;br /&gt;
*Stage for Chill Dome&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7098</id>
		<title>Emfcamp 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7098"/>
		<updated>2026-04-21T07:14:11Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Meetings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Electromagnetic Field 2026 will be held on 16–19 July 2026 at Eastnor Castle Deer Park, Eastnor. &lt;br /&gt;
&lt;br /&gt;
Links: &lt;br /&gt;
* https://www.emfcamp.org/&lt;br /&gt;
* https://social.emfcamp.org/@emf&lt;br /&gt;
&lt;br /&gt;
==Recent News==&lt;br /&gt;
* Next meeting: Sunday 26th April.&lt;br /&gt;
&lt;br /&gt;
==Communication==&lt;br /&gt;
    Matrix: join #milliways-emfcamp:milliways.info (and of course the milliways space: https://matrix.to/#/#milliways-space:milliways.info)&lt;br /&gt;
&lt;br /&gt;
==Arrival and Departure==&lt;br /&gt;
&lt;br /&gt;
The site is open from &#039;&#039;&#039;Thursday 16th July 10:00 to Monday 20th July 12:00 (midday)&#039;&#039;&#039;. You may not be on site outside of this time period unless you are helping with Milliways buildup/teardown and/or camp buildup/teardown. &lt;br /&gt;
&lt;br /&gt;
If you want to be on site outside of that time period to help with Milliways buildup or teardown, you need to be cleared by Obsidian, who will explain how that works and tell you how early you can come or how late you can leave.&lt;br /&gt;
&lt;br /&gt;
If you want to come even earlier or leave even later to help with camp buildup/teardown, you need to be cleared by orga; consult [https://www.emfcamp.org/about/volunteering Volunteering] (you should sign up for the mailing list; when in doubt, contact [mailto:volunteer@emfcamp.org volunteer@emfcamp.org]).&lt;br /&gt;
&lt;br /&gt;
==Travelers==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please add yourself&#039;&#039;&#039; so that we have a better idea of what the situation is: [[EMF2026Travelers | Travelers to emfcamp 2026]].&lt;br /&gt;
&lt;br /&gt;
EU creatures not yet used to post-Brexit rules: remember to apply for an [https://www.gov.uk/eta ETA].&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Planned timeline subject to changes&lt;br /&gt;
|-&lt;br /&gt;
! Date !! 13:00 !! 20:00&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-14 (Day -1) || [[EMF2026Gear | Gear Arrives]] || Nothing&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-15 (Day 0) || Buildup || Buildup&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-16 (Day 1) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-17 (Day 2) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-18 (Day 3) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-19 (Day 4) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-20 (Day 5) || Teardown || Teardown&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-21 (Day 6) || [[EMF2026Gear | Gear Leaves]] || Nothing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Subassemblies==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! Name !! Description !! How to contact?&lt;br /&gt;
|-&lt;br /&gt;
| Freeside || Freeside is a loose conglomerate of hackers from around the globe. Cyberpunks, Stirner enthusiasts, beer drinkers, hedgehog admirers,  and victims of the programming language Go. || boreq/cqc/willscott etc&lt;br /&gt;
|-&lt;br /&gt;
| Ministry of Chaos || || yawnbox &lt;br /&gt;
|-&lt;br /&gt;
| Your Assembly here? || ??? || ???&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Milliways Requestable Event Shelters==&lt;br /&gt;
&lt;br /&gt;
Aka tents which you can request to appear for you at the campsite. They will be taken from storage and brought just for you, so you need to sign up before the event. Each tent fits up to two people; specify the number of people that will sleep in the particular tent so we know how many sleeping bags to grab. The tents come with everything you need, e.g. a sleeping pad and sleeping bags.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Tent number !! Names of the people entitled to pick the tent up !! Number of sleeping bags !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 1 || coderobe || 2 || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
Pads can be created under https://cpad.milliways.info/ (select &amp;quot;code&amp;quot;). Please link here as &amp;quot;view&amp;quot; when done and as &amp;quot;edit&amp;quot; in the relevant channels during the meeting. The meetings usually happen at &#039;&#039;&#039;2100 German time&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! When !! Where !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 2026-03-29 || [https://jitsi.milliways.info/milliways jitsi] || Kickoff Meeting || [https://cpad.milliways.info/code/#/2/code/view/zbUqSpcwUkWlG9t+E4yb6cOeGclQ7B1SKHVPoX+d+yM/ Meeting Minutes]&lt;br /&gt;
|-&lt;br /&gt;
| 2026-04-26 || [https://jitsi.milliways.info/milliways jitsi] || Monthly || [https://cpad.milliways.info/code/#/2/code/edit/vproN+FWPERkWIIr9y2SNlft/ Meeting Minutes]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Plans==&lt;br /&gt;
&lt;br /&gt;
To have fun.&lt;br /&gt;
&lt;br /&gt;
[[EMF2026Gear | Gear needed]]&lt;br /&gt;
&lt;br /&gt;
==Coin==&lt;br /&gt;
&lt;br /&gt;
Doesn&#039;t exist yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7094</id>
		<title>EMF2026Gear</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7094"/>
		<updated>2026-04-17T11:51:19Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* DE Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;br /&gt;
Take a look at what we brought to [[Weeze_Inventory|EMFCamp 2024]]&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
Gear arrives with [[User:obsidian| Obsidian]] on Tuesday 14th July. ETA 15:00.&lt;br /&gt;
Gear leaves with [[User:obsidian| Obsidian]] on Tuesday 21st July. ETD 13:00.&lt;br /&gt;
&lt;br /&gt;
==Transport==&lt;br /&gt;
Obsidian transports all gear from DE and UK storage in a single [https://www.vandeburgwal.nl/huur-tandemas-gesloten-3-meter.-extra-breed-en-hoog trailer].&lt;br /&gt;
Dimensions of the trailer are 306x154x180.&lt;br /&gt;
This is probably oversized for our gear, but the length is necessary for the dome struts, and nothing else makes as much sense value-wise from the various Dutch rental places.&lt;br /&gt;
&lt;br /&gt;
==UK Storage==&lt;br /&gt;
*Dome&lt;br /&gt;
*Power&lt;br /&gt;
*???&lt;br /&gt;
&lt;br /&gt;
==DE Storage==&lt;br /&gt;
*Rummary Tent&lt;br /&gt;
*Freeside Shelter&lt;br /&gt;
*[[Emfcamp 2026#Milliways_Requestable_Event_Shelters| Eventshelters]]&lt;br /&gt;
*Lights&lt;br /&gt;
**1 Clamp per light&lt;br /&gt;
**1 Safety wire loop per light&lt;br /&gt;
**1 Power cable per light&lt;br /&gt;
**1 DMX cable per light&lt;br /&gt;
*Speaker for chilldome (basic I/O nothing special)&lt;br /&gt;
*Camonets&lt;br /&gt;
**Tiewraps&lt;br /&gt;
*Bedsheets?&lt;br /&gt;
*Power&lt;br /&gt;
*Pots &amp;amp; Pans&lt;br /&gt;
*Cooking utensils?&lt;br /&gt;
*Induction Stove&lt;br /&gt;
*Coffeemaker&lt;br /&gt;
*???&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7069</id>
		<title>Emfcamp 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7069"/>
		<updated>2026-03-29T20:02:29Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Meetings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Electromagnetic Field 2026 will be held on 16–19 July 2026 at Eastnor Castle Deer Park, Eastnor. &lt;br /&gt;
&lt;br /&gt;
Links: &lt;br /&gt;
* https://www.emfcamp.org/&lt;br /&gt;
* https://social.emfcamp.org/@emf&lt;br /&gt;
&lt;br /&gt;
Tickets:&lt;br /&gt;
* &amp;lt;s&amp;gt;Monday 9th March, 20:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* &amp;lt;s&amp;gt;Sunday 22nd March, 15:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* Thursday 2nd April, 18:00 BST&lt;br /&gt;
&lt;br /&gt;
==Recent News==&lt;br /&gt;
* First meeting Sunday 29th.&lt;br /&gt;
* If you want a Milliways Tent add yourself in one of the sections below.&lt;br /&gt;
&lt;br /&gt;
==Communication==&lt;br /&gt;
    Matrix: join #milliways-emfcamp:milliways.info (and of course the milliways space: https://matrix.to/#/#milliways-space:milliways.info)&lt;br /&gt;
&lt;br /&gt;
==Travelers==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please add yourself&#039;&#039;&#039; so that we have a better idea of what the situation is: [[EMF2026Travelers | Travelers to emfcamp 2026]].&lt;br /&gt;
&lt;br /&gt;
EU creatures not yet used to post-Brexit rules: remember to apply for an [https://www.gov.uk/eta ETA].&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Planned timeline subject to changes&lt;br /&gt;
|-&lt;br /&gt;
! Date !! 13:00 !! 20:00&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-14 (Day -1) || [[EMF2026Gear | Gear Arrives]] || Nothing&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-15 (Day 0) || Buildup || Buildup&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-16 (Day 1) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-17 (Day 2) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-18 (Day 3) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-19 (Day 4) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-20 (Day 5) || Teardown || Teardown&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-21 (Day 6) || [[EMF2026Gear | Gear Leaves]] || Nothing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Subassemblies==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! Name !! Description !! How to contact?&lt;br /&gt;
|-&lt;br /&gt;
| Freeside || Freeside is a loose conglomerate of hackers from around the globe. Cyberpunks, Stirner enthusiasts, beer drinkers, hedgehog admirers,  and victims of the programming language Go. || boreq/cqc/willscott etc&lt;br /&gt;
|-&lt;br /&gt;
| Ministry of Chaos || || yawnbox &lt;br /&gt;
|-&lt;br /&gt;
| Your Assembly here? || ??? || ???&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Milliways Requestable Event Shelters==&lt;br /&gt;
&lt;br /&gt;
Aka tents which you can request to appear for you at the campsite. They will be taken from storage and brought just for you, so you need to sign up before the event. Each tent fits up to two people; specify the number of people that will sleep in the particular tent so we know how many sleeping bags to grab. The tents come with everything you need, e.g. a sleeping pad and sleeping bags.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Tent number !! Names of the people entitled to pick the tent up !! Number of sleeping bags !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 1 || coderobe || 2 || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
Pads can be created under https://cpad.milliways.info/ (select &amp;quot;code&amp;quot;). Please link here as &amp;quot;view&amp;quot; when done and as &amp;quot;edit&amp;quot; in the relevant channels during the meeting. The meetings usually happen at &#039;&#039;&#039;2100 German time&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! When !! Where !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 2026-03-29 || [https://jitsi.milliways.info/milliways jitsi] || Kickoff Meeting || [https://cpad.milliways.info/code/#/2/code/view/zbUqSpcwUkWlG9t+E4yb6cOeGclQ7B1SKHVPoX+d+yM/ Meeting Minutes]&lt;br /&gt;
|-&lt;br /&gt;
| 2026-04-26 || || Monthly || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Plans==&lt;br /&gt;
&lt;br /&gt;
To have fun.&lt;br /&gt;
&lt;br /&gt;
[[EMF2026Gear | Gear needed]]&lt;br /&gt;
&lt;br /&gt;
==Coin==&lt;br /&gt;
&lt;br /&gt;
Doesn&#039;t exist yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7047</id>
		<title>Emfcamp 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7047"/>
		<updated>2026-03-26T13:45:10Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Electromagnetic Field 2026 will be held on 16–19 July 2026 at Eastnor Castle Deer Park, Eastnor. &lt;br /&gt;
&lt;br /&gt;
Links: &lt;br /&gt;
* https://www.emfcamp.org/&lt;br /&gt;
* https://chaos.social/@emf@emfcamp.org&lt;br /&gt;
&lt;br /&gt;
Tickets:&lt;br /&gt;
* &amp;lt;s&amp;gt;Monday 9th March, 20:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* &amp;lt;s&amp;gt;Sunday 22nd March, 15:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* Thursday 2nd April, 18:00 BST&lt;br /&gt;
&lt;br /&gt;
==Recent News==&lt;br /&gt;
* First meeting Sunday 29th.&lt;br /&gt;
* If you want a Milliways Tent add yourself in one of the sections below.&lt;br /&gt;
&lt;br /&gt;
==Communication==&lt;br /&gt;
    Matrix: join #milliways-emfcamp:milliways.info (and of course the milliways space: https://matrix.to/#/#milliways-space:milliways.info)&lt;br /&gt;
&lt;br /&gt;
==Travelers==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please add yourself&#039;&#039;&#039; so that we have a better idea of what the situation is: [[EMF2026Travelers | Travelers to emfcamp 2026]].&lt;br /&gt;
&lt;br /&gt;
EU creatures not yet used to post-Brexit rules: remember to apply for an [https://www.gov.uk/eta ETA].&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Planned timeline subject to changes&lt;br /&gt;
|-&lt;br /&gt;
! Date !! 13:00 !! 20:00&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-14 (Day -1) || [[EMF2026Gear | Gear Arrives]] || Nothing&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-15 (Day 0) || Buildup || Buildup&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-16 (Day 1) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-17 (Day 2) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-18 (Day 3) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-19 (Day 4) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-20 (Day 5) || Teardown || Teardown&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-21 (Day 6) || [[EMF2026Gear | Gear Leaves]] || Nothing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Subassemblies==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! Name !! Description !! How to contact?&lt;br /&gt;
|-&lt;br /&gt;
| Freeside || Freeside is a loose conglomerate of hackers from around the globe. Cyberpunks, Stirner enthusiasts, beer drinkers, hedgehog admirers,  and victims of the programming language Go. || #freeside @ hackint&lt;br /&gt;
|-&lt;br /&gt;
| Your Assembly here? || ??? || ???&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Milliways Requestable Event Shelters==&lt;br /&gt;
&lt;br /&gt;
Aka tents which you can request to appear for you at the campsite. They will be taken from storage and brought just for you, so you need to sign up before the event. Each tent fits up to two people; specify the number of people that will sleep in the particular tent so we know how many sleeping bags to grab. The tents come with everything you need, e.g. a sleeping pad and sleeping bags.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Tent number !! Names of the people entitled to pick the tent up !! Number of sleeping bags !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 1 || coderobe || 2 || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
Pads can be created under https://cpad.milliways.info/ (select &amp;quot;code&amp;quot;). Please link here as &amp;quot;view&amp;quot; when done and as &amp;quot;edit&amp;quot; in the relevant channels during the meeting. The meetings usually happen at &#039;&#039;&#039;2100 German time&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! When !! Where !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 2026-03-29 || [https://jitsi.milliways.info/milliways jitsi] || Kickoff Meeting || [https://cpad.milliways.info/code/#/2/code/edit/dqAPcOpGPlisYNh7KlZjeqew/ Meeting Minutes]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Plans==&lt;br /&gt;
&lt;br /&gt;
To have fun.&lt;br /&gt;
&lt;br /&gt;
[[EMF2026Gear | Gear needed]]&lt;br /&gt;
&lt;br /&gt;
==Coin==&lt;br /&gt;
&lt;br /&gt;
Doesn&#039;t exist yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Travelers&amp;diff=7046</id>
		<title>EMF2026Travelers</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Travelers&amp;diff=7046"/>
		<updated>2026-03-26T13:44:55Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;br /&gt;
&lt;br /&gt;
EU creatures not yet used to post-Brexit rules: remember to apply for an [https://www.gov.uk/eta ETA].&lt;br /&gt;
&lt;br /&gt;
Add yourself if you have a ticket:&lt;br /&gt;
&lt;br /&gt;
# boreq&lt;br /&gt;
# cqc&lt;br /&gt;
# obsidian&lt;br /&gt;
# OakKitten&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Travelers&amp;diff=7045</id>
		<title>EMF2026Travelers</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Travelers&amp;diff=7045"/>
		<updated>2026-03-26T10:39:53Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;br /&gt;
&lt;br /&gt;
Add yourself if you have a ticket:&lt;br /&gt;
&lt;br /&gt;
# boreq&lt;br /&gt;
# cqc&lt;br /&gt;
# obsidian&lt;br /&gt;
# OakKitten&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7039</id>
		<title>Emfcamp 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7039"/>
		<updated>2026-03-24T12:10:10Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Electromagnetic Field 2026 will be held on 16–19 July 2026 at Eastnor Castle Deer Park, Eastnor. &lt;br /&gt;
&lt;br /&gt;
Links: &lt;br /&gt;
* https://www.emfcamp.org/&lt;br /&gt;
* https://chaos.social/@emf@emfcamp.org&lt;br /&gt;
&lt;br /&gt;
Tickets:&lt;br /&gt;
* &amp;lt;s&amp;gt;Monday 9th March, 20:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* &amp;lt;s&amp;gt;Sunday 22nd March, 15:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* Thursday 2nd April, 18:00 BST&lt;br /&gt;
&lt;br /&gt;
==Recent News==&lt;br /&gt;
* First meeting Sunday 29th.&lt;br /&gt;
* If you want a Milliways Tent add yourself in one of the sections below.&lt;br /&gt;
&lt;br /&gt;
==Communication==&lt;br /&gt;
    Matrix: join #milliways-emfcamp:milliways.info (and of course the milliways space: https://matrix.to/#/#milliways-space:milliways.info)&lt;br /&gt;
&lt;br /&gt;
==Travelers==&lt;br /&gt;
&lt;br /&gt;
[[EMF2026Travelers | Travelers to emfcamp 2026]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Planned timeline subject to changes&lt;br /&gt;
|-&lt;br /&gt;
! Date !! 13:00 !! 20:00&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-14 (Day -1) || [[EMF2026Gear | Gear Arrives]] || Nothing&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-15 (Day 0) || Buildup || Buildup&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-16 (Day 1) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-17 (Day 2) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-18 (Day 3) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-19 (Day 4) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-20 (Day 5) || Teardown || Teardown&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-21 (Day 6) || [[EMF2026Gear | Gear Leaves]] || Nothing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Subassemblies==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! Name !! Description !! How to contact?&lt;br /&gt;
|-&lt;br /&gt;
| Freeside || Freeside is a loose conglomerate of hackers from around the globe. Cyberpunks, Stirner enthusiasts, beer drinkers, hedgehog admirers,  and victims of the programming language Go. || #freeside @ hackint&lt;br /&gt;
|-&lt;br /&gt;
| Your Assembly here? || ??? || ???&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Milliways Requestable Event Shelters==&lt;br /&gt;
&lt;br /&gt;
Aka tents which you can request to appear for you at the campsite. They will be taken from storage and brought just for you, so you need to sign up before the event. Each tent fits up to two people; specify the number of people that will sleep in the particular tent so we know how many sleeping bags to grab. The tents come with everything you need, e.g. a sleeping pad and sleeping bags.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Tent number !! Names of the people entitled to pick the tent up !! Number of sleeping bags !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 1 || coderobe || 2 || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
Pads can be created at https://cpad.milliways.info/ (select &amp;quot;code&amp;quot;). Please link them here as &amp;quot;view&amp;quot; when done, and as &amp;quot;edit&amp;quot; in the relevant channels during the meeting. The meetings usually happen at &#039;&#039;&#039;2100 German time&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! When !! Where !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 2026-03-29 || [https://jitsi.milliways.info/milliways jitsi] || Kickoff Meeting || [https://cpad.milliways.info/code/#/2/code/edit/dqAPcOpGPlisYNh7KlZjeqew/ Meeting Minutes]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Plans==&lt;br /&gt;
&lt;br /&gt;
To have fun.&lt;br /&gt;
&lt;br /&gt;
[[EMF2026Gear | Gear needed]]&lt;br /&gt;
&lt;br /&gt;
==Coin==&lt;br /&gt;
&lt;br /&gt;
Doesn&#039;t exist yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7038</id>
		<title>Emfcamp 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7038"/>
		<updated>2026-03-24T12:08:00Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Meetings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Electromagnetic Field 2026 will be held on 16–19 July 2026 at Eastnor Castle Deer Park, Eastnor. &lt;br /&gt;
&lt;br /&gt;
Links: &lt;br /&gt;
* https://www.emfcamp.org/&lt;br /&gt;
* https://chaos.social/@emf@emfcamp.org&lt;br /&gt;
&lt;br /&gt;
Tickets:&lt;br /&gt;
* &amp;lt;s&amp;gt;Monday 9th March, 20:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* &amp;lt;s&amp;gt;Sunday 22nd March, 15:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* Thursday 2nd April, 18:00 BST&lt;br /&gt;
&lt;br /&gt;
==Recent News==&lt;br /&gt;
* First meeting Sunday 29th.&lt;br /&gt;
* If you want a Milliways Tent add yourself in one of the sections below.&lt;br /&gt;
&lt;br /&gt;
==Communication==&lt;br /&gt;
    Matrix: join #milliways-emfcamp:milliways.info (and of course the milliways space: https://matrix.to/#/#milliways-space:milliways.info)&lt;br /&gt;
&lt;br /&gt;
==Travelers==&lt;br /&gt;
&lt;br /&gt;
[[EMF2026Travelers | Travelers to emfcamp 2026]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Planned timeline subject to changes&lt;br /&gt;
|-&lt;br /&gt;
! Date !! 13:00 !! 20:00&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-14 (Day -1) || [[EMF2026Gear | Gear Arrives]] || Nothing&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-15 (Day 0) || Buildup || Buildup&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-16 (Day 1) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-17 (Day 2) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-18 (Day 3) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-19 (Day 4) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-20 (Day 5) || Teardown || Teardown&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-21 (Day 6) || [[EMF2026Gear | Gear Leaves]] || Nothing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Subassemblies==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! Name !! Description !! How to contact?&lt;br /&gt;
|-&lt;br /&gt;
| Freeside || Freeside is a loose conglomerate of hackers from around the globe. Cyberpunks, Stirner enthusiasts, beer drinkers, hedgehog admirers, and victims of the programming language Go. || #freeside @ hackint&lt;br /&gt;
|-&lt;br /&gt;
| Your Assembly here? || ??? || ???&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Milliways Requestable Event Shelters==&lt;br /&gt;
&lt;br /&gt;
Aka tents which you can request to have waiting for you at the campsite. They will be taken from storage and brought just for you, so you need to sign up before the event. Each tent fits up to two people; specify the number of people who will sleep in the particular tent so we know how many sleeping bags to grab. The tents come with everything you need, e.g. a sleeping pad and sleeping bags.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Tent number !! Names of the people entitled to pick the tent up !! Number of sleeping bags !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 1 || coderobe || 2 || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
Pads can be created at https://cpad.milliways.info/ (select &amp;quot;code&amp;quot;). Please link them here as &amp;quot;view&amp;quot; when done, and as &amp;quot;edit&amp;quot; in the relevant channels during the meeting. The meetings usually happen at &#039;&#039;&#039;2100 German time&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! When !! Where !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 2026-03-29 || [https://jitsi.milliways.info/milliways jitsi] || Kickoff Meeting || [https://cpad.milliways.info/code/#/2/code/view/zbUqSpcwUkWlG9t+E4yb6cOeGclQ7B1SKHVPoX+d+yM/ Meeting Minutes]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Plans==&lt;br /&gt;
&lt;br /&gt;
To have fun.&lt;br /&gt;
&lt;br /&gt;
[[EMF2026Gear | Gear needed]]&lt;br /&gt;
&lt;br /&gt;
==Coin==&lt;br /&gt;
&lt;br /&gt;
Doesn&#039;t exist yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7037</id>
		<title>Emfcamp 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7037"/>
		<updated>2026-03-24T09:46:50Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Subassemblies */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Electromagnetic Field 2026 will be held on 16–19 July 2026 at Eastnor Castle Deer Park, Eastnor. &lt;br /&gt;
&lt;br /&gt;
Links: &lt;br /&gt;
* https://www.emfcamp.org/&lt;br /&gt;
* https://chaos.social/@emf@emfcamp.org&lt;br /&gt;
&lt;br /&gt;
Tickets:&lt;br /&gt;
* &amp;lt;s&amp;gt;Monday 9th March, 20:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* &amp;lt;s&amp;gt;Sunday 22nd March, 15:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* Thursday 2nd April, 18:00 BST&lt;br /&gt;
&lt;br /&gt;
==Recent News==&lt;br /&gt;
* First meeting Sunday 29th.&lt;br /&gt;
* If you want a Milliways Tent add yourself in one of the sections below.&lt;br /&gt;
&lt;br /&gt;
==Communication==&lt;br /&gt;
    Matrix: join #milliways-emfcamp:milliways.info (and of course the milliways space: https://matrix.to/#/#milliways-space:milliways.info)&lt;br /&gt;
&lt;br /&gt;
==Travelers==&lt;br /&gt;
&lt;br /&gt;
[[EMF2026Travelers | Travelers to emfcamp 2026]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Planned timeline subject to changes&lt;br /&gt;
|-&lt;br /&gt;
! Date !! 13:00 !! 20:00&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-14 (Day -1) || [[EMF2026Gear | Gear Arrives]] || Nothing&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-15 (Day 0) || Buildup || Buildup&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-16 (Day 1) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-17 (Day 2) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-18 (Day 3) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-19 (Day 4) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-20 (Day 5) || Teardown || Teardown&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-21 (Day 6) || [[EMF2026Gear | Gear Leaves]] || Nothing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Subassemblies==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! Name !! Description !! How to contact?&lt;br /&gt;
|-&lt;br /&gt;
| Freeside || Freeside is a loose conglomerate of hackers from around the globe. Cyberpunks, Stirner enthusiasts, beer drinkers, hedgehog admirers, and victims of the programming language Go. || #freeside @ hackint&lt;br /&gt;
|-&lt;br /&gt;
| Your Assembly here? || ??? || ???&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Milliways Requestable Event Shelters==&lt;br /&gt;
&lt;br /&gt;
Aka tents which you can request to have waiting for you at the campsite. They will be taken from storage and brought just for you, so you need to sign up before the event. Each tent fits up to two people; specify the number of people who will sleep in the particular tent so we know how many sleeping bags to grab. The tents come with everything you need, e.g. a sleeping pad and sleeping bags.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Tent number !! Names of the people entitled to pick the tent up !! Number of sleeping bags !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 1 || coderobe || 2 || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
Pads can be created at https://cpad.milliways.info/ (select &amp;quot;code&amp;quot;). Please link them here as &amp;quot;view&amp;quot; when done, and as &amp;quot;edit&amp;quot; in the relevant channels during the meeting. The meetings usually happen at &#039;&#039;&#039;2100 German time&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! When !! Where !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 2026-03-29 || [https://jitsi.milliways.info/milliways jitsi] || Kickoff Meeting || [https://example.com Meeting Minutes Todo]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Plans==&lt;br /&gt;
&lt;br /&gt;
To have fun.&lt;br /&gt;
&lt;br /&gt;
[[EMF2026Gear | Gear needed]]&lt;br /&gt;
&lt;br /&gt;
==Coin==&lt;br /&gt;
&lt;br /&gt;
Doesn&#039;t exist yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Travelers&amp;diff=7036</id>
		<title>EMF2026Travelers</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Travelers&amp;diff=7036"/>
		<updated>2026-03-24T09:45:37Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add yourself if you have a ticket:&lt;br /&gt;
&lt;br /&gt;
# User1&lt;br /&gt;
# User2&lt;br /&gt;
# User3&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Travelers&amp;diff=7035</id>
		<title>EMF2026Travelers</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Travelers&amp;diff=7035"/>
		<updated>2026-03-24T09:45:23Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Travelers to emfcamp 2026&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add yourself if you have a ticket:&lt;br /&gt;
&lt;br /&gt;
# User1&lt;br /&gt;
# User2&lt;br /&gt;
# User3&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7027</id>
		<title>EMF2026Gear</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7027"/>
		<updated>2026-03-17T09:51:19Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Take a look at what we brought to [[Weeze_Inventory|EMFCamp 2024]]&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
Gear arrives with [[User:obsidian| Obsidian]] on Tuesday 14th July. ETA 15:00.&amp;lt;br /&amp;gt;&lt;br /&gt;
Gear leaves with [[User:obsidian| Obsidian]] on Tuesday 21st July, ETD 13:00.&lt;br /&gt;
&lt;br /&gt;
==Transport==&lt;br /&gt;
Obsidian transports all gear from DE and UK storage in a single [https://www.vandeburgwal.nl/huur-tandemas-gesloten-3-meter.-extra-breed-en-hoog trailer].&amp;lt;br /&amp;gt;&lt;br /&gt;
Dimensions of the trailer are 306x154x180 cm.&amp;lt;br /&amp;gt;&lt;br /&gt;
This is probably oversized for our gear, but the length is necessary for the dome struts, and nothing else from the various Dutch rental places makes as much sense value-wise.&lt;br /&gt;
&lt;br /&gt;
==UK Storage==&lt;br /&gt;
*Dome&lt;br /&gt;
*Power&lt;br /&gt;
*???&lt;br /&gt;
&lt;br /&gt;
==DE Storage==&lt;br /&gt;
*Rummary Tent&lt;br /&gt;
*Freeside Shelter&lt;br /&gt;
*[[Emfcamp 2026#Milliways_Requestable_Event_Shelters| Eventshelters]]&lt;br /&gt;
*Lights&lt;br /&gt;
**1 Clamp per light&lt;br /&gt;
**1 Safety wire loop per light&lt;br /&gt;
**1 Power cable per light&lt;br /&gt;
**1 DMX cable per light&lt;br /&gt;
*Power&lt;br /&gt;
*Pots &amp;amp; Pans&lt;br /&gt;
*Cooking utensils?&lt;br /&gt;
*Induction Stove&lt;br /&gt;
*Coffeemaker&lt;br /&gt;
*???&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7026</id>
		<title>EMF2026Gear</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7026"/>
		<updated>2026-03-15T14:58:53Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* DE Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;br /&gt;
Take a look at what we brought to [[Weeze_Inventory|EMFCamp 2024]]&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
Gear arrives with [[User:obsidian| Obsidian]] on Tuesday 14th July. ETA 15:00.&amp;lt;br /&amp;gt;&lt;br /&gt;
Gear leaves with [[User:obsidian| Obsidian]] on Tuesday 21st July, ETD 13:00.&lt;br /&gt;
&lt;br /&gt;
==Transport==&lt;br /&gt;
Obsidian transports all gear from DE and UK storage in a single [https://www.vandeburgwal.nl/huur-tandemas-gesloten-3-meter.-extra-breed-en-hoog trailer].&amp;lt;br /&amp;gt;&lt;br /&gt;
Dimensions of the trailer are 306x154x180 cm.&amp;lt;br /&amp;gt;&lt;br /&gt;
This is probably oversized for our gear, but the length is necessary for the dome struts, and nothing else from the various Dutch rental places makes as much sense value-wise.&lt;br /&gt;
&lt;br /&gt;
==UK Storage==&lt;br /&gt;
*Dome&lt;br /&gt;
*Power&lt;br /&gt;
*???&lt;br /&gt;
&lt;br /&gt;
==DE Storage==&lt;br /&gt;
*Rummary Tent&lt;br /&gt;
*Freeside Shelter&lt;br /&gt;
*[[Emfcamp 2026#Milliways_Requestable_Event_Shelters| Eventshelters]]&lt;br /&gt;
*Lights&lt;br /&gt;
**1 Clamp per light&lt;br /&gt;
**1 Safety wire loop per light&lt;br /&gt;
**1 Power cable per light&lt;br /&gt;
**1 DMX cable per light&lt;br /&gt;
*Power&lt;br /&gt;
*Pots &amp;amp; Pans&lt;br /&gt;
*Cooking utensils?&lt;br /&gt;
*Induction Stove&lt;br /&gt;
*Coffeemaker&lt;br /&gt;
*???&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7025</id>
		<title>EMF2026Gear</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7025"/>
		<updated>2026-03-15T14:57:20Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;br /&gt;
Take a look at what we brought to [[Weeze_Inventory|EMFCamp 2024]]&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
Gear arrives with [[User:obsidian| Obsidian]] on Tuesday 14th July. ETA 15:00.&amp;lt;br /&amp;gt;&lt;br /&gt;
Gear leaves with [[User:obsidian| Obsidian]] on Tuesday 21st July, ETD 13:00.&lt;br /&gt;
&lt;br /&gt;
==Transport==&lt;br /&gt;
Obsidian transports all gear from DE and UK storage in a single [https://www.vandeburgwal.nl/huur-tandemas-gesloten-3-meter.-extra-breed-en-hoog trailer].&amp;lt;br /&amp;gt;&lt;br /&gt;
Dimensions of the trailer are 306x154x180 cm.&amp;lt;br /&amp;gt;&lt;br /&gt;
This is probably oversized for our gear, but the length is necessary for the dome struts, and nothing else from the various Dutch rental places makes as much sense value-wise.&lt;br /&gt;
&lt;br /&gt;
==UK Storage==&lt;br /&gt;
*Dome&lt;br /&gt;
*Power&lt;br /&gt;
*???&lt;br /&gt;
&lt;br /&gt;
==DE Storage==&lt;br /&gt;
*Rummary Tent&lt;br /&gt;
*Freeside Shelter&lt;br /&gt;
*[[Emfcamp 2026#Milliways_Requestable_Event_Shelters| Eventshelters]]&lt;br /&gt;
*Lights&lt;br /&gt;
*Power&lt;br /&gt;
*Pots &amp;amp; Pans&lt;br /&gt;
*Induction Stove&lt;br /&gt;
*Coffeemaker&lt;br /&gt;
*???&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7024</id>
		<title>Emfcamp 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7024"/>
		<updated>2026-03-15T14:54:45Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Electromagnetic Field 2026 will be held on 16–19 July 2026 at Eastnor Castle Deer Park, Eastnor. &lt;br /&gt;
&lt;br /&gt;
Links: &lt;br /&gt;
* https://www.emfcamp.org/&lt;br /&gt;
* https://chaos.social/@emf@emfcamp.org&lt;br /&gt;
&lt;br /&gt;
Tickets:&lt;br /&gt;
* &amp;lt;s&amp;gt;Monday 9th March, 20:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* Sunday 22nd March, 15:00 UTC&lt;br /&gt;
* Thursday 2nd April, 18:00 BST&lt;br /&gt;
&lt;br /&gt;
==Recent News==&lt;br /&gt;
* There is no news.&lt;br /&gt;
&lt;br /&gt;
==Communication==&lt;br /&gt;
    Matrix: join #milliways-emfcamp:milliways.info (and of course the milliways space: https://matrix.to/#/#milliways-space:milliways.info)&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Planned timeline subject to changes&lt;br /&gt;
|-&lt;br /&gt;
! Date !! 13:00 !! 20:00&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-14 (Day -1) || [[EMF2026Gear | Gear Arrives]] || Nothing&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-15 (Day 0) || Buildup || Buildup&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-16 (Day 1) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-17 (Day 2) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-18 (Day 3) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-19 (Day 4) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-20 (Day 5) || Teardown || Teardown&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-21 (Day 6) || [[EMF2026Gear | Gear Leaves]] || Nothing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Subassemblies==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! Name !! Description !! How to contact?&lt;br /&gt;
|-&lt;br /&gt;
| Freeside ||  || #freeside @ hackint&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Milliways Requestable Event Shelters==&lt;br /&gt;
&lt;br /&gt;
Aka tents which you can request to have waiting for you at the campsite. They will be taken from storage and brought just for you, so you need to sign up before the event. Each tent fits up to two people; specify the number of people who will sleep in the particular tent so we know how many sleeping bags to grab. The tents come with everything you need, e.g. a sleeping pad and sleeping bags.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Tent number !! Names of the people entitled to pick the tent up !! Number of sleeping bags !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 1 || coderobe || 2 || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
Pads can be created at https://cpad.milliways.info/ (select &amp;quot;code&amp;quot;). Please link them here as &amp;quot;view&amp;quot; when done, and as &amp;quot;edit&amp;quot; in the relevant channels during the meeting. The meetings usually happen at 2100 German time.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! When !! Where !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 2026-??-?? || [https://jitsi.milliways.info/milliways jitsi] || Orga Meeting || [https://example.com Meeting Minutes]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Plans==&lt;br /&gt;
&lt;br /&gt;
To have fun.&lt;br /&gt;
&lt;br /&gt;
[[EMF2026Gear | Gear needed]]&lt;br /&gt;
&lt;br /&gt;
==Coin==&lt;br /&gt;
&lt;br /&gt;
Doesn&#039;t exist yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7023</id>
		<title>EMF2026Gear</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7023"/>
		<updated>2026-03-15T14:53:57Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
Gear arrives with [[User:obsidian| Obsidian]] on Tuesday 14th July. ETA 15:00.&amp;lt;br /&amp;gt;&lt;br /&gt;
Gear leaves with [[User:obsidian| Obsidian]] on Tuesday 21st July, ETD 13:00.&lt;br /&gt;
&lt;br /&gt;
==Transport==&lt;br /&gt;
Obsidian transports all gear from DE and UK storage in a single [https://www.vandeburgwal.nl/huur-tandemas-gesloten-3-meter.-extra-breed-en-hoog trailer].&amp;lt;br /&amp;gt;&lt;br /&gt;
Dimensions of the trailer are 306x154x180 cm.&amp;lt;br /&amp;gt;&lt;br /&gt;
This is probably oversized for our gear, but the length is necessary for the dome struts, and nothing else from the various Dutch rental places makes as much sense value-wise.&lt;br /&gt;
&lt;br /&gt;
==UK Storage==&lt;br /&gt;
*Dome&lt;br /&gt;
*Power&lt;br /&gt;
*???&lt;br /&gt;
&lt;br /&gt;
==DE Storage==&lt;br /&gt;
*Rummary Tent&lt;br /&gt;
*Freeside Shelter&lt;br /&gt;
*Lights&lt;br /&gt;
*Power&lt;br /&gt;
*Pots &amp;amp; Pans&lt;br /&gt;
*Induction Stove&lt;br /&gt;
*Coffeemaker&lt;br /&gt;
*???&lt;br /&gt;
*[[Emfcamp 2026#Milliways_Requestable_Event_Shelters| Eventshelters]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7022</id>
		<title>EMF2026Gear</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7022"/>
		<updated>2026-03-15T14:49:41Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
Gear arrives with [[User:obsidian| Obsidian]] on Tuesday 14th July&lt;br /&gt;
Gear leaves with [[User:obsidian| Obsidian]] on Tuesday 21st July&lt;br /&gt;
&lt;br /&gt;
==Transport==&lt;br /&gt;
Obsidian transports all gear from DE and UK storage in a single [https://www.vandeburgwal.nl/huur-tandemas-gesloten-3-meter.-extra-breed-en-hoog trailer].&amp;lt;br /&amp;gt;&lt;br /&gt;
Dimensions of the trailer are 306x154x180 cm.&amp;lt;br /&amp;gt;&lt;br /&gt;
This is probably oversized for our gear, but the length is necessary for dome struts.&lt;br /&gt;
&lt;br /&gt;
==UK Storage==&lt;br /&gt;
*Dome&lt;br /&gt;
*Power&lt;br /&gt;
*???&lt;br /&gt;
&lt;br /&gt;
==DE Storage==&lt;br /&gt;
*Rummary Tent&lt;br /&gt;
*Freeside Shelter&lt;br /&gt;
*Lights&lt;br /&gt;
*Power&lt;br /&gt;
*Pots &amp;amp; Pans&lt;br /&gt;
*Induction Stove&lt;br /&gt;
*Coffeemaker&lt;br /&gt;
*???&lt;br /&gt;
*Eventshelters&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7021</id>
		<title>EMF2026Gear</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7021"/>
		<updated>2026-03-15T14:42:32Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7020</id>
		<title>EMF2026Gear</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=EMF2026Gear&amp;diff=7020"/>
		<updated>2026-03-15T14:42:18Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: Created page with &amp;quot;Go back to  EMFCamp 2026&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go back to [[emfcamp 2026 | EMFCamp 2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7019</id>
		<title>Emfcamp 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7019"/>
		<updated>2026-03-15T14:41:14Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Timeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Electromagnetic Field 2026 will be held on 16–19 July 2026 at Eastnor Castle Deer Park, Eastnor. &lt;br /&gt;
&lt;br /&gt;
Links: &lt;br /&gt;
* https://www.emfcamp.org/&lt;br /&gt;
* https://chaos.social/@emf@emfcamp.org&lt;br /&gt;
&lt;br /&gt;
Tickets:&lt;br /&gt;
* &amp;lt;s&amp;gt;Monday 9th March, 20:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* Sunday 22nd March, 15:00 UTC&lt;br /&gt;
* Thursday 2nd April, 18:00 BST&lt;br /&gt;
&lt;br /&gt;
==Recent News==&lt;br /&gt;
* There is no news.&lt;br /&gt;
&lt;br /&gt;
==Communication==&lt;br /&gt;
    Matrix: join #milliways-emfcamp:milliways.info (and of course the milliways space: https://matrix.to/#/#milliways-space:milliways.info)&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Planned timeline subject to changes&lt;br /&gt;
|-&lt;br /&gt;
! Date !! 13:00 !! 20:00&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-14 (Day -1) || [[EMF2026Gear | Gear Arrives]] || Nothing&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-15 (Day 0) || Buildup || Buildup&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-16 (Day 1) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-17 (Day 2) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-18 (Day 3) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-19 (Day 4) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-20 (Day 5) || Teardown || Teardown&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-21 (Day 6) || [[EMF2026Gear | Gear Leaves]] || Nothing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Subassemblies==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! Name !! Description !! How to contact?&lt;br /&gt;
|-&lt;br /&gt;
| Freeside ||  || #freeside @ hackint&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Milliways Requestable Event Shelters==&lt;br /&gt;
&lt;br /&gt;
Aka tents which you can request to have waiting for you at the campsite. They will be taken from storage and brought just for you, so you need to sign up before the event. Each tent fits up to two people; specify the number of people who will sleep in the particular tent so we know how many sleeping bags to grab. The tents come with everything you need, e.g. a sleeping pad and sleeping bags.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Tent number !! Names of the people entitled to pick the tent up !! Number of sleeping bags !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 1 || coderobe || 2 || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
Pads can be created at https://cpad.milliways.info/ (select &amp;quot;code&amp;quot;). Please link them here as &amp;quot;view&amp;quot; when done, and as &amp;quot;edit&amp;quot; in the relevant channels during the meeting. The meetings usually happen at 2100 German time.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! When !! Where !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 2026-??-?? || [https://jitsi.milliways.info/milliways jitsi] || Orga Meeting || [https://example.com Meeting Minutes]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Plans==&lt;br /&gt;
&lt;br /&gt;
To have fun.&lt;br /&gt;
&lt;br /&gt;
==Coin==&lt;br /&gt;
&lt;br /&gt;
Doesn&#039;t exist yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7018</id>
		<title>Emfcamp 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7018"/>
		<updated>2026-03-15T14:37:52Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Timeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Electromagnetic Field 2026 will be held on 16–19 July 2026 at Eastnor Castle Deer Park, Eastnor. &lt;br /&gt;
&lt;br /&gt;
Links: &lt;br /&gt;
* https://www.emfcamp.org/&lt;br /&gt;
* https://chaos.social/@emf@emfcamp.org&lt;br /&gt;
&lt;br /&gt;
Tickets:&lt;br /&gt;
* &amp;lt;s&amp;gt;Monday 9th March, 20:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* Sunday 22nd March, 15:00 UTC&lt;br /&gt;
* Thursday 2nd April, 18:00 BST&lt;br /&gt;
&lt;br /&gt;
==Recent News==&lt;br /&gt;
* There is no news.&lt;br /&gt;
&lt;br /&gt;
==Communication==&lt;br /&gt;
    Matrix: join #milliways-emfcamp:milliways.info (and of course the milliways space: https://matrix.to/#/#milliways-space:milliways.info)&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Planned timeline subject to changes&lt;br /&gt;
|-&lt;br /&gt;
! Date !! 13:00 !! 20:00&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-14 (Day -1) || Gear Arrives || Nothing&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-15 (Day 0) || Buildup || Buildup&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-16 (Day 1) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-17 (Day 2) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-18 (Day 3) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-19 (Day 4) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-20 (Day 5) || Teardown || Teardown&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-21 (Day 6) || Gear Leaves || Nothing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Subassemblies==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! Name !! Description !! How to contact?&lt;br /&gt;
|-&lt;br /&gt;
| Freeside ||  || #freeside @ hackint&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Milliways Requestable Event Shelters==&lt;br /&gt;
&lt;br /&gt;
Aka tents which you can request to have waiting for you at the campsite. They will be taken from storage and brought just for you, so you need to sign up before the event. Each tent fits up to two people; specify the number of people who will sleep in the particular tent so we know how many sleeping bags to grab. The tents come with everything you need, e.g. a sleeping pad and sleeping bags.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Tent number !! Names of the people entitled to pick the tent up !! Number of sleeping bags !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 1 || coderobe || 2 || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
Pads can be created at https://cpad.milliways.info/ (select &amp;quot;code&amp;quot;). Please link them here as &amp;quot;view&amp;quot; when done, and as &amp;quot;edit&amp;quot; in the relevant channels during the meeting. The meetings usually happen at 2100 German time.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! When !! Where !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 2026-??-?? || [https://jitsi.milliways.info/milliways jitsi] || Orga Meeting || [https://example.com Meeting Minutes]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Plans==&lt;br /&gt;
&lt;br /&gt;
To have fun.&lt;br /&gt;
&lt;br /&gt;
==Coin==&lt;br /&gt;
&lt;br /&gt;
Doesn&#039;t exist yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7017</id>
		<title>Emfcamp 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7017"/>
		<updated>2026-03-15T14:35:53Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Timeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Electromagnetic Field 2026 will be held on 16–19 July 2026 at Eastnor Castle Deer Park, Eastnor. &lt;br /&gt;
&lt;br /&gt;
Links: &lt;br /&gt;
* https://www.emfcamp.org/&lt;br /&gt;
* https://chaos.social/@emf@emfcamp.org&lt;br /&gt;
&lt;br /&gt;
Tickets:&lt;br /&gt;
* &amp;lt;s&amp;gt;Monday 9th March, 20:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* Sunday 22nd March, 15:00 UTC&lt;br /&gt;
* Thursday 2nd April, 18:00 BST&lt;br /&gt;
&lt;br /&gt;
==Recent News==&lt;br /&gt;
* There is no news.&lt;br /&gt;
&lt;br /&gt;
==Communication==&lt;br /&gt;
    Matrix: join #milliways-emfcamp:milliways.info (and of course the milliways space: https://matrix.to/#/#milliways-space:milliways.info)&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Planned timeline subject to changes&lt;br /&gt;
|-&lt;br /&gt;
! Date !! 13:00 !! 20:00&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-14 (Day -1) || Gear Arrives || Nothing&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-15 (Day 0) || Buildup || Buildup&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-16 (Day 1) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-17 (Day 2) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-18 (Day 3) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-19 (Day 4) || ? || ? &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Subassemblies==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! Name !! Description !! How to contact?&lt;br /&gt;
|-&lt;br /&gt;
| Freeside ||  || #freeside @ hackint&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Milliways Requestable Event Shelters==&lt;br /&gt;
&lt;br /&gt;
These are tents you can request to have waiting for you at the campsite. They will be taken from storage and brought just for you, so you need to sign up before the event. Each tent fits up to two people; specify how many people will sleep in the tent so we know how many sleeping bags to grab. The tents come with everything you need, e.g. a sleeping pad and sleeping bags.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Tent number !! Names of the people entitled to pick the tent up !! Number of sleeping bags !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 1 || coderobe || 2 || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
Pads can be created under https://cpad.milliways.info/, select &amp;quot;code&amp;quot;. Please link pads here as &amp;quot;view&amp;quot; links when done, and share them as &amp;quot;edit&amp;quot; links in the relevant channels during the meeting. Meetings usually happen at 21:00 German time.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! When !! Where !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 2026-??-?? || [https://jitsi.milliways.info/milliways jitsi] || Orga Meeting || [https://example.com Meeting Minutes]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Plans==&lt;br /&gt;
&lt;br /&gt;
To have fun.&lt;br /&gt;
&lt;br /&gt;
==Coin==&lt;br /&gt;
&lt;br /&gt;
Doesn&#039;t exist yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7016</id>
		<title>Emfcamp 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Emfcamp_2026&amp;diff=7016"/>
		<updated>2026-03-15T14:35:36Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Timeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Electromagnetic Field 2026 will be held on 16–19 July 2026 at Eastnor Castle Deer Park, Eastnor. &lt;br /&gt;
&lt;br /&gt;
Links: &lt;br /&gt;
* https://www.emfcamp.org/&lt;br /&gt;
* https://chaos.social/@emf@emfcamp.org&lt;br /&gt;
&lt;br /&gt;
Tickets:&lt;br /&gt;
* &amp;lt;s&amp;gt;Monday 9th March, 20:00 UTC&amp;lt;/s&amp;gt;&lt;br /&gt;
* Sunday 22nd March, 15:00 UTC&lt;br /&gt;
* Thursday 2nd April, 18:00 BST&lt;br /&gt;
&lt;br /&gt;
==Recent News==&lt;br /&gt;
* There is no news.&lt;br /&gt;
&lt;br /&gt;
==Communication==&lt;br /&gt;
    Matrix: join #milliways-emfcamp:milliways.info (and of course the milliways space: https://matrix.to/#/#milliways-space:milliways.info)&lt;br /&gt;
&lt;br /&gt;
==Timeline==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Planned timeline subject to changes&lt;br /&gt;
|-&lt;br /&gt;
! Date !! 13:00 !! 20:00&lt;br /&gt;
| 2026-07-14 (Day -1) || Gear Arrives || Nothing&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-15 (Day 0) || Buildup || Buildup&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-16 (Day 1) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-17 (Day 2) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-18 (Day 3) || ? || ?&lt;br /&gt;
|-&lt;br /&gt;
| 2026-07-19 (Day 4) || ? || ? &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Subassemblies==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! Name !! Description !! How to contact?&lt;br /&gt;
|-&lt;br /&gt;
| Freeside ||  || #freeside @ hackint&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Milliways Requestable Event Shelters==&lt;br /&gt;
&lt;br /&gt;
These are tents you can request to have waiting for you at the campsite. They will be taken from storage and brought just for you, so you need to sign up before the event. Each tent fits up to two people; specify how many people will sleep in the tent so we know how many sleeping bags to grab. The tents come with everything you need, e.g. a sleeping pad and sleeping bags.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Tent number !! Names of the people entitled to pick the tent up !! Number of sleeping bags !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 1 || coderobe || 2 || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
Pads can be created under https://cpad.milliways.info/, select &amp;quot;code&amp;quot;. Please link pads here as &amp;quot;view&amp;quot; links when done, and share them as &amp;quot;edit&amp;quot; links in the relevant channels during the meeting. Meetings usually happen at 21:00 German time.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
! When !! Where !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 2026-??-?? || [https://jitsi.milliways.info/milliways jitsi] || Orga Meeting || [https://example.com Meeting Minutes]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Plans==&lt;br /&gt;
&lt;br /&gt;
To have fun.&lt;br /&gt;
&lt;br /&gt;
==Coin==&lt;br /&gt;
&lt;br /&gt;
Doesn&#039;t exist yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:emfcamp2026]]&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Dome&amp;diff=7006</id>
		<title>Dome</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Dome&amp;diff=7006"/>
		<updated>2026-02-17T15:25:02Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Bolts */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:dome.jpg&lt;br /&gt;
File:dome_night.jpg&lt;br /&gt;
File:EMF24_Sunrise_Dome.jpg&lt;br /&gt;
File:Dome being built up at why 2025.jpg&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are two domes, one in the UK and one in the EU. Both domes are the same spec and have a radius of 4.75m. One is in [[Weeze_Storage]] and the second one is in UK Storage (unclear where - ask EMF people).&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== EMF 2024 ===&lt;br /&gt;
A new tarp can be ruined by a few strong gusts of wind.&lt;br /&gt;
We need ground anchors and 200m of high-tensile rope for each dome to prevent this from happening again, plus improved instructions for setting up the dome and fitting the tarp so it&#039;s clear for newbies.&lt;br /&gt;
&lt;br /&gt;
=== WHY 2025 ===&lt;br /&gt;
&lt;br /&gt;
Multiple screws (4-5?) could not be removed during teardown and had to be cut off with an angle grinder; it felt like more than usual.&lt;br /&gt;
&lt;br /&gt;
The dome was anchored with relatively small rebar anchors out of fear that it would fly away. There is no realistic scenario in which the dome flies off in strong winds; if that were to happen, we would likely be dead by then. The anchors needed to prevent it would be those massive 50-100cm pegs used for big festival tents, not the tiny things that were used. Adding those tiny anchors just makes teardown more difficult and serves no practical purpose. I&#039;m not running any simulations to prove that I&#039;m right, but I also suspect that there is no data proving that anchors are needed.&lt;br /&gt;
&lt;br /&gt;
The tarp was rubbing on a joint and developed a hole, this was fixed with duct tape.&lt;br /&gt;
&lt;br /&gt;
== Buildup ==&lt;br /&gt;
&lt;br /&gt;
=== Diagram ===&lt;br /&gt;
[[File:Dome diagram.gif]]&lt;br /&gt;
&lt;br /&gt;
=== Struts ===&lt;br /&gt;
&lt;br /&gt;
The dome is built out of struts.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Amount !! Color !! Length !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 35 || Blue || about 3 meters || &amp;quot;The long ones&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| 30 || Red || about 2.66 meters || &amp;quot;The short ones&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There is also a box with bolts and required tools which is likely to be stored next to the struts as well as a tarp used to cover the dome.&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
17mm wrench, 17mm ratchet with extra-long nut driver.&lt;br /&gt;
&lt;br /&gt;
=== Bolts ===&lt;br /&gt;
Preference:&lt;br /&gt;
* [https://www.nen.nl/en/nen-en-15048-1-2016-en-222710 EN 15048] DIN 933 bolts in M10x150&lt;br /&gt;
* Matching nuts and washers&lt;br /&gt;
&lt;br /&gt;
EN 15048 is for &amp;quot;Non preloaded structural bolting&amp;quot; &amp;lt;br /&amp;gt;&lt;br /&gt;
DIN 933 determines the bolt has a full thread&lt;br /&gt;
&lt;br /&gt;
These bolts are a good high strength steel meant for steel construction that doesn&#039;t move/shift/vibrate much. Basically tensile or shearing forces only.&lt;br /&gt;
&lt;br /&gt;
Emergency use only:&lt;br /&gt;
* Materials:&lt;br /&gt;
** (ungraded or unknown) Zinc Galvanized Steel&lt;br /&gt;
** (ungraded or unknown) Stainless Steel&lt;br /&gt;
&lt;br /&gt;
These materials can and will suffer from [https://en.wikipedia.org/wiki/Galling galling] or [https://en.wikipedia.org/wiki/Cold_welding cold welding] once the dome is built. You will need an angle grinder or saw to remove these bolts after use.&lt;br /&gt;
&lt;br /&gt;
* Bolt types:&lt;br /&gt;
** DIN 931 &lt;br /&gt;
&lt;br /&gt;
This bolt type has a partial thread, meaning the dome struts cannot be properly fastened.&lt;br /&gt;
&lt;br /&gt;
== Tarp ==&lt;br /&gt;
@TODO&lt;br /&gt;
&lt;br /&gt;
== Benches ==&lt;br /&gt;
&lt;br /&gt;
=== Wood ===&lt;br /&gt;
&lt;br /&gt;
[[:File:Milliways_Final_dome_benches.pdf | Measurements.pdf]]&lt;br /&gt;
&lt;br /&gt;
=== Screws ===&lt;br /&gt;
&lt;br /&gt;
Wood screws, 4.5mm diameter, 4.5cm long.&lt;br /&gt;
&lt;br /&gt;
like http://i.ebayimg.com/thumbs/images/m/mDJf7KpPBwFp1AFym1DO9kQ/s-l225.jpg&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=Dome&amp;diff=7005</id>
		<title>Dome</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=Dome&amp;diff=7005"/>
		<updated>2026-02-17T15:21:38Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Screws */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:dome.jpg&lt;br /&gt;
File:dome_night.jpg&lt;br /&gt;
File:EMF24_Sunrise_Dome.jpg&lt;br /&gt;
File:Dome being built up at why 2025.jpg&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are two domes, one in the UK and one in the EU. Both domes are the same spec and have a radius of 4.75m. One is in [[Weeze_Storage]] and the second one is in UK Storage (unclear where - ask EMF people).&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== EMF 2024 ===&lt;br /&gt;
A new tarp can be ruined by a few strong gusts of wind.&lt;br /&gt;
We need ground anchors and 200m of high-tensile rope for each dome to prevent this from happening again, plus improved instructions for setting up the dome and fitting the tarp so it&#039;s clear for newbies.&lt;br /&gt;
&lt;br /&gt;
=== WHY 2025 ===&lt;br /&gt;
&lt;br /&gt;
Multiple screws (4-5?) could not be removed during teardown and had to be cut off with an angle grinder; it felt like more than usual.&lt;br /&gt;
&lt;br /&gt;
The dome was anchored with relatively small rebar anchors out of fear that it would fly away. There is no realistic scenario in which the dome flies off in strong winds; if that were to happen, we would likely be dead by then. The anchors needed to prevent it would be those massive 50-100cm pegs used for big festival tents, not the tiny things that were used. Adding those tiny anchors just makes teardown more difficult and serves no practical purpose. I&#039;m not running any simulations to prove that I&#039;m right, but I also suspect that there is no data proving that anchors are needed.&lt;br /&gt;
&lt;br /&gt;
The tarp was rubbing on a joint and developed a hole, this was fixed with duct tape.&lt;br /&gt;
&lt;br /&gt;
== Buildup ==&lt;br /&gt;
&lt;br /&gt;
=== Diagram ===&lt;br /&gt;
[[File:Dome diagram.gif]]&lt;br /&gt;
&lt;br /&gt;
=== Struts ===&lt;br /&gt;
&lt;br /&gt;
The dome is built out of struts.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Amount !! Color !! Length !! Notes &lt;br /&gt;
|-&lt;br /&gt;
| 35 || Blue || about 3 meters || &amp;quot;The long ones&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| 30 || Red || about 2.66 meters || &amp;quot;The short ones&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There is also a box with bolts and required tools which is likely to be stored next to the struts as well as a tarp used to cover the dome.&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
17mm wrench, 17mm ratchet with extra-long nut driver.&lt;br /&gt;
&lt;br /&gt;
=== Bolts ===&lt;br /&gt;
Preference:&lt;br /&gt;
* [https://www.nen.nl/en/nen-en-14399-4-2015-en-204280 EN 14399] HV DIN 933 bolts in M10x150&lt;br /&gt;
* Matching nuts and washers&lt;br /&gt;
&lt;br /&gt;
EN 14399 is for &amp;quot;Preloaded structural bolting&amp;quot; &amp;lt;br /&amp;gt;&lt;br /&gt;
DIN 933 determines the bolt has a full thread&lt;br /&gt;
&lt;br /&gt;
These bolts are a good high strength steel meant for steel construction that can undergo more than just tensile or shearing forces, like vibrations through use (think of bridges) or environment (wind).&lt;br /&gt;
&lt;br /&gt;
* Note: There are two systems! HR (UK) and HV (Germany) - we try to use the HV System! It should be stamped on the bolt&#039;s head.&lt;br /&gt;
&lt;br /&gt;
Good alternative:&lt;br /&gt;
* [https://www.nen.nl/en/nen-en-15048-1-2016-en-222710 EN 15048] DIN 933 bolts in M10x150&lt;br /&gt;
* Matching nuts and washers&lt;br /&gt;
&lt;br /&gt;
EN 15048 is for &amp;quot;Non preloaded structural bolting&amp;quot;&lt;br /&gt;
&lt;br /&gt;
These bolts are a good high strength steel meant for steel construction that doesn&#039;t move/shift/vibrate much. Basically tensile or shearing forces only.&lt;br /&gt;
&lt;br /&gt;
Emergency use only:&lt;br /&gt;
* Materials:&lt;br /&gt;
** (ungraded or unknown) Zinc Galvanized Steel&lt;br /&gt;
** (ungraded or unknown) Stainless Steel&lt;br /&gt;
&lt;br /&gt;
These materials can and will suffer from [https://en.wikipedia.org/wiki/Galling galling] or [https://en.wikipedia.org/wiki/Cold_welding cold welding] once the dome is built. You will need an angle grinder or saw to remove these bolts after use.&lt;br /&gt;
&lt;br /&gt;
* Bolt types:&lt;br /&gt;
** DIN 931 &lt;br /&gt;
&lt;br /&gt;
This bolt type has a partial thread, meaning the dome struts cannot be properly fastened.&lt;br /&gt;
&lt;br /&gt;
== Tarp ==&lt;br /&gt;
@TODO&lt;br /&gt;
&lt;br /&gt;
== Benches ==&lt;br /&gt;
&lt;br /&gt;
=== Wood ===&lt;br /&gt;
&lt;br /&gt;
[[:File:Milliways_Final_dome_benches.pdf | Measurements.pdf]]&lt;br /&gt;
&lt;br /&gt;
=== Screws ===&lt;br /&gt;
&lt;br /&gt;
Wood screws, 4.5mm diameter, 4.5cm long.&lt;br /&gt;
&lt;br /&gt;
like http://i.ebayimg.com/thumbs/images/m/mDJf7KpPBwFp1AFym1DO9kQ/s-l225.jpg&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6999</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6999"/>
		<updated>2026-01-23T20:55:06Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Shopping List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
A more realistic MVP than the 1st version is:&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
What used to be in the first MVP, but is now recognized as e-MVP:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Someone something redis I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== super e-MVP ===&lt;br /&gt;
The super extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework that offers services similar to AWS / Azure / GCP.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s of course sexy as all hell to buy memory, AI cards, flash storage and all sorts of things, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
** &amp;lt;s&amp;gt;iDRAC6 Enterprise card&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the short-term future I&#039;d much rather replace this adhoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
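The plan above is mechanical enough to verify automatically. A minimal Python sketch (the dict layout and function are my own, not part of the deployment; subnets and addresses are copied from this page) that checks each VLAN subnet sits inside the supernet and each listed address inside its subnet:&lt;br /&gt;

```python
import ipaddress

# Addressing plan copied from the page above; structure is illustrative.
SUPERNET = ipaddress.ip_network("10.42.0.0/16")
PLAN = {
    "Vlan 42 Interconnect": ("10.42.0.0/30", ["10.42.0.1", "10.42.0.2"]),
    "Vlan 5 Mgmt/OOB": ("10.42.1.0/24",
                        ["10.42.1.1", "10.42.1.5", "10.42.1.6",
                         "10.42.1.7", "10.42.1.8"]),
    "Vlan 10 Prod": ("10.42.10.0/24",
                     ["10.42.10.1", "10.42.10.2",
                      "10.42.10.3", "10.42.10.5"]),
}

def check_plan(plan, supernet):
    """Verify every subnet fits the supernet and every host fits its subnet."""
    for name, (subnet, hosts) in plan.items():
        net = ipaddress.ip_network(subnet)
        assert net.subnet_of(supernet), f"{name}: {net} not in {supernet}"
        for host in hosts:
            assert ipaddress.ip_address(host) in net, f"{name}: {host} not in {net}"
    return True
```

Running check_plan(PLAN, SUPERNET) before handing addresses out catches fat-fingered octets early; extend the dict as Vlan 15 gets addresses.&lt;br /&gt;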
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are early ambitions to physically take this environment to events, we should seriously think about making our lives easier by color-coding connectivity now. While this will help us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it has the added effect of making it all look cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. For example, you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional (&amp;quot;milliways-control-node-1&amp;quot;) so it&#039;s clear what you&#039;re doing, while externally the asset tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes it far more relatable when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the asset names to the actual server names and potentially their serials so we don&#039;t get confused internally (if we want to use serials at all; there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
***&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (epoxy) as 2025.2 (flamingo) has an undocumented breaking change making installation of keystone impossible. We have registered a bug with the documentation on launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following installation guide recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
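The same recommendation can be scripted when bootstrapping several services at once. A minimal Python sketch mirroring openssl rand -hex 10 with the standard library (the service list is illustrative, not from the guide):&lt;br /&gt;

```python
import secrets

# One fresh 10-byte (20 hex character) secret per service, equivalent to
# the guide's "openssl rand -hex 10". Service names here are illustrative.
SERVICES = ["keystone", "glance", "placement", "nova", "neutron"]

def make_passwords(services):
    """Map each service name to a fresh 20-character hex password."""
    return {svc: secrets.token_hex(10) for svc in services}
```

The resulting dict can then be fed into whatever password store is in use; nothing here persists the secrets itself.&lt;br /&gt;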
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts, ``keystone-wsgi-admin`` and ``keystone-wsgi-public``.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround, use 2025.1 instead&lt;br /&gt;
** Completed 2026-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled: &amp;lt;code&amp;gt;usermod -aG placement &amp;lt;user&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide has you configure options for the networking service, which you have not installed yet, because the guide has you install Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers you to further information on configuring the Open vSwitch agent which directly contradicts the guide itself.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide asks you for the name of the bridge connected to the underlying provider physical network, but at that point you have not yet created this bridge.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard will only load if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
&lt;br /&gt;
==== Compute ====&lt;br /&gt;
* [https://docs.openstack.org/nova/2025.1/install/ Compute Service]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/compute-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[api_database]&amp;lt;/code&amp;gt; will result in errors. &lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[database]&amp;lt;/code&amp;gt; will result in errors.&lt;br /&gt;
**** Yes, that&#039;s basically the same documentation error twice, but for two different options, on the same page. &lt;br /&gt;
* [https://docs.openstack.org/neutron/2025.1/install/ Networking Service]&lt;br /&gt;
* Extremely weird behaviour when linking up with the control node: the placement service decided the password was wrong (it wasn&#039;t), and the nova scheduler and conductor wouldn&#039;t start. The &amp;quot;fix&amp;quot; was basically patience; no changes were made between it not working and it working, and I have no idea why it now works.&lt;br /&gt;
* Completed 2025-01-21&lt;br /&gt;
&lt;br /&gt;
==== Overall feelings ====&lt;br /&gt;
* Clunky and convoluted.&lt;br /&gt;
** The MVP environment with 1 control node and 1 compute node feels about as capable as a 4-bay enthusiast NAS running proxmox.&lt;br /&gt;
*** If I count only the time spent on control and compute nodes, I reckon it took me about 18hrs to do something I did in 20mins on my terramaster. &lt;br /&gt;
* Documentation is unacceptably bad.&lt;br /&gt;
** Not kidding, there is better documentation on running automated piracy software.&lt;br /&gt;
*** Heck, there&#039;s better documentation written by Indian scam farms to trick your family members into running TeamViewer for Play Store gift-card scams.&lt;br /&gt;
** There can and should be no excuse at all for the level of sheer incompetence displayed in these docs.&lt;br /&gt;
** OpenStack&#039;s documentation is abysmal and the responsible parties deserve to be held accountable for this.&lt;br /&gt;
* Super bad initial impression, like, I would not, could not even, recommend this in any professional capacity.&lt;br /&gt;
* I am dearly hoping that, with scale, this thing outstrips my terramaster fast, else I wonder if this is worth the time, effort, fuel, money and electricity I have pumped into it.&lt;br /&gt;
** I fully realize OpenStack and proxmox are not the same: Nova is roughly comparable to proxmox, and all the other services or modules are extra functionality that proxmox does not aim to offer. Like, obviously, I am exaggerating out of frustration and emotion, but this MVP is basically an externally managed proxmox, and taken at face value I think I&#039;d recommend proxmox over this. But! Let&#039;s see how this evolves and scales.&lt;br /&gt;
* Tempted to try SUSE&#039;s [https://harvesterhci.io/ Harvester HCI]&lt;br /&gt;
** Although perhaps this isn&#039;t comparable? &lt;br /&gt;
*** I&#039;m getting mixed messages in both writeups and user experience. I&#039;m worried this means the documentation is equally bad and people just don&#039;t know what Harvester actually is or can do.&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6998</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6998"/>
		<updated>2026-01-22T18:44:25Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* e-MVP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running things on it). &lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine. &lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
A more realistic MVP than the 1st version is:&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
What used to be in the first MVP, but is now recognized to be e-MVP:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Something something redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== super e-MVP ===&lt;br /&gt;
The super extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework that offers services similar to AWS / Azure / GCP. &lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively; a &amp;quot;normal&amp;quot; serverrack PDU (still strongly prefer managed) + 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the short-term future I&#039;d much rather replace this adhoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
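 The addressing plan above can be sanity-checked with a short Python sketch (purely illustrative, not part of the deployment; subnet values are copied from the list above):&lt;br /&gt;

```python
import ipaddress

# Addressing plan from the list above: VLAN id -> subnet.
supernet = ipaddress.ip_network("10.42.0.0/16")
vlans = {
    42: ipaddress.ip_network("10.42.0.0/30"),   # interconnect
    5:  ipaddress.ip_network("10.42.1.0/24"),   # mgmt / OOB
    10: ipaddress.ip_network("10.42.10.0/24"),  # prod
}

# Every VLAN subnet must nest inside the supernet...
assert all(net.subnet_of(supernet) for net in vlans.values())

# ...and no two subnets may overlap.
nets = list(vlans.values())
assert not any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])
```

 This catches fat-fingered prefixes before they end up in switch configs; VLAN 15 has no subnet yet, so it is left out.&lt;br /&gt;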
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should make our lives easier by colorcoding connectivity now. While this will help us reconnect everything at $event when we&#039;re sleepdeprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made-up without too much thought. This is specifically intended to start a discussion so we can work toward an agreement, it is not intended to be a unilateral decision. Example; You&#039;ll notice 0 thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional (&amp;quot;milliways-control-node-1&amp;quot;) so it&#039;s clear what you&#039;re doing, while externally the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes asking for donations way more relatable; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;.&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server names and potentially their serials so we don&#039;t get confused internally (if we want to use serials; there&#039;s something to be said for not using serials here).&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (epoxy) because 2025.2 (flamingo) has an undocumented breaking change that makes installation of keystone impossible. We have filed a documentation bug on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following the installation guide&#039;s recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
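 A minimal Python equivalent of that step (illustrative only: &amp;lt;code&amp;gt;secrets.token_hex(10)&amp;lt;/code&amp;gt; produces the same kind of 10-byte hex secret as &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt;, and the service list is an assumption, not the guide&#039;s full list):&lt;br /&gt;

```python
import secrets

# One 10-byte (20 hex chars) secret per service, mirroring the
# guide's "openssl rand -hex 10" recommendation.
services = ["keystone", "glance", "placement", "nova", "neutron"]
passwords = {svc: secrets.token_hex(10) for svc in services}

# Print in a shape that's easy to paste into a password store.
for svc, pw in sorted(passwords.items()):
    print(f"{svc.upper()}_PASS={pw}")
```

 Generating them all in one go avoids the mid-install scramble for yet another password.&lt;br /&gt;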
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround, use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled: &amp;lt;code&amp;gt;usermod -aG placement &amp;lt;user&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide has you configure options for the networking service, which you have not installed yet, because the guide has you install Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers you to further information on configuring the Open vSwitch agent which directly contradicts the guide itself.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide asks you for the name of the bridge connected to the underlying provider physical network, but at that point you have not yet created this bridge.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard will only load if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
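 A guess at the corresponding &amp;lt;code&amp;gt;local_settings.py&amp;lt;/code&amp;gt; fragment (an assumption reconstructed from the symptom above, not the verified fix; Horizon&#039;s settings file is plain Django/Python and these are the stock debug and django-compressor toggles):&lt;br /&gt;

```python
# /etc/openstack-dashboard/local_settings.py (fragment, assumed)
# Horizon would only load for us with both of these set:
DEBUG = True              # Django debug mode
COMPRESS_ENABLED = True   # django-compressor static-asset compression
```

 Obviously &amp;lt;code&amp;gt;DEBUG = True&amp;lt;/code&amp;gt; is not something to leave on outside a lab.&lt;br /&gt;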
&lt;br /&gt;
==== Compute ====&lt;br /&gt;
* [https://docs.openstack.org/nova/2025.1/install/ Compute Service]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/compute-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[api_database]&amp;lt;/code&amp;gt; will result in errors. &lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[database]&amp;lt;/code&amp;gt; will result in errors.&lt;br /&gt;
**** Yes, that&#039;s basically the same documentation error twice, but for two different options, on the same page. &lt;br /&gt;
* [https://docs.openstack.org/neutron/2025.1/install/ Networking Service]&lt;br /&gt;
* Extremely weird behaviour when linking up with the control node: the placement service decided the password was wrong (it wasn&#039;t), and the nova scheduler and conductor wouldn&#039;t start. The &amp;quot;fix&amp;quot; was basically patience; no changes were made between it not working and it working, and I have no idea why it now works.&lt;br /&gt;
* Completed 2025-01-21&lt;br /&gt;
&lt;br /&gt;
==== Overall feelings ====&lt;br /&gt;
* Clunky and convoluted.&lt;br /&gt;
** The MVP environment with 1 control node and 1 compute node feels about as capable as a 4-bay enthusiast NAS running proxmox.&lt;br /&gt;
*** If I count only the time spent on control and compute nodes, I reckon it took me about 18hrs to do something I did in 20mins on my terramaster. &lt;br /&gt;
* Documentation is unacceptably bad.&lt;br /&gt;
** Not kidding, there is better documentation on running automated piracy software.&lt;br /&gt;
*** Heck, there&#039;s better documentation written by Indian scam farms to trick your family members into running TeamViewer for Play Store gift-card scams.&lt;br /&gt;
** There can and should be no excuse at all for the level of sheer incompetence displayed in these docs.&lt;br /&gt;
** OpenStack&#039;s documentation is abysmal and the responsible parties deserve to be held accountable for this.&lt;br /&gt;
* Super bad initial impression, like, I would not, could not even, recommend this in any professional capacity.&lt;br /&gt;
* I am dearly hoping that, with scale, this thing outstrips my terramaster fast, else I wonder if this is worth the time, effort, fuel, money and electricity I have pumped into it.&lt;br /&gt;
** I fully realize OpenStack and proxmox are not the same: Nova is roughly comparable to proxmox, and all the other services or modules are extra functionality that proxmox does not aim to offer. Like, obviously, I am exaggerating out of frustration and emotion, but this MVP is basically an externally managed proxmox, and taken at face value I think I&#039;d recommend proxmox over this. But! Let&#039;s see how this evolves and scales.&lt;br /&gt;
* Tempted to try SUSE&#039;s [https://harvesterhci.io/ Harvester HCI]&lt;br /&gt;
** Although perhaps this isn&#039;t comparable? &lt;br /&gt;
*** I&#039;m getting mixed messages in both writeups and user experience. I&#039;m worried this means the documentation is equally bad and people just don&#039;t know what Harvester actually is or can do.&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6997</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6997"/>
		<updated>2026-01-22T18:43:57Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* MVP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running things on it). &lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine. &lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
A more realistic MVP than the 1st version is:&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
What used to be in the first MVP, but is now recognized to be e-MVP:&lt;br /&gt;
An actual extended MVP:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Something something redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== super e-MVP ===&lt;br /&gt;
The super extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework that offers services similar to AWS / Azure / GCP. &lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: managed rack-mountable PDU with CEE red 16A/20A 400V input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
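For context on the PDU items above, a rough sketch of how much power those feeds actually carry (nominal voltages assumed, nothing measured):&lt;br /&gt;

```python
import math

# Back-of-envelope PDU feed capacity; nominal figures, assumed not measured.
single_phase_w = 230 * 16                # 16A single-phase at 230V: 3680 W
three_phase_w = math.sqrt(3) * 400 * 16  # 16A/phase, three-phase 400V: ~11085 W
print(round(single_phase_w / 1000, 2), "kW vs", round(three_phase_w / 1000, 2), "kW")
```
&lt;br /&gt;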
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 NB: this is quick &#039;n&#039; dirty, written as I go along.&lt;br /&gt;
 In the short-term future I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
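The addressing plan above can be sanity-checked mechanically; a small sketch using Python&#039;s ipaddress module (host labels are informal shorthand, not hostnames):&lt;br /&gt;

```python
import ipaddress

# VLAN plan from the notes above; labels are informal shorthand only.
supernet = ipaddress.ip_network("10.42.0.0/16")
plan = {
    42: ("10.42.0.0/30", {"gateway": "10.42.0.1", "milliways-core": "10.42.0.2"}),
    5: ("10.42.1.0/24", {"milliways-core": "10.42.1.1", "dell-idrac": "10.42.1.5",
                         "dell-raid": "10.42.1.6", "hp1-ilo": "10.42.1.7",
                         "hp2-ilo": "10.42.1.8"}),
    10: ("10.42.10.0/24", {"milliways-core": "10.42.10.1", "dell": "10.42.10.2",
                           "hp1": "10.42.10.3", "hp2": "10.42.10.5"}),
}
for vlan, (cidr, hosts) in plan.items():
    net = ipaddress.ip_network(cidr)
    # Every VLAN subnet must sit inside the supernet ...
    assert net.subnet_of(supernet), vlan
    # ... and every assigned address must sit inside its VLAN subnet.
    for label, ip in hosts.items():
        assert ipaddress.ip_address(ip) in net, (vlan, label)
print("addressing plan checks out")
```
&lt;br /&gt;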
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are early ambitions to physically take this environment to events, we should seriously think about making our lives easier by color-coding connectivity now. This will help us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, and it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought; it is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. For example: you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to set up 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional (&amp;quot;milliways-control-node-1&amp;quot;) so it&#039;s clear what you&#039;re doing, but externally the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we do ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes asking for donations way more relatable: &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;.&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server name and potentially its serial so we don&#039;t get confused internally (if we want to use serials, there&#039;s something to be said for not using serials here).&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
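Once names are picked, the dual-name mapping floated above could be as simple as this sketch (the pairings below are hypothetical examples, not decisions; 5V6S064 is the serial mentioned earlier):&lt;br /&gt;

```python
# Hypothetical dual-name map: functional names for shells, marketing names
# for asset tags, serials kept for cross-referencing in NetBox later.
NODES = {
    "milliways-control-node-1": {"asset": "Ankh-Morpork", "serial": "5V6S064"},
    "compute-node-1": {"asset": "Zaphod", "serial": None},  # serial not recorded
}

def asset_name(functional):
    # Look up the outward-facing asset-tag name for a functional hostname.
    return NODES[functional]["asset"]

print(asset_name("milliways-control-node-1"))  # Ankh-Morpork
```
&lt;br /&gt;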
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy) because 2025.2 (Flamingo) has an undocumented breaking change that makes installing Keystone impossible. We have filed a documentation bug on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following installation guide recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
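If openssl isn&#039;t at hand, Python&#039;s secrets module produces the same shape of secret; an equivalent sketch, not what the guide prescribes:&lt;br /&gt;

```python
import secrets

# 10 random bytes rendered as 20 lowercase hex characters, matching the
# output shape of `openssl rand -hex 10`.
password = secrets.token_hex(10)
assert len(password) == 20
print(password)
```
&lt;br /&gt;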
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
*** [https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command, so any openstack command to create a domain, projects, users, or roles fails with the error&lt;br /&gt;
**** &amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround: use 2025.1 instead&lt;br /&gt;
** Completed 2026-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2026-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled: &amp;lt;code&amp;gt;usermod -aG placement $USER&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2026-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide has you configure options for the Networking service, which you have not installed yet, because the guide has you install this service (Compute) first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2026-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and offers more information which directly contradicts the guide.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide asks you to configure the name of the bridge connected to the underlying provider physical network, but at that point in the guide you have not yet created this bridge.&lt;br /&gt;
** Completed 2026-01-21&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard will only load if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2026-01-21&lt;br /&gt;
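As a quick way to reproduce the identity-endpoint symptom noted above, a hypothetical probe sketch (the controller hostname comes from the guide; adjust for your setup):&lt;br /&gt;

```python
import json
import urllib.request

def keystone_alive(base="http://controller:5000/v3"):
    # True if the endpoint answers with a version document, i.e. the WSGI
    # script is actually being served; False on connection or HTTP errors.
    try:
        with urllib.request.urlopen(base, timeout=5) as resp:
            return "version" in json.load(resp)
    except (OSError, ValueError):
        return False
```
&lt;br /&gt;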
&lt;br /&gt;
==== Compute ====&lt;br /&gt;
* [https://docs.openstack.org/nova/2025.1/install/ Compute Service]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/compute-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[api_database]&amp;lt;/code&amp;gt; will result in errors. &lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[database]&amp;lt;/code&amp;gt; will result in errors.&lt;br /&gt;
**** Yes, that&#039;s basically the same documentation error twice, but for two different options, on the same page. &lt;br /&gt;
* [https://docs.openstack.org/neutron/2025.1/install/ Networking Service]&lt;br /&gt;
* Extremely weird behavior when linking up with the control node: the Placement service decided the password was wrong (it wasn&#039;t), and nova-scheduler and nova-conductor wouldn&#039;t start. The &amp;quot;fix&amp;quot; is basically patience; no changes were made between it not working and it working, and I have no idea why it now works.&lt;br /&gt;
* Completed 2026-01-21&lt;br /&gt;
&lt;br /&gt;
==== Overall feelings ====&lt;br /&gt;
* Clunky and convoluted.&lt;br /&gt;
** The MVP environment with 1 control node and 1 compute node feels about as capable as a 4-bay enthusiast NAS running proxmox.&lt;br /&gt;
*** If I purely count only the time spent on control and compute nodes, I reckon it took me about 18hrs to do something I did in 20mins on my terramaster. &lt;br /&gt;
* Documentation is unacceptably bad.&lt;br /&gt;
** Not kidding, there is better documentation on running automated piracy software.&lt;br /&gt;
*** Heck, there&#039;s better documentation written by Indian scam farms tricking your family members into running TeamViewer for Play Store gift-card scams.&lt;br /&gt;
** There can and should be no excuse at all for the level of sheer incompetence displayed in these docs.&lt;br /&gt;
** OpenStack&#039;s documentation is abysmal and the responsible parties deserve to be held accountable for this.&lt;br /&gt;
* Super bad initial impression, like, I would not, could not even, recommend this in any professional capacity.&lt;br /&gt;
* I am dearly hoping that, with scalability, this thing outstrips my terramaster fast, else I wonder whether it is worth the time, effort, fuel, money and electricity I have pumped into it.&lt;br /&gt;
** I fully realize OpenStack and proxmox are not the same; Nova is roughly comparable to proxmox, and all the other services or modules are extra functionality that proxmox does not aim to offer. Obviously I am exaggerating out of frustration and emotion, but this MVP is basically an externally managed proxmox, and taken at face value I think I&#039;d recommend proxmox over this. But! Let&#039;s see how this evolves and scales.&lt;br /&gt;
* Tempted to try SUSE&#039;s [https://harvesterhci.io/ Harvester HCI]&lt;br /&gt;
** Although perhaps this isn&#039;t comparable? &lt;br /&gt;
*** I&#039;m getting mixed messages in both writeups and user experiences. I&#039;m worried this means the documentation is equally bad and people just don&#039;t know what Harvester actually is or can do.&lt;br /&gt;
&lt;br /&gt;
== Communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6996</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6996"/>
		<updated>2026-01-22T18:42:04Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* e-MVP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around Milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack; storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== super e-MVP ===&lt;br /&gt;
The super extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into the milliways identity &amp;amp; access management (authentik)&lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers services similar to AWS / Azure / Google Cloud.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and allsorts, but literally none of that will ever work if we don&#039;t have our Generic basics in order. While we prefer big donations go to big ticket items, many small ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively; a &amp;quot;normal&amp;quot; Serverrack PDU (still strong prefer managed) + 16A/20A 400v -&amp;gt; 16A 230V transform&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the short-term future I&#039;d much rather replace this adhoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should seriously think about making our lives easier by already thinking about colorcoding connectivity. While this will help us connecting everything again at $event when we&#039;re sleepdeprived\drunk\explaining to newbies, this has the added effect of making it all look slightly more cooler than just a spaghetti of all boring white cables or worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made-up without too much thought. This is specifically intended to start a discussion so we can work toward an agreement, it is not intended to be a unilateral decision. Example; You&#039;ll notice 0 thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional &amp;quot;milliways-control-node-1&amp;quot; so it&#039;s clear what you&#039;re doing, but externally, the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a DiscWorld town or something. That way, if we do ever show this off at events, we can do cool shit with light up tags, make stuff funny and recognizable and cool to talk about - it also makes it way more relatable to market for when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server name and potentially it&#039;s serial so we don&#039;t get confused internally (if we want to use serials, there&#039;s somethign to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
***&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (epoxy) as 2025.2 (flamingo) has an undocumented breaking change making installation of keystone impossible. We have registered a bug with the documentation on launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following installation guide recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts, ``keystone-wsgi-admin`` and ``keystone-wsgi-public``.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround, use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled; &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide attempts to make you configure options for the networking service, which you have not installed yet, because the guide makes you install this service first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and offers more information which directly contradicts the guide.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide asks for the name of the bridge connected to the underlying provider physical network, but at that point you have not yet created this bridge&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard will only load if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
&lt;br /&gt;
==== Compute ====&lt;br /&gt;
* [https://docs.openstack.org/nova/2025.1/install/ Compute Service]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/compute-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[api_database]&amp;lt;/code&amp;gt; will result in errors. &lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[database]&amp;lt;/code&amp;gt; will result in errors.&lt;br /&gt;
**** Yes, that&#039;s basically the same documentation error twice, but for two different options, on the same page. &lt;br /&gt;
* [https://docs.openstack.org/neutron/2025.1/install/ Networking Service]&lt;br /&gt;
* Extremely weird behaviour when linking up with the control node: the placement service decided the password was wrong (it wasn&#039;t), and the nova scheduler and conductor wouldn&#039;t start. The &amp;quot;fix&amp;quot; was basically patience; no changes were made between it not working and it working, and I have no idea why it now works.&lt;br /&gt;
* Completed 2025-01-21&lt;br /&gt;
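For reference, a sketch of the database-config pitfall described above. Assumption: the Ubuntu packages ship sqlite &amp;lt;code&amp;gt;connection&amp;lt;/code&amp;gt; defaults in both sections; &amp;lt;code&amp;gt;NOVA_DBPASS&amp;lt;/code&amp;gt; is the guide&#039;s own placeholder.&lt;br /&gt;

```ini
# nova.conf sketch: both sections must point at the real database, or the
# packaged defaults cause the errors described above.
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
```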
&lt;br /&gt;
==== Overall feelings ====&lt;br /&gt;
* Clunky and convoluted.&lt;br /&gt;
** The MVP environment with 1 control node and 1 compute node feels about as capable as a 4-bay enthusiast NAS running proxmox.&lt;br /&gt;
*** Counting only the time spent on the control and compute nodes, I reckon it took me about 18hrs to do something I did in 20mins on my TerraMaster.&lt;br /&gt;
* Documentation is unacceptably bad.&lt;br /&gt;
** Not kidding, there is better documentation on running automated piracy software.&lt;br /&gt;
*** Heck, there&#039;s better documentation written by Indian scam farms to trick your family members into running TeamViewer for Play Store gift-card scams.&lt;br /&gt;
** There can be, and should be, no excuse at all for the level of sheer incompetence displayed in these docs.&lt;br /&gt;
** OpenStack&#039;s documentation is abysmal and the responsible parties deserve to be held accountable for this.&lt;br /&gt;
* Super bad initial impression, like, I would not, could not even, recommend this in any professional capacity.&lt;br /&gt;
* I dearly hope that as this scales it outstrips my TerraMaster fast; otherwise I wonder if it is worth the time, effort, fuel, money and electricity I have pumped into it.&lt;br /&gt;
** I fully realize OpenStack and Proxmox are not the same: Nova is roughly comparable to Proxmox, and all the other services and modules are extra functionality that Proxmox does not aim to offer. Obviously I am exaggerating out of frustration and emotion, but this MVP is basically an externally managed Proxmox, and taken at face value I think I&#039;d recommend Proxmox over this. But! Let&#039;s see how it evolves and scales.&lt;br /&gt;
* Tempted to try SUSE&#039;s [https://harvesterhci.io/ Harvester HCI]&lt;br /&gt;
** Although perhaps this isn&#039;t comparable? &lt;br /&gt;
*** I&#039;m getting mixed messages in both writeups and user experience. I&#039;m worried this means the documentation is just as bad and people simply don&#039;t know what Harvester actually is or can do.&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6995</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6995"/>
		<updated>2026-01-22T11:38:21Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Overall feelings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers services similar to AWS / Azure / GC.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items add up unexpectedly in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the short-term future I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, we should seriously think about making our lives easier by color-coding connectivity now. This will help us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, and it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. Example: you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional &amp;quot;milliways-control-node-1&amp;quot; so it&#039;s clear what you&#039;re doing, but externally, the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a DiscWorld town or something. That way, if we do ever show this off at events, we can do cool shit with light up tags, make stuff funny and recognizable and cool to talk about - it also makes it way more relatable to market for when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the asset names to the actual server names and potentially their serials so we don&#039;t get confused internally (if we want to use serials; there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy), as 2025.2 (Flamingo) has an undocumented breaking change that makes installation of Keystone impossible. We have filed a bug against the documentation on Launchpad.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following the installation guide&#039;s recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
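A minimal sketch of how we generate those secrets (the service list and the variable naming are ours, not from the guide):&lt;br /&gt;

```shell
# One 20-hex-character secret per service, as the install guide recommends.
for svc in keystone glance placement nova neutron horizon; do
  pw=$(openssl rand -hex 10)   # 10 random bytes printed as 20 hex chars
  printf '%s_PASS=%s\n' "$svc" "$pw"
done
```

Each line then goes into the password store.&lt;br /&gt;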
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
*** [https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround, use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
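A quick sanity check we could have used to spot the 2025.2 breakage; a sketch, assuming the scripts would be on the PATH when the package still provided them:&lt;br /&gt;

```shell
# Report whether the WSGI scripts Apache still references actually exist.
out=$(for s in keystone-wsgi-public keystone-wsgi-admin; do
  if command -v "$s" 1>/dev/null 2>/dev/null; then
    echo "$s present"
  else
    echo "$s missing"
  fi
done)
echo "$out"
```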
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement, now fulfilled: &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
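The verify step boils down to whether your account can read the config at all; a sketch of that check (after the group fix, a re-login is needed for it to apply):&lt;br /&gt;

```shell
# If this file is unreadable, the verify step fails exactly as described
# above; membership in the "placement" group is the undocumented fix.
conf=/etc/placement/placement.conf
if [ -r "$conf" ]; then
  status="readable, verify step should work"
else
  status="not readable, add your user to the placement group first"
fi
echo "$status"
```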
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide asks you to configure options for the networking service even though it is not installed yet; the guide has you install Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and offers more information which directly contradicts the guide.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide asks for the name of the bridge connected to the underlying provider physical network, but at that point you have not yet created this bridge&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
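Side by side, the two instructions that contradict each other (both fragments paraphrase the linked pages):&lt;br /&gt;

```ini
# neutron.conf per the install guide:
[DEFAULT]
service_plugins = router

# neutron.conf per the OVS provider-network deployment example, which says
# provider networks do not require any service plug-ins:
[DEFAULT]
service_plugins =
```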
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard will only load if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
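For the record, the two local_settings.py values that made it load for us (the file lives at /etc/openstack-dashboard/local_settings.py on Ubuntu; &amp;lt;code&amp;gt;COMPRESS_ENABLED&amp;lt;/code&amp;gt; is our guess at the relevant compressor setting name):&lt;br /&gt;

```python
# Horizon would only render with both of these set; treat as a workaround,
# not a recommendation (DEBUG = True is unsuitable for production).
DEBUG = True
COMPRESS_ENABLED = True
```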
&lt;br /&gt;
==== Compute ====&lt;br /&gt;
* [https://docs.openstack.org/nova/2025.1/install/ Compute Service]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/compute-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[api_database]&amp;lt;/code&amp;gt; will result in errors. &lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[database]&amp;lt;/code&amp;gt; will result in errors.&lt;br /&gt;
**** Yes, that&#039;s basically the same documentation error twice, but for two different options, on the same page. &lt;br /&gt;
* [https://docs.openstack.org/neutron/2025.1/install/ Networking Service]&lt;br /&gt;
* Extremely weird behaviour when linking up with the control node: the placement service decided the password was wrong (it wasn&#039;t), and the nova scheduler and conductor wouldn&#039;t start. The &amp;quot;fix&amp;quot; was basically patience; no changes were made between it not working and it working, and I have no idea why it now works.&lt;br /&gt;
* Completed 2025-01-21&lt;br /&gt;
&lt;br /&gt;
==== Overall feelings ====&lt;br /&gt;
* Clunky and convoluted.&lt;br /&gt;
** The MVP environment with 1 control node and 1 compute node feels about as capable as a 4-bay enthusiast NAS running proxmox.&lt;br /&gt;
*** Counting only the time spent on the control and compute nodes, I reckon it took me about 18hrs to do something I did in 20mins on my TerraMaster.&lt;br /&gt;
* Documentation is unacceptably bad.&lt;br /&gt;
** Not kidding, there is better documentation on running automated piracy software.&lt;br /&gt;
*** Heck, there&#039;s better documentation written by Indian scam farms to trick your family members into running TeamViewer for Play Store gift-card scams.&lt;br /&gt;
** There can be, and should be, no excuse at all for the level of sheer incompetence displayed in these docs.&lt;br /&gt;
** OpenStack&#039;s documentation is abysmal and the responsible parties deserve to be held accountable for this.&lt;br /&gt;
* Super bad initial impression, like, I would not, could not even, recommend this in any professional capacity.&lt;br /&gt;
* I dearly hope that as this scales it outstrips my TerraMaster fast; otherwise I wonder if it is worth the time, effort, fuel, money and electricity I have pumped into it.&lt;br /&gt;
** I fully realize OpenStack and Proxmox are not the same: Nova is roughly comparable to Proxmox, and all the other services and modules are extra functionality that Proxmox does not aim to offer. Obviously I am exaggerating out of frustration and emotion, but this MVP is basically an externally managed Proxmox, and taken at face value I think I&#039;d recommend Proxmox over this. But! Let&#039;s see how it evolves and scales.&lt;br /&gt;
* Tempted to try SUSE&#039;s [https://harvesterhci.io/ Harvester HCI]&lt;br /&gt;
** Although perhaps this isn&#039;t comparable? &lt;br /&gt;
*** I&#039;m getting mixed messages in both writeups and user experience. I&#039;m worried this means the documentation is just as bad and people simply don&#039;t know what Harvester actually is or can do.&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6994</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6994"/>
		<updated>2026-01-22T11:31:03Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Overall feelings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers services similar to AWS / Azure / GC.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items add up unexpectedly in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 NB: this is quick &#039;n&#039; dirty, written as I go along.&lt;br /&gt;
 In the short term I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
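 A quick sketch of how the Milliways Core addressing above could be brought up with iproute2. The parent NIC name &amp;lt;code&amp;gt;eno1&amp;lt;/code&amp;gt; is an assumption; in practice this belongs in persistent config (netplan or similar), not ad-hoc commands.&lt;br /&gt;

```shell
# Assumed parent NIC "eno1"; VLAN IDs and addresses from the plan above.
ip link add link eno1 name eno1.5  type vlan id 5    # Mgmt / OOB
ip link add link eno1 name eno1.10 type vlan id 10   # Prod
ip link add link eno1 name eno1.42 type vlan id 42   # Interconnect
ip addr add 10.42.1.1/24  dev eno1.5    # Milliways Core, Vlan 5
ip addr add 10.42.10.1/24 dev eno1.10   # Milliways Core, Vlan 10
ip addr add 10.42.0.2/30  dev eno1.42   # Milliways Core, Vlan 42
ip link set eno1.5 up; ip link set eno1.10 up; ip link set eno1.42 up
```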
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are early ambitions to physically take this environment to events, we should seriously think about making our lives easier by color-coding connectivity from the start. Not only will this help us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. For example, you&#039;ll notice zero thought was put into whether we run fiber ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to set up 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional, like &amp;quot;milliways-control-node-1&amp;quot;, so it&#039;s clear what you&#039;re doing, but externally the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes marketing way more relatable when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server name and potentially its serial so we don&#039;t get confused internally (if we want to use serials at all; there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy), as 2025.2 (Flamingo) has an undocumented breaking change that makes installation of Keystone impossible. We have filed a documentation bug on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following installation guide recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
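 For example, one secret per service can be generated in one go; the service list below is illustrative, not from the guide:&lt;br /&gt;

```shell
# openssl rand -hex 10 emits 10 random bytes as 20 hex characters.
# Generate one secret per service and paste the results into the
# password store; the service names here are examples.
for svc in keystone glance placement nova neutron horizon; do
  printf '%s: %s\n' "$svc" "$(openssl rand -hex 10)"
done
```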
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround, use 2025.1 instead&lt;br /&gt;
** Completed 2026-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2026-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled; &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2026-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide attempts to make you configure options for the Networking service, which you have not installed yet, because the guide has you install Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2026-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and offers more information which directly contradicts the guide.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide attempts to make you configure the name of the bridge connected to the underlying provider physical network, but you have not yet created this bridge when the guide asks for the name.&lt;br /&gt;
** Completed 2026-01-21&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard will only load if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2026-01-21&lt;br /&gt;
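 The undocumented Placement group requirement above, sketched as shell; &amp;lt;code&amp;gt;$USER&amp;lt;/code&amp;gt; is an assumption for whichever account runs the verification step:&lt;br /&gt;

```shell
# Add the verifying account to the "placement" group so it can read
# /etc/placement/placement.conf (see the storyboard issue linked above).
sudo usermod -aG placement "$USER"
# Group changes apply on next login; sg runs one command with the new
# group immediately:
sg placement -c "placement-status upgrade check"
```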
&lt;br /&gt;
==== Compute ====&lt;br /&gt;
* [https://docs.openstack.org/nova/2025.1/install/ Compute Service]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/compute-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[api_database]&amp;lt;/code&amp;gt; will result in errors. &lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[database]&amp;lt;/code&amp;gt; will result in errors.&lt;br /&gt;
**** Yes, that&#039;s basically the same documentation error twice, but for two different options, on the same page. &lt;br /&gt;
* [https://docs.openstack.org/neutron/2025.1/install/ Networking Service]&lt;br /&gt;
* Extremely weird behaviour when linking up with the control node: the placement service decided the password was wrong (it wasn&#039;t) and the nova scheduler and conductor wouldn&#039;t start. The &amp;quot;fix&amp;quot; was basically patience; no changes were made between it not working and it working, and I have no idea why it now works.&lt;br /&gt;
* Completed 2026-01-21&lt;br /&gt;
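 A hedged sketch of the guide&#039;s verification steps for the compute link-up, run on the controller; &amp;lt;code&amp;gt;admin-openrc&amp;lt;/code&amp;gt; is the credentials file the guide has you create:&lt;br /&gt;

```shell
# Source admin credentials, then confirm the compute node registered.
. admin-openrc
openstack compute service list --service nova-compute
# Map new compute hosts into the cell database (guide command, run as nova):
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
# Sanity-check the cells/placement wiring:
nova-status upgrade check
```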
&lt;br /&gt;
==== Overall feelings ====&lt;br /&gt;
* Clunky and convoluted.&lt;br /&gt;
** The MVP environment with 1 control node and 1 compute node feels about as capable as a 4-bay enthusiast NAS running Proxmox.&lt;br /&gt;
*** Counting only the time spent on the control and compute nodes, I reckon it took me about 18hrs to do something I did in 20mins on my TerraMaster.&lt;br /&gt;
* Documentation is unacceptably bad.&lt;br /&gt;
** Not kidding, there is better documentation on running automated piracy software.&lt;br /&gt;
*** Heck, there&#039;s better documentation written by Indian scam farms to trick your family members into running TeamViewer for Play Store gift-card scams.&lt;br /&gt;
** There can and should be no excuse at all for the level of sheer incompetence displayed in these docs.&lt;br /&gt;
** OpenStack&#039;s documentation is abysmal and the responsible parties deserve to be held accountable for this.&lt;br /&gt;
* Super bad initial impression, like, I would not, could not even, recommend this in any professional capacity.&lt;br /&gt;
* I am dearly hoping that as it scales, this thing outstrips my TerraMaster fast, else I wonder if this is worth the time, effort, fuel, money and electricity I have pumped into it.&lt;br /&gt;
** I fully realize OpenStack and Proxmox are not the same; Nova is basically comparable to Proxmox, and all the other services and modules are extra functionality that Proxmox does not aim to offer. Obviously I am exaggerating out of frustration and emotion, but this MVP is basically an externally managed Proxmox, and taken at face value I think I&#039;d recommend Proxmox over this. But! Let&#039;s see how this evolves and scales.&lt;br /&gt;
* Tempted to try SUSE&#039;s [https://harvesterhci.io/ Harvester HCI]&lt;br /&gt;
** Although perhaps this isn&#039;t comparable&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6993</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6993"/>
		<updated>2026-01-22T11:03:49Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Overall feelings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running on it). &lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine. &lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework that offers services similar to AWS / Azure / Google Cloud. &lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
*** We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we figure out whether to add more 12T or 10T drives or keep it as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems to accept 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as an activation key for the iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s of course sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 NB: this is quick &#039;n&#039; dirty, written as I go along.&lt;br /&gt;
 In the short term I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
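 A quick sketch of how the Milliways Core addressing above could be brought up with iproute2. The parent NIC name &amp;lt;code&amp;gt;eno1&amp;lt;/code&amp;gt; is an assumption; in practice this belongs in persistent config (netplan or similar), not ad-hoc commands.&lt;br /&gt;

```shell
# Assumed parent NIC "eno1"; VLAN IDs and addresses from the plan above.
ip link add link eno1 name eno1.5  type vlan id 5    # Mgmt / OOB
ip link add link eno1 name eno1.10 type vlan id 10   # Prod
ip link add link eno1 name eno1.42 type vlan id 42   # Interconnect
ip addr add 10.42.1.1/24  dev eno1.5    # Milliways Core, Vlan 5
ip addr add 10.42.10.1/24 dev eno1.10   # Milliways Core, Vlan 10
ip addr add 10.42.0.2/30  dev eno1.42   # Milliways Core, Vlan 42
ip link set eno1.5 up; ip link set eno1.10 up; ip link set eno1.42 up
```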
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are early ambitions to physically take this environment to events, we should seriously think about making our lives easier by color-coding connectivity from the start. Not only will this help us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. For example, you&#039;ll notice zero thought was put into whether we run fiber ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to set up 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional, like &amp;quot;milliways-control-node-1&amp;quot;, so it&#039;s clear what you&#039;re doing, but externally the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes marketing way more relatable when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server name and potentially its serial so we don&#039;t get confused internally (if we want to use serials at all; there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy), as 2025.2 (Flamingo) has an undocumented breaking change that makes installation of Keystone impossible. We have filed a documentation bug on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following installation guide recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
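 For example, one secret per service can be generated in one go; the service list below is illustrative, not from the guide:&lt;br /&gt;

```shell
# openssl rand -hex 10 emits 10 random bytes as 20 hex characters.
# Generate one secret per service and paste the results into the
# password store; the service names here are examples.
for svc in keystone glance placement nova neutron horizon; do
  printf '%s: %s\n' "$svc" "$(openssl rand -hex 10)"
done
```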
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround, use 2025.1 instead&lt;br /&gt;
** Completed 2026-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2026-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled; &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2026-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide attempts to make you configure options for the Networking service, which you have not installed yet, because the guide has you install Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2026-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and offers more information which directly contradicts the guide.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide attempts to make you configure the name of the bridge connected to the underlying provider physical network, but you have not yet created this bridge when the guide asks for the name.&lt;br /&gt;
** Completed 2026-01-21&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard will only load if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2026-01-21&lt;br /&gt;
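 The undocumented Placement group requirement above, sketched as shell; &amp;lt;code&amp;gt;$USER&amp;lt;/code&amp;gt; is an assumption for whichever account runs the verification step:&lt;br /&gt;

```shell
# Add the verifying account to the "placement" group so it can read
# /etc/placement/placement.conf (see the storyboard issue linked above).
sudo usermod -aG placement "$USER"
# Group changes apply on next login; sg runs one command with the new
# group immediately:
sg placement -c "placement-status upgrade check"
```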
&lt;br /&gt;
==== Compute ====&lt;br /&gt;
* [https://docs.openstack.org/nova/2025.1/install/ Compute Service]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/compute-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[api_database]&amp;lt;/code&amp;gt; will result in errors. &lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[database]&amp;lt;/code&amp;gt; will result in errors.&lt;br /&gt;
**** Yes, that&#039;s basically the same documentation error twice, but for two different options, on the same page. &lt;br /&gt;
* [https://docs.openstack.org/neutron/2025.1/install/ Networking Service]&lt;br /&gt;
* Extremely weird behaviour when linking up with the control node: the placement service decided the password was wrong (it wasn&#039;t) and the nova scheduler and conductor wouldn&#039;t start. The &amp;quot;fix&amp;quot; was basically patience; no changes were made between it not working and it working, and I have no idea why it now works.&lt;br /&gt;
* Completed 2025-01-21&lt;br /&gt;
&lt;br /&gt;
==== Overall feelings ====&lt;br /&gt;
* Clunky and convoluted.&lt;br /&gt;
** The MVP environment with 1 control node and 1 compute node feels about as capable as a 4-bay enthusiast NAS running proxmox.&lt;br /&gt;
*** If I count only the time spent on the control and compute nodes, I reckon it took me about 18hrs to do something I did in 20mins on my terramaster.&lt;br /&gt;
* Documentation is unacceptably bad.&lt;br /&gt;
** Not kidding, there is better documentation on running automated piracy software.&lt;br /&gt;
*** Heck, there&#039;s better documentation written by Indian scam farms tricking your family members into running TeamViewer for Play Store gift-card scams.&lt;br /&gt;
** There can and should be no excuse at all for the level of sheer incompetence displayed in these docs.&lt;br /&gt;
** OpenStack&#039;s documentation is abysmal and the responsible parties deserve to be held accountable for this.&lt;br /&gt;
* Super bad initial impression, like, I would not, could not even, recommend this in any professional capacity.&lt;br /&gt;
* I am dearly hoping that with scalability this thing outstrips my terramaster fast, otherwise I wonder whether it is worth the time, effort, fuel, money and electricity I have pumped into it.&lt;br /&gt;
* Tempted to try SUSE&#039;s [https://harvesterhci.io/ Harvester HCI]&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6992</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6992"/>
		<updated>2026-01-21T21:52:25Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Overall feelings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running services on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers AWS / Azure / GC-like services.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) plus a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the near future I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
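The addressing plan above can be sanity-checked programmatically. A minimal sketch with Python&#039;s standard ipaddress module; the VLAN IDs and subnets are copied from the list above, nothing else is assumed:&lt;br /&gt;

```python
# Sanity check of the VLAN plan: every subnet must fall inside the
# 10.42.0.0/16 supernet, and no two subnets may overlap.
import ipaddress
from itertools import combinations

supernet = ipaddress.ip_network("10.42.0.0/16")
vlan_subnets = {
    42: ipaddress.ip_network("10.42.0.0/30"),   # Interconnect
    5:  ipaddress.ip_network("10.42.1.0/24"),   # Mgmt / OOB
    10: ipaddress.ip_network("10.42.10.0/24"),  # Prod
}

assert all(net.subnet_of(supernet) for net in vlan_subnets.values())
assert not any(a.overlaps(b) for a, b in combinations(vlan_subnets.values(), 2))
print("VLAN plan is consistent")
```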
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should seriously think about making our lives easier by color-coding connectivity now. While this will help us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring all-white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. For example, you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional (&amp;quot;milliways-control-node-1&amp;quot;) so it&#039;s clear what you&#039;re doing, but externally the asset tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes marketing way more relatable when asking for donations: &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;.&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the asset names to the actual server name and potentially its serial, so we don&#039;t get confused internally (if we want to use serials, there&#039;s something to be said for not using serials here).&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (epoxy), as 2025.2 (flamingo) has an undocumented breaking change that makes installation of keystone impossible. We have filed a documentation bug on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following the installation guide&#039;s recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
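For completeness, the same recommendation sketched in Python (standard library only); this is an equivalent, not what the guide prescribes:&lt;br /&gt;

```python
# Rough equivalent of "openssl rand -hex 10": 10 random bytes rendered
# as 20 lowercase hex characters, i.e. 80 bits of entropy per password.
import secrets

password = secrets.token_hex(10)
print(password)
```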
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts, ``keystone-wsgi-admin`` and ``keystone-wsgi-public``.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround, use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled; &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide has you configure options for the networking service, which you have not installed yet, because the guide has you install Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
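The &amp;lt;code&amp;gt;log_dir&amp;lt;/code&amp;gt; removal the guide demands can be scripted rather than hand-edited. A minimal sketch with Python&#039;s configparser on a hypothetical nova.conf fragment (the option values here are illustrative, not a full config):&lt;br /&gt;

```python
# Drop log_dir from [DEFAULT], as the install guide asks, without
# touching anything else. The config fragment below is made up.
import configparser
import io

sample = (
    "[DEFAULT]\n"
    "log_dir = /var/log/nova\n"
    "my_ip = 10.42.10.3\n"
)

cfg = configparser.ConfigParser()
cfg.read_string(sample)
cfg.remove_option("DEFAULT", "log_dir")

buf = io.StringIO()
cfg.write(buf)
cleaned = buf.getvalue()
print(cleaned)
```

In practice you would read and rewrite /etc/nova/nova.conf in place; ini-editing tools like crudini do the same job from a shell.&lt;br /&gt;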
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and offers more information which directly contradicts the guide.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide attempts to make you configure the name of the bridge connected to the underlying provider physical network, but you have not yet created this bridge when the guide asks you for the name.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard will only load if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
&lt;br /&gt;
==== Compute ====&lt;br /&gt;
* [https://docs.openstack.org/nova/2025.1/install/ Compute Service]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/compute-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[api_database]&amp;lt;/code&amp;gt; will result in errors. &lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[database]&amp;lt;/code&amp;gt; will result in errors.&lt;br /&gt;
**** Yes, that&#039;s basically the same documentation error twice, but for two different options, on the same page. &lt;br /&gt;
* [https://docs.openstack.org/neutron/2025.1/install/ Networking Service]&lt;br /&gt;
* Extremely weird behaviour when linking up with the control node: the placement service decided the password was wrong (it wasn&#039;t), and the nova scheduler and conductor wouldn&#039;t start. The &amp;quot;fix&amp;quot; was basically patience; no changes were made between it not working and it working, and I have no idea why it now works.&lt;br /&gt;
* Completed 2025-01-21&lt;br /&gt;
&lt;br /&gt;
==== Overall feelings ====&lt;br /&gt;
* Clunky and convoluted.&lt;br /&gt;
** The MVP environment with 1 control node and 1 compute node feels about as capable as a 4-bay enthusiast NAS running proxmox.&lt;br /&gt;
*** If I count only the time spent on the control and compute nodes, I reckon it took me about 18hrs to do something I did in 20mins on my terramaster.&lt;br /&gt;
* Documentation is unacceptably bad.&lt;br /&gt;
** Not kidding, there is better documentation on running automated piracy software.&lt;br /&gt;
*** Heck, there&#039;s better documentation written by Indian scam farms tricking your family members into running TeamViewer for Play Store gift-card scams.&lt;br /&gt;
** There can and should be no excuse at all for the level of sheer incompetence displayed in these docs.&lt;br /&gt;
** OpenStack&#039;s documentation is abysmal and the responsible parties deserve to be held accountable for this.&lt;br /&gt;
* Super bad initial impression, like, I would not, could not even, recommend this in any professional capacity.&lt;br /&gt;
* I am dearly hoping that with scalability this thing outstrips my terramaster fast, otherwise I wonder whether it is worth the time, effort, fuel, money and electricity I have pumped into it.&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6991</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6991"/>
		<updated>2026-01-21T21:48:31Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Compute */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running services on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers AWS / Azure / GC-like services.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) plus a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the near future I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should seriously think about making our lives easier by color-coding connectivity now. While this will help us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring all-white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. For example, you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional (&amp;quot;milliways-control-node-1&amp;quot;) so it&#039;s clear what you&#039;re doing, but externally the asset tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes marketing way more relatable when asking for donations: &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;.&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the asset names to the actual server name and potentially its serial, so we don&#039;t get confused internally (if we want to use serials, there&#039;s something to be said for not using serials here).&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy), as 2025.2 (Flamingo) has an undocumented breaking change that makes installation of Keystone impossible. We have filed a documentation bug on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following installation guide recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
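The password step above can be sketched like this; a minimal sketch, and note the service list and the loop are our own illustration — only the openssl invocation itself is the guide's recommendation.

```shell
# One random 20-hex-char secret per service, per the install guide's
# recommendation. The service list here is our own assumption; in real
# use, pipe each secret into your password store instead of echoing it.
for svc in keystone glance placement nova neutron; do
  pw=$(openssl rand -hex 10)
  echo "${svc}: ${pw}"
done
```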
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround: use 2025.1 instead&lt;br /&gt;
** Completed 2026-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** The config file&#039;s options are organized alphabetically; the guide is not.&lt;br /&gt;
** Completed 2026-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled: &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2026-01-20&lt;br /&gt;
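The undocumented group requirement above can be sanity-checked before blaming the service; a minimal sketch, assuming the account running the verify step is your current shell user (the guide never names the account, and the usermod line shown is illustrative, not from the docs).

```shell
# The user running 'placement-status upgrade check' must be able to read
# /etc/placement/placement.conf, i.e. belong to the 'placement' group.
# Check membership of the current user first:
if id -nG | tr ' ' '\n' | grep -qx placement; then
  echo "already in placement group"
else
  # fix as root, then log out and back in to pick up the new group:
  #   usermod -aG placement YOUR_USER
  echo "not in placement group"
fi
```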
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** The config file&#039;s options are organized alphabetically; the guide is not.&lt;br /&gt;
*** The guide has you configure options for the Networking service, which you have not installed yet, because the guide has you install Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2026-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** The config file&#039;s options are organized alphabetically; the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and links to further information that directly contradicts the guide itself.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** The config file&#039;s options are organized alphabetically; the guide is not.&lt;br /&gt;
**** The guide has you configure the name of the bridge connected to the underlying provider physical network, but you have not yet created this bridge when the guide asks for the name.&lt;br /&gt;
** Completed 2026-01-21&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard only loads if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2026-01-21&lt;br /&gt;
&lt;br /&gt;
==== Compute ====&lt;br /&gt;
* [https://docs.openstack.org/nova/2025.1/install/ Compute Service]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/compute-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[api_database]&amp;lt;/code&amp;gt; will result in errors. &lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[database]&amp;lt;/code&amp;gt; will result in errors.&lt;br /&gt;
**** Yes, that&#039;s basically the same documentation error twice, but for two different options, on the same page. &lt;br /&gt;
* [https://docs.openstack.org/neutron/2025.1/install/ Networking Service]&lt;br /&gt;
* Extremely weird behavior when linking up with the control node: the Placement service decided the password was wrong (it wasn&#039;t), and the Nova scheduler and conductor wouldn&#039;t start. The &amp;quot;fix&amp;quot; is basically patience; no changes were made between it not working and it working, and I have no idea why it now works.&lt;br /&gt;
* Completed 2026-01-21&lt;br /&gt;
&lt;br /&gt;
==== Overall feelings ====&lt;br /&gt;
* Clunky and convoluted.&lt;br /&gt;
** The MVP environment with 1 control node and 1 compute node feels about as capable as a 4-bay enthusiast NAS running Proxmox.&lt;br /&gt;
* Documentation is unacceptably bad.&lt;br /&gt;
** Not kidding, there is better documentation on running automated piracy software.&lt;br /&gt;
*** Heck, there&#039;s better documentation written by Indian scam farms tricking your family members into running TeamViewer for Play Store gift-card scams.&lt;br /&gt;
** There can and should be no excuse at all for the level of sheer incompetence displayed in these docs.&lt;br /&gt;
** OpenStack&#039;s documentation is abysmal and the responsible parties deserve to be held accountable for this.&lt;br /&gt;
* Super bad initial impression, like, I would not, could not even, recommend this in any professional capacity.&lt;br /&gt;
* I am dearly hoping that with scalability this thing outstrips my TerraMaster fast, or else I wonder whether it was worth the time, effort, fuel, money, and electricity I have pumped into it.&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6990</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6990"/>
		<updated>2026-01-21T20:07:48Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Compute */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around Milliways experience with running it (and running on it). &lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine. &lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers AWS / Azure / GC-like services. &lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term, an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cage nuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives for a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we figure out whether to add more 12T or 10T drives or keep it as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items add up unexpectedly in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the short-term future I&#039;d much rather replace this adhoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should seriously think about making our lives easier by color-coding connectivity now. While this will help us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without much thought. It is specifically intended to start a discussion so we can work toward an agreement; it is not intended to be a unilateral decision. Example: you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 We can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot;, etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yeah, we need to set up 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional (&amp;quot;milliways-control-node-1&amp;quot;) so it&#039;s clear what you&#039;re doing, but externally the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make it funny, recognizable, and cool to talk about. It also makes asking for donations way more relatable; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;.&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the asset names to the actual server names and potentially their serials so we don&#039;t get confused internally (if we want to use serials at all; there&#039;s something to be said for not using serials here).&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy), as 2025.2 (Flamingo) has an undocumented breaking change that makes installation of Keystone impossible. We have filed a documentation bug on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following installation guide recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
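The password step above can be sketched like this; a minimal sketch, and note the service list and the loop are our own illustration — only the openssl invocation itself is the guide's recommendation.

```shell
# One random 20-hex-char secret per service, per the install guide's
# recommendation. The service list here is our own assumption; in real
# use, pipe each secret into your password store instead of echoing it.
for svc in keystone glance placement nova neutron; do
  pw=$(openssl rand -hex 10)
  echo "${svc}: ${pw}"
done
```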
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround: use 2025.1 instead&lt;br /&gt;
** Completed 2026-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** The config file&#039;s options are organized alphabetically; the guide is not.&lt;br /&gt;
** Completed 2026-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled: &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2026-01-20&lt;br /&gt;
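The undocumented group requirement above can be sanity-checked before blaming the service; a minimal sketch, assuming the account running the verify step is your current shell user (the guide never names the account, and the usermod line shown is illustrative, not from the docs).

```shell
# The user running 'placement-status upgrade check' must be able to read
# /etc/placement/placement.conf, i.e. belong to the 'placement' group.
# Check membership of the current user first:
if id -nG | tr ' ' '\n' | grep -qx placement; then
  echo "already in placement group"
else
  # fix as root, then log out and back in to pick up the new group:
  #   usermod -aG placement YOUR_USER
  echo "not in placement group"
fi
```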
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** The config file&#039;s options are organized alphabetically; the guide is not.&lt;br /&gt;
*** The guide has you configure options for the Networking service, which you have not installed yet, because the guide has you install Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2026-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** The config file&#039;s options are organized alphabetically; the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and links to further information that directly contradicts the guide itself.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** The config file&#039;s options are organized alphabetically; the guide is not.&lt;br /&gt;
**** The guide has you configure the name of the bridge connected to the underlying provider physical network, but you have not yet created this bridge when the guide asks for the name.&lt;br /&gt;
** Completed 2026-01-21&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard only loads if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2026-01-21&lt;br /&gt;
&lt;br /&gt;
==== Compute ====&lt;br /&gt;
* [https://docs.openstack.org/nova/2025.1/install/ Compute Service]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/compute-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[api_database]&amp;lt;/code&amp;gt; will result in errors. &lt;br /&gt;
*** The guide entirely fails to mention that keeping default config in &amp;lt;code&amp;gt;[database]&amp;lt;/code&amp;gt; will result in errors.&lt;br /&gt;
**** Yes, that&#039;s basically the same documentation error twice, but for two different options, on the same page. &lt;br /&gt;
* [https://docs.openstack.org/neutron/2025.1/install/ Networking Service]&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6989</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6989"/>
		<updated>2026-01-21T17:24:52Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Controller */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around Milliways experience with running it (and running on it). &lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine. &lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers AWS / Azure / GC-like services. &lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term, an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cage nuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives for a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we figure out whether to add more 12T or 10T drives or keep it as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items add up unexpectedly in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the near future I&#039;d much rather replace this ad hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should seriously think about making our lives easier by color-coding connectivity now. While this will help us reconnect everything at $event when we&#039;re sleepdeprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. For example: you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional &amp;quot;milliways-control-node-1&amp;quot; so it&#039;s clear what you&#039;re doing, but externally, the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags, make stuff funny, recognizable and cool to talk about - it also makes marketing way more relatable when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server name and potentially its serial so we don&#039;t get confused internally (if we want to use serials, there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy) as 2025.2 (Flamingo) has an undocumented breaking change making installation of Keystone impossible. We have filed a bug against the documentation on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following the installation guide&#039;s recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
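As a concrete sketch of that recommendation (the service list below is our own assumption, not from the guide; adjust it to the services you actually deploy):&lt;br /&gt;

```shell
# Generate one secret per service account the install guide creates.
# The list of services here is our own guess; adjust to your deployment.
for svc in keystone glance placement nova neutron; do
  printf '%s_PASS=%s\n' "$(printf '%s' "$svc" | tr 'a-z' 'A-Z')" "$(openssl rand -hex 10)"
done
```

Pipe the output straight into the password store rather than leaving it in shell history.&lt;br /&gt;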
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround: use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled; &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide has you configure options for the Networking service, which you have not installed yet, because the guide installs Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and links to more information which directly contradicts the guide.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide attempts to make you configure the name of the bridge connected to the underlying provider physical network, but you have not yet created this bridge when the guide asks you for the name.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard will only load if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
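Given how often the guides above touch config files out of alphabetical order, here is a tiny helper of our own (not from any OpenStack guide; the function name and example path are ours) to list a section's keys so edits can be kept in the file's own order:&lt;br /&gt;

```shell
# List the keys of one ini-style section, in file order.
# usage: section_keys FILE SECTION
section_keys() {
  awk -v s="[$2]" '
    $0 == s     { in_s = 1; next }  # found the wanted [section] header
    /^\[/       { in_s = 0 }        # any other header ends the section
    in_s && /=/ { sub(/[ \t]*=.*/, ""); print }
  ' "$1"
}
```

For example, &amp;lt;code&amp;gt;section_keys /etc/glance/glance-api.conf keystone_authtoken | sort -c&amp;lt;/code&amp;gt; complains if the keys have drifted out of alphabetical order.&lt;br /&gt;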
&lt;br /&gt;
==== Compute ====&lt;br /&gt;
* [https://docs.openstack.org/nova/2025.1/install/ Compute Service]&lt;br /&gt;
* [https://docs.openstack.org/neutron/2025.1/install/ Networking Service]&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6988</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6988"/>
		<updated>2026-01-21T16:10:12Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Controller */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something redis I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers AWS / Azure / GC-like services.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2.93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3.5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3.5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
*** We have more drives than bays, but not enough to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we figure out whether to add more 12T or 10T drives, or keep it as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2.4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe&lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems to accept 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as an activation key for an iLO Advanced license?&lt;br /&gt;
** without hard drives, but has 2.5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2.4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives, but has 2.5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: managed, rack-mountable PDU with CEE Red 16A/20A 400V input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the near future I&#039;d much rather replace this ad hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should seriously think about making our lives easier by color-coding connectivity now. While this will help us reconnect everything at $event when we&#039;re sleepdeprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. For example: you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional &amp;quot;milliways-control-node-1&amp;quot; so it&#039;s clear what you&#039;re doing, but externally, the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags, make stuff funny, recognizable and cool to talk about - it also makes marketing way more relatable when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server name and potentially its serial so we don&#039;t get confused internally (if we want to use serials, there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy) as 2025.2 (Flamingo) has an undocumented breaking change making installation of Keystone impossible. We have filed a bug against the documentation on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following the installation guide&#039;s recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround: use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled; &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide has you configure options for the Networking service, which you have not installed yet, because the guide installs Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and links to more information which directly contradicts the guide.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide attempts to make you configure the name of the bridge connected to the underlying provider physical network, but you have not yet created this bridge when the guide asks you for the name.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
* various Networking agents&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
** Extremely weird behavior: the Dashboard will only load if Debug is set to True and compression is turned on.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6987</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6987"/>
		<updated>2026-01-21T15:12:46Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Controller */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something redis I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers AWS / Azure / GC-like services.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2.93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3.5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3.5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively; a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the near future I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
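 For reference, one way to bring such an interface up without an address on a modern Ubuntu host is a netplan fragment like this sketch (the interface name &amp;lt;code&amp;gt;eno2&amp;lt;/code&amp;gt; is a placeholder, not our actual NIC name):&lt;br /&gt;

```yaml
# Sketch: manage the provider NIC but assign it no IP address,
# matching the guide's requirement for the provider interface.
network:
  version: 2
  ethernets:
    eno2:        # placeholder for the secondary interface
      dhcp4: false
      dhcp6: false
```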
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, we should seriously think about making our lives easier by color-coding connectivity now. Besides helping us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it also makes it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. Example: you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional, e.g. &amp;quot;milliways-control-node-1&amp;quot;, so it&#039;s clear what you&#039;re doing; externally, the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes it way more relatable when asking for donations: &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;.&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server names and potentially their serials so we don&#039;t get confused internally (if we want to use serials; there&#039;s something to be said for not using serials here).&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy) as 2025.2 (Flamingo) has an undocumented breaking change that makes installation of Keystone impossible. We have filed a documentation bug on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following installation guide recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
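 The command above, run once per service, yields a 20-character hex secret; a minimal sketch (the &amp;lt;code&amp;gt;secret&amp;lt;/code&amp;gt; variable name is illustrative):&lt;br /&gt;

```shell
# Generate one service password the way the install guide recommends:
# 10 random bytes, hex-encoded, i.e. a 20-character secret.
secret="$(openssl rand -hex 10)"
echo "$secret"
```

 Store each generated value in the password store under the service it belongs to.&lt;br /&gt;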
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts, ``keystone-wsgi-admin`` and ``keystone-wsgi-public``.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround: use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
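 The missing-scripts symptom can be confirmed directly on the controller; a quick sketch (script names taken from the release note above):&lt;br /&gt;

```shell
# Check whether the WSGI entry points Apache still references exist;
# on a 2025.2 install both come back missing.
status=""
for script in keystone-wsgi-public keystone-wsgi-admin; do
  if [ -x "/usr/bin/${script}" ]; then
    status="${status}${script}=present "
  else
    status="${status}${script}=missing "
  fi
done
echo "$status"
```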
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled; &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide has you configure options for the Networking service, which you have not installed yet, because the guide installs Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and links to further information which directly contradicts the guide itself.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide attempts to make you configure the name of the bridge connected to the underlying provider physical network, but you have not yet created this bridge when the guide asks you for the name.&lt;br /&gt;
** Completed 2025-01-21&lt;br /&gt;
* various Networking agents&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6986</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6986"/>
		<updated>2026-01-21T14:52:47Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Controller */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around Milliways experience with running it (and running things on it). &lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine. &lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework that offers services similar to AWS / Azure / GCP. &lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively; a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the near future I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, we should seriously think about making our lives easier by color-coding connectivity now. Besides helping us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it also makes it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. Example: you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional, e.g. &amp;quot;milliways-control-node-1&amp;quot;, so it&#039;s clear what you&#039;re doing; externally, the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes it way more relatable when asking for donations: &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;.&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server names and potentially their serials so we don&#039;t get confused internally (if we want to use serials; there&#039;s something to be said for not using serials here).&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy) as 2025.2 (Flamingo) has an undocumented breaking change that makes installation of Keystone impossible. We have filed a documentation bug on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following installation guide recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts, ``keystone-wsgi-admin`` and ``keystone-wsgi-public``.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround: use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled; &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide has you configure options for the Networking service, which you have not installed yet, because the guide installs Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and links to further information which directly contradicts the guide itself.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
*** Configuring openvswitch_agent.ini is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
**** The guide attempts to make you configure the name of the bridge connected to the underlying provider physical network, but you have not yet created this bridge when the guide asks you for the name.&lt;br /&gt;
* various Networking agents&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6985</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6985"/>
		<updated>2026-01-21T14:47:28Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Controller */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Someone something redis I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework that offers services similar to AWS / Azure / GC.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively; a &amp;quot;normal&amp;quot; Serverrack PDU (still strong prefer managed) + 16A/20A 400v -&amp;gt; 16A 230V transform&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the short-term future I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
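The plan above is simple enough to sanity-check mechanically; a short sketch using Python's stdlib ipaddress module (the dict just restates the table above, and plan_ok is our own helper name):

```python
# Sanity-check the addressing plan: every VLAN subnet must nest inside the
# supernet, and every assigned host address must fall inside its subnet.
import ipaddress

SUPERNET = ipaddress.ip_network("10.42.0.0/16")
PLAN = {
    "10.42.0.0/30":  ["10.42.0.1", "10.42.0.2"],                  # VLAN 42 interconnect
    "10.42.1.0/24":  ["10.42.1.1", "10.42.1.5", "10.42.1.6",
                     "10.42.1.7", "10.42.1.8"],                   # VLAN 5 mgmt/OOB
    "10.42.10.0/24": ["10.42.10.1", "10.42.10.2", "10.42.10.3",
                      "10.42.10.5"],                              # VLAN 10 prod
}

def plan_ok(plan, supernet):
    """True iff all subnets nest in the supernet and all hosts fit their subnet."""
    for subnet, hosts in plan.items():
        net = ipaddress.ip_network(subnet)
        if not net.subnet_of(supernet):
            return False
        if any(ipaddress.ip_address(h) not in net for h in hosts):
            return False
    return True
```

If a future address lands outside its VLAN's range, plan_ok flips to False; cheap insurance until this moves into NetBox.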
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should seriously think about making our lives easier by color-coding connectivity from the start. While this will help us connect everything again at $event when we&#039;re sleepdeprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. Example: you&#039;ll notice 0 thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional &amp;quot;milliways-control-node-1&amp;quot; so it&#039;s clear what you&#039;re doing, but externally, the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we do ever show this off at events, we can do cool shit with light-up tags, make stuff funny and recognizable and cool to talk about - it also makes it way more relatable when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server name and potentially its serial so we don&#039;t get confused internally (if we want to use serials, there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (epoxy) as 2025.2 (flamingo) has an undocumented breaking change making installation of keystone impossible. We have registered a bug with the documentation on launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following the installation guide&#039;s recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
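If we end up scripting service setup, the same step is easy to reproduce from Python's stdlib (a sketch; secrets.token_hex(10) yields the same 10 random bytes, printed as 20 hex characters, as openssl rand -hex 10):

```python
# Stdlib equivalent of `openssl rand -hex 10`: n random bytes as 2n hex chars.
import secrets

def make_service_password(nbytes: int = 10) -> str:
    """Generate a random hex password, matching `openssl rand -hex 10`."""
    return secrets.token_hex(nbytes)
```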
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts, ``keystone-wsgi-admin`` and ``keystone-wsgi-public``.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround, use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
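A small pre-flight sketch for that 2025.2 breakage: before retrying an upgrade, check whether the packaged WSGI entry points Apache expects actually exist (paths taken from the keystone.log error above; the helper name is ours):

```python
# The 2025.2 packages no longer ship these WSGI scripts; Apache then logs
# "Target WSGI script not found or unable to stat".
import os

KEYSTONE_WSGI_SCRIPTS = [
    "/usr/bin/keystone-wsgi-public",
    "/usr/bin/keystone-wsgi-admin",
]

def missing_scripts(paths):
    """Return the subset of paths that do not exist on disk."""
    return [p for p in paths if not os.path.exists(p)]

if missing_scripts(KEYSTONE_WSGI_SCRIPTS):
    print("keystone WSGI scripts missing; stay on 2025.1")
```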
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled; &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide has you configure options for the Networking service, which you have not installed yet because the guide installs Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** More [https://docs.openstack.org/neutron/2025.1/admin/deploy-ovs-provider.html Bad] Documentation&lt;br /&gt;
*** The guide refers to configuring the Open vSwitch agent and offers more information which directly contradicts the guide.&lt;br /&gt;
**** The guide says to edit neutron.conf with &amp;lt;code&amp;gt;service_plugins = router&amp;lt;/code&amp;gt;&lt;br /&gt;
**** The Open vSwitch agent example configuration for controllers says: &amp;quot;Disable service plug-ins because provider networks do not require any.&amp;quot;&lt;br /&gt;
* various Networking agents&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6984</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6984"/>
		<updated>2026-01-21T11:16:01Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Network */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Someone something redis I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework that offers services similar to AWS / Azure / GC.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively; a &amp;quot;normal&amp;quot; Serverrack PDU (still strong prefer managed) + 16A/20A 400v -&amp;gt; 16A 230V transform&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the short-term future I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in future if documentation mandates it.&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should seriously think about making our lives easier by color-coding connectivity from the start. While this will help us connect everything again at $event when we&#039;re sleepdeprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. Example: you&#039;ll notice 0 thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional &amp;quot;milliways-control-node-1&amp;quot; so it&#039;s clear what you&#039;re doing, but externally, the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we do ever show this off at events, we can do cool shit with light-up tags, make stuff funny and recognizable and cool to talk about - it also makes it way more relatable when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server name and potentially its serial so we don&#039;t get confused internally (if we want to use serials, there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (epoxy) as 2025.2 (flamingo) has an undocumented breaking change making installation of keystone impossible. We have registered a bug with the documentation on launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following the installation guide&#039;s recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts, ``keystone-wsgi-admin`` and ``keystone-wsgi-public``.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround, use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled; &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide has you configure options for the Networking service, which you have not installed yet because the guide installs Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically; the guide is not.&lt;br /&gt;
* various Networking agents&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6983</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6983"/>
		<updated>2026-01-21T11:15:26Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Network */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around Milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / Docker&lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - MariaDB / PostgreSQL&lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling&lt;br /&gt;
* integration into the Milliways identity &amp;amp; access management (authentik)&lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework that offers services similar to AWS / Azure / GC.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as an activation key for the iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the short term I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
** Vlan 15&lt;br /&gt;
*** [https://docs.openstack.org/neutron/2025.1/install/environment-networking-ubuntu.html Provider Network]&lt;br /&gt;
**** This is an OpenStack thing for the secondary Control and Compute node interfaces.&lt;br /&gt;
**** Currently [https://docs.openstack.org/neutron/2025.1/install/environment-networking-controller-ubuntu.html no IP] address assigned.&lt;br /&gt;
**** May change in the future depending on the documentation.&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should seriously think about making our lives easier by color-coding connectivity now. While this will help us connect everything again at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. Example: you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to set up 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional, like &amp;quot;milliways-control-node-1&amp;quot;, so it&#039;s clear what you&#039;re doing, while externally the asset tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes marketing way more relatable when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the asset names to the actual server name and potentially its serial so we don&#039;t get confused internally (if we want to use serials; there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy) as 2025.2 (Flamingo) has an undocumented breaking change that makes installation of Keystone impossible. We have filed a bug against the documentation on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following the installation guide&#039;s recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
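For illustration only (not recorded on the wiki): a minimal shell sketch of that recommendation. The variable name and the pass-store path are assumptions, not the actual Milliways layout.

```shell
# Sketch: generate one 10-byte password (20 hex characters) per service,
# as the install guide recommends. The variable name and the "pass"
# store path below are assumptions.
KEYSTONE_DBPASS=$(openssl rand -hex 10)
echo "length: ${#KEYSTONE_DBPASS}"   # prints "length: 20"
# echo "$KEYSTONE_DBPASS" | pass insert -e milliways/keystone-db   # assumed store layout
```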
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, or roles fails with the error:&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround: use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically; the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled: add your user to the &amp;lt;code&amp;gt;placement&amp;lt;/code&amp;gt; group with &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically; the guide is not.&lt;br /&gt;
*** The guide has you configure options for the networking service, which you have not installed yet because the guide installs Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically; the guide is not.&lt;br /&gt;
* various Networking agents&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6982</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6982"/>
		<updated>2026-01-21T11:05:35Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Shopping List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around Milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / Docker&lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - MariaDB / PostgreSQL&lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling&lt;br /&gt;
* integration into the Milliways identity &amp;amp; access management (authentik)&lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework that offers services similar to AWS / Azure / GC.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as an activation key for the iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** Control and Compute servers each have 3 open m.2 NVMe slots&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the short term I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should seriously think about making our lives easier by color-coding connectivity now. While this will help us connect everything again at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. Example: you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to set up 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional, like &amp;quot;milliways-control-node-1&amp;quot;, so it&#039;s clear what you&#039;re doing, while externally the asset tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes marketing way more relatable when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the asset names to the actual server name and potentially its serial so we don&#039;t get confused internally (if we want to use serials; there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy) as 2025.2 (Flamingo) has an undocumented breaking change that makes installation of Keystone impossible. We have filed a bug against the documentation on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following the installation guide&#039;s recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
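For illustration only (not recorded on the wiki): a minimal shell sketch of that recommendation. The variable name and the pass-store path are assumptions, not the actual Milliways layout.

```shell
# Sketch: generate one 10-byte password (20 hex characters) per service,
# as the install guide recommends. The variable name and the "pass"
# store path below are assumptions.
KEYSTONE_DBPASS=$(openssl rand -hex 10)
echo "length: ${#KEYSTONE_DBPASS}"   # prints "length: 20"
# echo "$KEYSTONE_DBPASS" | pass insert -e milliways/keystone-db   # assumed store layout
```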
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, or roles fails with the error:&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround: use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically; the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled: add your user to the &amp;lt;code&amp;gt;placement&amp;lt;/code&amp;gt; group with &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically; the guide is not.&lt;br /&gt;
*** The guide has you configure options for the networking service, which you have not installed yet because the guide installs Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
* various Networking agents&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
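The placement permissions fix noted above can be double-checked from the shell. This is a minimal sketch, assuming the package created a placement group; USERNAME stands for whichever account runs the verification commands, and group changes only show up after re-login:

```shell
# Fix (requires root; USERNAME is the account that runs placement-status):
#   usermod -aG placement USERNAME
# Check whether the current account is in the "placement" group:
if id -nG | tr ' ' '\n' | grep -qx placement; then
  echo "in placement group"
else
  echo "not in placement group"
fi
```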
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6981</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6981"/>
		<updated>2026-01-21T11:04:30Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Asset List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers services similar to AWS / Azure / Google Cloud.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 2.5&amp;quot; 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 2.5&amp;quot; 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (managed still strongly preferred) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the near future I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should make our lives easier by thinking about color-coding connectivity now. While this will help us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. Example: you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to set up 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional &amp;quot;milliways-control-node-1&amp;quot; so it&#039;s clear what you&#039;re doing, but externally, the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we do ever show this off at events, we can do cool shit with light-up tags, make stuff funny and recognizable and cool to talk about - it also makes it way more relatable when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server name and potentially its serial so we don&#039;t get confused internally (if we want to use serials, there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
***&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy) because 2025.2 (Flamingo) has an undocumented breaking change that makes installation of keystone impossible. We have filed a documentation bug on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following the installation guide&#039;s recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
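The convention above can be sketched as a small shell loop. The service names below are illustrative placeholders, not the actual list used for this deployment; each generated value still has to be copied into the password store by hand:

```shell
# Generate one 20-hex-character password per service, as the install
# guide suggests. Service names here are examples, not the full list.
for svc in keystone glance placement nova neutron; do
  printf '%s: %s\n' "$svc" "$(openssl rand -hex 10)"
done
```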
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround: use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** the config options are organized alphabetically; the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled: &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt; followed by the username&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** the config options are organized alphabetically; the guide is not.&lt;br /&gt;
*** The guide has you configure options for the networking service even though you have not installed it yet, because the guide installs this service first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** the config options are organized alphabetically; the guide is not.&lt;br /&gt;
* various Networking agents&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6980</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6980"/>
		<updated>2026-01-21T11:03:32Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Shopping List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers services similar to AWS / Azure / Google Cloud.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
***We have more drives than bays, but not enough drives to make a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we can figure out if we add more 12T or 10T or keep as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE ProLiant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB RAM&lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (managed still strongly preferred) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add 2.5&amp;quot; SSDs&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the near future I&#039;d much rather replace this ad-hoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should make our lives easier by thinking about color-coding connectivity now. While this will help us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, it has the added effect of making it all look slightly cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. Example: you&#039;ll notice zero thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to set up 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional &amp;quot;milliways-control-node-1&amp;quot; so it&#039;s clear what you&#039;re doing, but externally, the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we do ever show this off at events, we can do cool shit with light-up tags, make stuff funny and recognizable and cool to talk about - it also makes it way more relatable when asking for donations; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server name and potentially its serial so we don&#039;t get confused internally (if we want to use serials, there&#039;s something to be said for not using serials here)&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
***&lt;br /&gt;
&lt;br /&gt;
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy) because 2025.2 (Flamingo) has an undocumented breaking change that makes installation of keystone impossible. We have filed a documentation bug on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following the installation guide&#039;s recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
***[https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround: use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** the config options are organized alphabetically; the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled: &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt; followed by the username&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** the config options are organized alphabetically; the guide is not.&lt;br /&gt;
*** The guide has you configure options for the networking service even though you have not installed it yet, because the guide installs this service first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
* various Networking agents&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
	<entry>
		<id>https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6979</id>
		<title>MilliwaysStack</title>
		<link rel="alternate" type="text/html" href="https://wiki.milliways.info/index.php?title=MilliwaysStack&amp;diff=6979"/>
		<updated>2026-01-21T11:00:26Z</updated>

		<summary type="html">&lt;p&gt;Obsidian: /* Shopping List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We want to run an OpenStack experiment&lt;br /&gt;
&lt;br /&gt;
== The grander idea ==&lt;br /&gt;
&lt;br /&gt;
We want to try out an installation of OpenStack to give people around milliways experience with running it (and with running things on it).&lt;br /&gt;
&lt;br /&gt;
From an unnamed source we got 10 HPE servers. We will use 8 of them to run OpenStack. Storage is on a separate machine.&lt;br /&gt;
&lt;br /&gt;
=== MVP ===&lt;br /&gt;
The MVP would be:&lt;br /&gt;
* Kubernetes / docker &lt;br /&gt;
* object storage&lt;br /&gt;
* file systems&lt;br /&gt;
* Networking&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* Firewalling&lt;br /&gt;
* Databases - mariaDB / PostgreSQL &lt;br /&gt;
* Something something Redis, I guess&lt;br /&gt;
* container registry&lt;br /&gt;
&lt;br /&gt;
=== e-MVP ===&lt;br /&gt;
The extended MVP would be:&lt;br /&gt;
* functional Monitoring &amp;amp; alerting&lt;br /&gt;
* autoscaling &lt;br /&gt;
* integration into milliways identity &amp;amp; access management authentik &lt;br /&gt;
* logging &amp;amp; alerting&lt;br /&gt;
&lt;br /&gt;
== the software stack explained ==&lt;br /&gt;
&lt;br /&gt;
OpenStack is a cloud framework stack that offers services akin to AWS / Azure / GC.&lt;br /&gt;
&lt;br /&gt;
Most documentation is available for Ubuntu &amp;amp; Red Hat. In the longer term an installation under NixOS might be feasible.&lt;br /&gt;
&lt;br /&gt;
== Asset List ==&lt;br /&gt;
=== Rack ===&lt;br /&gt;
* 47U&lt;br /&gt;
* 950mm external depth&lt;br /&gt;
** 915mm internal depth&lt;br /&gt;
=== Consumables &amp;amp; Small Materials ===&lt;br /&gt;
* 1 x Samsung 860 EVO 2TB&lt;br /&gt;
* Assorted M2 - M3 screws&lt;br /&gt;
* Assorted mismatched bundle of M5 and M6 cagenuts and bolts&lt;br /&gt;
* SFPs&lt;br /&gt;
=== Switches ===&lt;br /&gt;
* 2 x Dell PowerConnect 7048R-RA&lt;br /&gt;
* 1 x Cisco 3560e&lt;br /&gt;
=== [[MilliwaysStack_Servers | Servers]] ===&lt;br /&gt;
* 1 Dell PowerEdge R710 server as storage&lt;br /&gt;
** 2 x X5570 2,93GHz&lt;br /&gt;
** 192GB RAM&lt;br /&gt;
** 6 x 3,5&amp;quot; bays&lt;br /&gt;
*** 6 x hotswap 3,5&amp;quot; drive sleds/brackets&lt;br /&gt;
** Drives&lt;br /&gt;
*** 1 x Samsung 850 EVO 500GB&lt;br /&gt;
**** for OS&lt;br /&gt;
**** Hidden in aftermarket [https://www.amazon.nl/dp/B083XJPCGL &amp;quot;Optical Drive&amp;quot;] adapter.&lt;br /&gt;
*** We have more drives than bays, but not the right mix of drives for a nice or ideal configuration. As such, the Dell storage situation is likely temporary until we figure out whether to add more 12T or 10T drives or keep it as-is.&lt;br /&gt;
**** 2 x Seagate Exos X18 12TB&lt;br /&gt;
**** 1 x Seagate Exos X18 10TB&lt;br /&gt;
**** 4 x WD Red 4TB&lt;br /&gt;
**** 4 x WD Green 3TB&lt;br /&gt;
** no rails&lt;br /&gt;
* 2 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram&lt;br /&gt;
** PCI Riser to 4* NVMe adapter&lt;br /&gt;
*** 1TB Crucial NVMe &lt;br /&gt;
** iLO4&lt;br /&gt;
*** It seems it accepts 35DPH-SVSXJ-HGBJN-C7N5R-2SS4W as activation key for iLO Advanced license?&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** Slide rails&lt;br /&gt;
* 8 x HPE proliant DL380 Gen 8&lt;br /&gt;
** 2 x E5-2620 v3 2,4GHz&lt;br /&gt;
** 384GB ram &lt;br /&gt;
** iLO4&lt;br /&gt;
** without hard drives but has 2,5&amp;quot; bays&lt;br /&gt;
*** no drive sleds/brackets available, only blanks&lt;br /&gt;
** 7 x slide rails&lt;br /&gt;
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
 It&#039;s ofc. sexy as all hell to buy memory, AI cards, flash storage and all sorts, but literally none of that will ever work if we don&#039;t have our generic basics in order. While we prefer big donations go to big-ticket items, many small-ticket items unexpectedly add up in the long run. Please do not forget the generic basics!&lt;br /&gt;
* Generic Basics&lt;br /&gt;
** PDU&lt;br /&gt;
*** &amp;lt;s&amp;gt;Temporary 1U unmanaged PDU with 16A/230V C19 input and 1* C19 + 8* Type F outlet.&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Perfect: Managed Rack mountable PDU with CEE Red 16A/20A 400v input to C13/C14 + C19/C20 outlets.&lt;br /&gt;
**** [https://www.eaton.com/us/en-us/skuPage.PDU3XEVSR6G20.html Stupid expensive example]&lt;br /&gt;
*** Alternatively: a &amp;quot;normal&amp;quot; server-rack PDU (still strongly prefer managed) + a 16A/20A 400V -&amp;gt; 16A 230V transformer&lt;br /&gt;
** Network Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [Color]&lt;br /&gt;
**** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Power Cables&lt;br /&gt;
*** Some actual properly matching cables would be great&lt;br /&gt;
*** [&#039;&#039;Type&#039;&#039;],[&#039;&#039;Amount&#039;&#039;],[&#039;&#039;Length&#039;&#039;]&lt;br /&gt;
** Screws, Nuts, Bolts&lt;br /&gt;
*** &amp;lt;s&amp;gt;Assorted M2,M2.5,M3 Screws&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Some actual properly matching cage nuts\bolts would be great&lt;br /&gt;
** PCI Risers&lt;br /&gt;
*** &amp;lt;s&amp;gt;Single NVMe adapters&amp;lt;/s&amp;gt;&lt;br /&gt;
*** Multi NVMe adapters&lt;br /&gt;
** KVM&lt;br /&gt;
*** PiKVM?&lt;br /&gt;
* Dell - Storage&lt;br /&gt;
** &amp;lt;s&amp;gt;2* Drive sleds&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;New RAID Card that supports passthrough\JBOD&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;2* SFF-8087 -&amp;gt; SFF-8087 Mini SAS Cable&amp;lt;/s&amp;gt;&lt;br /&gt;
** Drives&lt;br /&gt;
*** &amp;lt;s&amp;gt;500GB SSD for OS&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Bracket and SATA Cable Adapter for SSD&amp;lt;/s&amp;gt;&lt;br /&gt;
*** &amp;lt;s&amp;gt;Technically not shopping, but for historical tracking;&lt;br /&gt;
**** Old Exos X16 2 x 12T and 1 x 10T were RMA&#039;d and replaced with X18&#039;s&amp;lt;/s&amp;gt;&lt;br /&gt;
*** 12T ?&lt;br /&gt;
* HP1 - Control&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* HP2 - Compute&lt;br /&gt;
** &amp;lt;s&amp;gt;1* PCI riser to 4*NVMe adapter&amp;lt;/s&amp;gt;&lt;br /&gt;
** &amp;lt;s&amp;gt;1* 1TB NVMe&amp;lt;/s&amp;gt;&lt;br /&gt;
* Flash Storage&lt;br /&gt;
** We&#039;ll need [https://www.amazon.de/-/en/dp/B07GCDH5D8 Drive Trays] for the HPs if we wanna add SSDs&lt;br /&gt;
** &amp;lt;s&amp;gt;1 x 2TB Samsung 860 EVO&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
 nb. this is quick &#039;n&#039; dirty as I go along.&lt;br /&gt;
 In the short-term future I&#039;d much rather replace this adhoc documentation with something like NetBox.&lt;br /&gt;
=== Network ===&lt;br /&gt;
* Supernet 10.42.0.0/16&lt;br /&gt;
** Vlan 42&lt;br /&gt;
*** Interconnect&lt;br /&gt;
*** 10.42.0.0/30&lt;br /&gt;
**** Gateway 10.42.0.1&lt;br /&gt;
**** Milliways Core 10.42.0.2&lt;br /&gt;
** Vlan 5&lt;br /&gt;
*** Mgmt \ OOB&lt;br /&gt;
*** 10.42.1.0/24&lt;br /&gt;
**** Milliways Core 10.42.1.1&lt;br /&gt;
**** Dell iDRAC 10.42.1.5&lt;br /&gt;
**** Dell RAID Controller 10.42.1.6&lt;br /&gt;
**** HP 1 iLO 10.42.1.7&lt;br /&gt;
**** HP 2 iLO 10.42.1.8&lt;br /&gt;
** Vlan 10&lt;br /&gt;
*** Prod&lt;br /&gt;
*** 10.42.10.0/24&lt;br /&gt;
**** Milliways Core 10.42.10.1&lt;br /&gt;
**** Dell 10.42.10.2&lt;br /&gt;
**** HP 1 10.42.10.3&lt;br /&gt;
**** HP 2 10.42.10.5&lt;br /&gt;
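Until NetBox exists, the addressing plan above can at least be sanity-checked mechanically; a minimal sketch (IPs copied verbatim from the plan, nothing here is authoritative):&lt;br /&gt;

```python
# Sanity-check the addressing plan: every host must sit inside its VLAN
# subnet, and every subnet inside the 10.42.0.0/16 supernet.
import ipaddress

SUPERNET = ipaddress.ip_network("10.42.0.0/16")

PLAN = {
    # subnet            hosts (from the plan above)
    "10.42.0.0/30": ["10.42.0.1", "10.42.0.2"],          # Vlan 42, Interconnect
    "10.42.1.0/24": ["10.42.1.1", "10.42.1.5", "10.42.1.6",
                     "10.42.1.7", "10.42.1.8"],          # Vlan 5, Mgmt \ OOB
    "10.42.10.0/24": ["10.42.10.1", "10.42.10.2",
                      "10.42.10.3", "10.42.10.5"],       # Vlan 10, Prod
}

for subnet, hosts in PLAN.items():
    net = ipaddress.ip_network(subnet)
    assert net.subnet_of(SUPERNET), f"{net} outside supernet"
    for host in hosts:
        assert ipaddress.ip_address(host) in net, f"{host} not in {net}"

print("addressing plan is consistent")
```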
&lt;br /&gt;
=== Cable Mgmt ===&lt;br /&gt;
 As there are some early ambitions to physically take this environment to events, perhaps we should make our lives easier by thinking now about color-coding connectivity. This will help us reconnect everything at $event when we&#039;re sleep-deprived\drunk\explaining to newbies, and it has the added effect of making it all look cooler than a spaghetti of boring white cables or, worse, a spaghetti of whatever the fuck we have lying around.&lt;br /&gt;
&lt;br /&gt;
This is all just made up without too much thought. It is specifically intended to start a discussion so we can work toward an agreement, not to be a unilateral decision. For example: you&#039;ll notice 0 thought was put into fiber or not ;)&lt;br /&gt;
* RED&lt;br /&gt;
** Mgmt \ OOB&lt;br /&gt;
*** iDRACs, iLOs, RAID Cards, etc&lt;br /&gt;
* GREEN&lt;br /&gt;
** Storage Prod&lt;br /&gt;
*** At least the Dell, maybe HPs if we get into flash storage&lt;br /&gt;
* BLUE&lt;br /&gt;
** Compute Prod&lt;br /&gt;
*** Likely overwhelmingly the HPs&lt;br /&gt;
* YELLOW&lt;br /&gt;
** Interconnect&lt;br /&gt;
*** Connectivity to $outside, between switches, whatever&lt;br /&gt;
&lt;br /&gt;
=== Naming Convention ===&lt;br /&gt;
 We need names!&lt;br /&gt;
 Can&#039;t keep calling these &amp;quot;Dell&amp;quot;, &amp;quot;HP1&amp;quot;, &amp;quot;HP2&amp;quot; etc.&lt;br /&gt;
 Calling them by their S/Ns is also super boring and cumbersome; &amp;quot;Oh yea, we need to setup 5V6S064&amp;quot;&lt;br /&gt;
 We could even opt for dual names. Internally, when logged in to $shell, the names could be functional (&amp;quot;milliways-control-node-1&amp;quot;) so it&#039;s clear what you&#039;re doing, while externally the Asset Tag could be a Hitchhiker&#039;s Guide to the Galaxy character or a Discworld town or something. That way, if we do ever show this off at events, we can do cool shit with light-up tags and make stuff funny, recognizable and cool to talk about. It also makes asking for donations way more relatable; &amp;quot;Ya, we&#039;re looking for extra storage for Überwald&amp;quot; sounds much better than &amp;quot;Ya, we&#039;re looking for extra storage for 5V6S064 or milliways-control-node-1&amp;quot;.&lt;br /&gt;
 Naturally, once we get NetBox going, we can map the Asset names to the actual server names and potentially their serials so we don&#039;t get confused internally (if we want to use serials; there&#039;s something to be said for not using serials here).&lt;br /&gt;
&lt;br /&gt;
* Functional&lt;br /&gt;
** milliways-control-node-1&lt;br /&gt;
** milliways-control-node-2&lt;br /&gt;
** control-node-1&lt;br /&gt;
** compute-node-1&lt;br /&gt;
** flash-storage-1&lt;br /&gt;
&lt;br /&gt;
* Marketing&lt;br /&gt;
** HGttG characters&lt;br /&gt;
*** Arthur&lt;br /&gt;
*** Ford&lt;br /&gt;
*** Zaphod&lt;br /&gt;
** Discworld locations&lt;br /&gt;
*** Ankh-Morpork&lt;br /&gt;
*** Überwald&lt;br /&gt;
*** Lancre&lt;br /&gt;
&lt;br /&gt;
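If we go the dual-name route, the mapping could be as dumb as a lookup table until NetBox takes over; a hypothetical sketch (the pairings below are made up from the candidate lists above, nothing is decided):&lt;br /&gt;

```python
# Hypothetical asset-tag -> functional-name mapping; the pairings are
# illustrative only, no names have actually been agreed on.
ASSET_TAGS = {
    "Zaphod": "milliways-control-node-1",
    "Arthur": "compute-node-1",
    "Ford": "flash-storage-1",
}

def functional_name(asset_tag: str) -> str:
    # Fail loudly on unknown tags so typos surface early.
    return ASSET_TAGS[asset_tag]

print(functional_name("Zaphod"))
```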
=== OpenStack ===&lt;br /&gt;
&lt;br /&gt;
 We&#039;re using 2025.1 (Epoxy), as 2025.2 (Flamingo) has an undocumented breaking change making installation of keystone impossible. We have filed a bug against the documentation on Launchpad for this.&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openstack.org/install-guide/ Installation guide]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-2025-1-epoxy Minimal Deployment]&lt;br /&gt;
* [https://docs.openstack.org/install-guide/overview.html#example-architecture Example Architecture]&lt;br /&gt;
 Following installation guide recommendation, passwords are created with &amp;lt;code&amp;gt;openssl rand -hex 10&amp;lt;/code&amp;gt; and saved in a password store.&lt;br /&gt;
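For reference, the same 10-byte hex secrets can be generated without openssl; a minimal Python sketch (the helper name is ours, and saving to the password store stays a manual step):&lt;br /&gt;

```python
# Equivalent of `openssl rand -hex 10`: 10 random bytes as 20 hex characters.
# This only generates the secret; storing it in the password store is manual.
import secrets

def service_password(nbytes: int = 10) -> str:
    return secrets.token_hex(nbytes)

pw = service_password()
print(pw)  # 20 lowercase hex characters, different every run
```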
&lt;br /&gt;
==== Controller ====&lt;br /&gt;
* [https://docs.openstack.org/keystone/2025.1/install/ Identity service]&lt;br /&gt;
** [https://docs.openstack.org/keystone/2025.2/install/keystone-users-ubuntu.html Broken] in 2025.2&lt;br /&gt;
*** [https://opendev.org/openstack/keystone/src/commit/82c80dccf6c2e74e27b90f5204de6da1fc6bd76d/releasenotes/notes/remove-wsgi-scripts-615b97ee4d6e0de2.yaml This] commit removes the WSGI scripts &amp;lt;code&amp;gt;keystone-wsgi-admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keystone-wsgi-public&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Both scripts are still called by the openstack command. This means running any openstack command to create a domain, projects, users, and roles fails with the error&lt;br /&gt;
****&amp;lt;code&amp;gt;Failed to discover available identity versions when contacting http://controller:5000/v3. Attempting to parse version from URL.&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Evidence:&lt;br /&gt;
**** &amp;lt;code&amp;gt;tail /var/log/apache2/keystone.log&amp;lt;/code&amp;gt;&lt;br /&gt;
***** &amp;lt;code&amp;gt;Target WSGI script not found or unable to stat: /usr/bin/keystone-wsgi-public&amp;lt;/code&amp;gt;&lt;br /&gt;
** Workaround, use 2025.1 instead&lt;br /&gt;
** Completed 2025-01-18&lt;br /&gt;
* [https://docs.openstack.org/glance/2025.1/install/ Image service]&lt;br /&gt;
** [https://docs.openstack.org/glance/2025.1/install/install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** The guide has you create 3 API endpoints for the service.&lt;br /&gt;
**** You need to configure access to keystone with one of them, but you are not told which one. Only &amp;lt;code&amp;gt;public&amp;lt;/code&amp;gt; will work.&lt;br /&gt;
*** Configuring glance-api.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
** Completed 2025-01-19&lt;br /&gt;
* [https://docs.openstack.org/placement/2025.1/install/ Placement service]&lt;br /&gt;
** [https://docs.openstack.org/placement/2025.1/install/verify.html Bad] Documentation&lt;br /&gt;
*** If you followed the guide, your user account [https://storyboard.openstack.org/#!/story/2008969 does not have the rights] to read &amp;lt;code&amp;gt;/etc/placement/placement.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Running &amp;lt;code&amp;gt;placement-status upgrade check&amp;lt;/code&amp;gt; as root proves the service works.&lt;br /&gt;
*** Undocumented requirement fulfilled; &amp;lt;code&amp;gt;usermod -aG placement&amp;lt;/code&amp;gt;&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portions of [https://docs.openstack.org/nova/2025.1/install/ Compute]&lt;br /&gt;
** [https://docs.openstack.org/nova/2025.1/install/controller-install-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring nova.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
*** The guide has you configure options for the networking service before that service is installed, because the guide installs Compute first&lt;br /&gt;
*** &amp;lt;code&amp;gt;Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.&amp;lt;/code&amp;gt;&lt;br /&gt;
**** ???? THEN FIX THE PACKAGE?!?!?!!!!&lt;br /&gt;
*** The &amp;lt;code&amp;gt;[glance]&amp;lt;/code&amp;gt; option you are instructed to use is deprecated&lt;br /&gt;
** Completed 2025-01-20&lt;br /&gt;
* management portion of [https://docs.openstack.org/neutron/2025.1/install/ Networking]&lt;br /&gt;
** [https://docs.openstack.org/neutron/2025.1/install/controller-install-option2-ubuntu.html Bad] Documentation&lt;br /&gt;
*** Configuring neutron.conf is done haphazardly in the guide&lt;br /&gt;
**** config options are organized alphabetically, the guide is not.&lt;br /&gt;
* various Networking agents&lt;br /&gt;
* [https://docs.openstack.org/horizon/2025.1/install/ Dashboard]&lt;br /&gt;
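The 2025.2 keystone breakage above can be reproduced without the full openstack client; a minimal sketch of the version-discovery request the client performs (the controller hostname is the install guide&#039;s, the helper name is ours):&lt;br /&gt;

```python
# Minimal reproduction of the identity-version discovery that fails on the
# broken 2025.2 keystone: GET the identity endpoint and read the version id
# from the JSON version document.
import json
import urllib.error
import urllib.request
from typing import Optional

def discover_identity(url: str = "http://controller:5000/v3") -> Optional[str]:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)["version"]["id"]
    except (urllib.error.URLError, OSError, ValueError, KeyError) as exc:
        # A missing WSGI script (or unreachable endpoint) lands here; the
        # openstack client reports the same condition as "Failed to discover
        # available identity versions when contacting http://controller:5000/v3".
        print(f"identity discovery failed: {exc}")
        return None

discover_identity()
```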
&lt;br /&gt;
== communications ==&lt;/div&gt;</summary>
		<author><name>Obsidian</name></author>
	</entry>
</feed>