Since the collapse of WeaveWorks at the end of 2023, things have been hectic.

I spent a couple of months interviewing, and eventually landed up at SUSE working on Rancher.

In some ways, Rancher is similar to Weave GitOps, but with almost none of the GitOps, and it’s a lot less declarative; most things are done as “ClickOps”.

It’s been tough finding a place at SUSE; the Rancher codebase is not what I’d write, and there are bits that are painfully bad, but there are a lot of smart folks working on it, hammering it into shape.

Hackweek is an annual thing at SUSE (we had a very similar week at Heroku) in which we get to hack on whatever we want.

I opted to work on a number of different things this year:

  • Getting GitOpsSets going again; it’s been in maintenance mode since last December.
  • Replacing the old Tekton Poller which I wrote about in July 2020.
  • Writing TinyGo for my Keyboard Featherwing: I write a lot of Go, but I’ve never written any “Tiny” Go for embedded devices, despite having written a fair bit of Arduino C++ and some CircuitPython.
  • Implementing a small Forth interpreter.

That’s a lot of things to get on with…how’d it go?

GitOpsSets

GitOpsSets had been in maintenance-only mode for almost a year, with only dependencies being updated.

I had forked it when WeaveWorks folded, rearranged the packages, and made some minor tweaks, but I hadn’t actually run it, so…getting it going took a little bit of effort. It’s working now.

I fixed the GitRepository generator to work with Flux 2.2 and later.

The CEL integration branch, which would replace JSONPath in places where it makes sense, is still active. I will do the work to get this to land; it will benefit from recent work I’ve done elsewhere in the CEL ecosystem.
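
As a rough illustration of what that unlocks, here’s a minimal sketch using cel-go; the element variable and the expression are illustrative, not the actual GitOpsSets API:

package main

import (
	"fmt"

	"github.com/google/cel-go/cel"
)

func main() {
	// Declare the variable the expression can reference; in GitOpsSets this
	// would be the generated element being templated.
	env, err := cel.NewEnv(cel.Variable("element", cel.DynType))
	if err != nil {
		panic(err)
	}

	// Something like element.metadata.name replaces a JSONPath query such as
	// {.metadata.name}, with a much richer expression language around it.
	ast, issues := env.Compile(`element.metadata.name + "-staging"`)
	if issues != nil && issues.Err() != nil {
		panic(issues.Err())
	}

	prg, err := env.Program(ast)
	if err != nil {
		panic(err)
	}

	out, _, err := prg.Eval(map[string]any{
		"element": map[string]any{
			"metadata": map[string]any{"name": "demo"},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(out.Value()) // demo-staging
}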

I started a few separate explorations in GitOpsSets to see what is possible.

Configurable Cluster generation

Is it possible to generate from either GitOps Clusters (WeaveWorks) or Rancher Clusters?

I got quite far down this route, replacing the mechanism for querying clusters and allowing a GVK to be provided. I stopped at the point where I realised this could go in a few different directions, one of them being a generic resource-querying mechanism.
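
The heart of that change is listing whatever cluster resource has been configured; roughly this sketch, assuming a controller-runtime client (the function and parameter names are illustrative, not the actual generator code):

package clusters

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listClusters lists cluster resources of whatever GVK the generator is
// configured with, rather than hard-coding a single cluster CRD.
func listClusters(ctx context.Context, k8sClient client.Client, gvk schema.GroupVersionKind, namespace string) ([]unstructured.Unstructured, error) {
	list := &unstructured.UnstructuredList{}
	// The list kind is conventionally the item kind with a "List" suffix.
	list.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   gvk.Group,
		Version: gvk.Version,
		Kind:    gvk.Kind + "List",
	})
	if err := k8sClient.List(ctx, list, client.InNamespace(namespace)); err != nil {
		return nil, err
	}

	return list.Items, nil
}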

The caveat is that it becomes harder to select “ready” clusters. I will come back to this; maybe it’s time to give up on WeaveWorks GitopsClusters (but there is now movement in the weave-gitops repo!). I’ll figure out the right way forward, as it’s a core element of the cluster-reflector, cluster-bootstrapper and cluster-generator workflow that I think is crucial for managing clusters at scale.

Generation from Keycloak

Over the past few years, I’ve seen a large number of enterprise users running Keycloak. While I’m not its biggest fan, it definitely does the job for folks, so: can we generate from Keycloak users and groups within a realm?

This would allow generation from users and groups by querying the Keycloak API.

Ideally this would allow declarative generation of RBAC-type resources, e.g. RoleBindings by querying Keycloak as the source of truth.
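
The generator would essentially be a thin client over the Keycloak admin REST API; this is a minimal sketch of the users query, assuming an admin token is already available, with the keycloakUser type reduced to a few illustrative fields:

package keycloak

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

type keycloakUser struct {
	ID       string `json:"id"`
	Username string `json:"username"`
	Email    string `json:"email"`
}

// realmUsers lists the users in a realm via the Keycloak admin REST API;
// pagination, token refresh and TLS configuration are all elided.
func realmUsers(ctx context.Context, baseURL, realm, token string) ([]keycloakUser, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		fmt.Sprintf("%s/admin/realms/%s/users", baseURL, realm), nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("listing users: unexpected status %d", resp.StatusCode)
	}

	var users []keycloakUser
	if err := json.NewDecoder(resp.Body).Decode(&users); err != nil {
		return nil, err
	}
	return users, nil
}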

I need to figure out what a generic version of this looks like; for example, is it valuable to expose a UsersGenerator which can query Keycloak and other things like OpenLDAP?

External generators

I made a start on external-generators, i.e. support for referencing generators which are Services that accept a specific format of request, and return a specific response format.

I’ve got the API defined and implemented an example; I just need to hook this up to the generator-resolution mechanism. Effectively, you should be able to reference an external generator by name, and the code will find a Generator resource that points at an API endpoint we can submit the generator parameters to, and receive a response from.
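
To illustrate the shape of it, the request/response pair and a generator Service endpoint could look something like this sketch; the type names, fields and URL path are placeholders rather than the final API:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// GeneratorRequest is what GitOpsSets would POST to the external generator;
// the parameters come from the GitOpsSet spec.
type GeneratorRequest struct {
	Generator  string         `json:"generator"`
	Parameters map[string]any `json:"parameters,omitempty"`
}

// GeneratorResponse carries the generated elements back; each element
// becomes a set of parameters for templating.
type GeneratorResponse struct {
	Elements []map[string]any `json:"elements"`
}

func generate(w http.ResponseWriter, r *http.Request) {
	var req GeneratorRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	resp := GeneratorResponse{
		Elements: []map[string]any{
			{"name": "example", "requestedBy": req.Generator},
		},
	}
	w.Header().Set("Content-Type", "application/json")
	if err := json.NewEncoder(w).Encode(resp); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

func main() {
	http.HandleFunc("/generate", generate)
	log.Fatal(http.ListenAndServe(":8080", nil))
}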

This is very, very similar to the approach in Tekton.

Most of this is already available via the APIClient generator in GitOpsSets, but external generators could integrate paginated responses in a standardised way which would be much more complicated to add to the APIClient generator.

I’m not even particularly wedded to the idea of an external Generator resource type; this might just be possible via an ExternalGenerator generator, which takes a URL and queries it rather than going via a Generator resource.

If done well, this might negate the need for a specific Keycloak generator, as it could just be an external generator.

GitPoller

Four and a half years ago I wrote a tool to allow you to trigger Tekton pipelines by polling a Git API for changes.

This was written because I saw a lot of requests by folks who wanted to use Tekton Triggers, but couldn’t easily expose an EventListener to the internet.

The design for the tool was not great: it was too tightly coupled to Tekton Pipelines, which added maintenance cost. At the time I thought this would reduce the cost of setting it up, but…the tradeoff wasn’t worth it.

I had started working on a replacement fairly early on, and after some experimentation I had decided on the shape of it, with reducing the coupling as the goal.

GitPoller had sat unreleased for the best part of three years; it needed a bit of modernisation, and translation of the original experimental code into more modern thinking.

It polls an upstream GitHub or GitLab repository, and when the commit changes it sends a CloudEvent with the response body from the detected change; this required a change to Tekton Triggers to respond appropriately to CloudEvents.

apiVersion: polling.gitops.tools/v1alpha1
kind: PolledRepository
metadata:
  name: polledrepository-sample
  namespace: poller-demo
spec:
  url: https://github.com/bigkevmcd/go-demo.git
  ref: main
  type: github
  frequency: 5m
  endpoint: https://el-polling-listener.poller-demo.svc.cluster.local:8080

This example will poll the main branch in that repo every 5 minutes, and send notifications to the Tekton EventListener at the endpoint.

It isn’t complete yet; there are a few things I need to do, including support for custom CAs so that you can talk to a private GitHub/GitLab installation. But instead of talking to Tekton Pipelines, it now talks to Tekton Triggers, so you can use the full power of Triggers to drive your pipelines, including parsing the body using CEL.
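
The notification side of this is small; something like the following sketch using the CloudEvents Go SDK, where the event type and source are illustrative rather than what GitPoller actually emits:

package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

// notify sends the raw commit body from the Git API to the configured
// endpoint (e.g. a Tekton Triggers EventListener) as a CloudEvent.
func notify(ctx context.Context, endpoint string, commitBody []byte) error {
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		return err
	}

	event := cloudevents.NewEvent()
	event.SetType("commit.detected")        // illustrative event type
	event.SetSource("polling.gitops.tools") // illustrative source
	if err := event.SetData(cloudevents.ApplicationJSON, commitBody); err != nil {
		return err
	}

	if result := c.Send(cloudevents.ContextWithTarget(ctx, endpoint), event); cloudevents.IsUndelivered(result) {
		return result
	}
	return nil
}

func main() {
	endpoint := "http://el-polling-listener.poller-demo.svc.cluster.local:8080"
	if err := notify(context.Background(), endpoint, []byte(`{"sha": "abc123"}`)); err != nil {
		log.Fatal(err)
	}
}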

TinyGo

I’ve written a fair bit of Arduino code, and some CircuitPython too.

I have a lot of micro-controllers, they’re really useful for teaching kids how computers work, and building small projects.

Despite this, I have never written any Go for a micro-controller, despite having looked at TinyGo several times.

I decided to change that, and I set a goal of getting some code to drive my Keyboard Featherwing which is a small keyboard/screen based board that accepts an Adafruit Feather micro-controller.

The device has an SD Card, screen with touch-capability, a 35 button keyboard, and 5 buttons including a joystick.

Debugging embedded code is hard: I can’t just write a test and stick strategic log.Printf calls in to see what’s failing; the code needs to be compiled and flashed to the device, and then the serial device monitored for those same logging messages.

I did manage to get the touch screen working; it’s a TI TSC2004, and there’s Arduino code to work from.

The conversion from Arduino C++ to Go wasn’t very hard, it took a few hours.

#include "TSC2004.h"

TSC2004 ts;

void setup() {
  Serial.begin(9600);

  ts.begin();
}

void loop() {
  if (!ts.touched()) {
    return;
  }

  const TS_Point p = ts.getPoint();

  Serial.print("(");
  Serial.print(p.x);
  Serial.print(", ");
  Serial.print(p.y);
  Serial.print(", ");
  Serial.print(p.z);
  Serial.println(")");
}

This gets converted to the following Go:

package main

import (
	"fmt"
	"machine"
	"time"

	"github.com/bigkevmcd/tiny-go/tsc2004"
)

func main() {
	machine.I2C0.Configure(machine.I2CConfig{
		SDA: machine.SDA_PIN,
		SCL: machine.SCL_PIN,
	})

	tsc := tsc2004.NewI2C(machine.I2C0)
	if err := tsc.Begin(); err != nil {
		fmt.Println("failed to initialise", err)
	}

	for {
		// Pause briefly between polls rather than spinning flat out.
		time.Sleep(50 * time.Millisecond)

		touched, err := tsc.Touched()
		if err != nil {
			fmt.Println("error reading touch state ", err)
			continue
		}

		if !touched {
			continue
		}

		point, err := tsc.Point()
		if err != nil {
			fmt.Println("error reading touch point ", err)
			continue
		}

		fmt.Println(point)
	}
}

The Go code is a little bit messier because of the error handling, but the C++ code isn’t doing any error handling; I know which I prefer.

This is basically talking I2C on the correct bus, and reading the chip’s registers to get the touch events.
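
For example, reading a 16-bit register over I2C with TinyGo’s machine package looks roughly like this; the byte order and error handling are a sketch, and the real register addresses come from the TSC2004 datasheet:

package tsc2004

import "machine"

// readRegister16 reads a 16-bit big-endian value from a register on an I2C
// device; this is the kind of low-level access the driver is doing.
func readRegister16(bus *machine.I2C, deviceAddr, register uint8) (uint16, error) {
	buf := make([]byte, 2)
	if err := bus.ReadRegister(deviceAddr, register, buf); err != nil {
		return 0, err
	}

	return uint16(buf[0])<<8 | uint16(buf[1]), nil
}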

I have started on code to read the keyboard, and hopefully I can get that working, which would let me connect it to my other project…

Implementing a small Forth interpreter

I like the simplicity and explicitness of Forth; it works nicely on very low-power machines…making it ideal for running on micro-controllers.

It’s a rite of passage for Forth developers to write their own Forth, to understand the combination of parsing, dictionaries and stacks that combine to make an implementation.

I struggled to TDD the TSC2004 code because I didn’t really know where I was going with it, but I did design it in a way that makes it easy to test (and I’ve written tests).

The tiny-forth code is TDDed; it’s nice to actually create some code from scratch, with no framework or anything, just straight-up Go stdlib code.

The dictionary and stack are fairly easy (I use stacks a fair bit in my Go code, and linked lists are trivial); it turns out the parsing and tokenisation of stdin is the hardest part.

It’s trivial to read a line, easy enough to tokenise it, and my tests can use strings.NewReader() as a stand-in for the console, but it turns out parsing Forth input isn’t quite that simple, especially once you get past the first steps of parsing tokens and figuring out whether they’re known words or integers to be put on the stack.
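
Those first steps look roughly like this sketch; it builds on the forthInterpreter type from the snippets further down, and the fn field on executionToken is an assumption on my part:

// eval reads whitespace-separated tokens from the input and either executes
// a known word or pushes an integer onto the stack; strings, floats and
// defining words all need something smarter than bufio.ScanWords.
func (i *forthInterpreter) eval(r io.Reader) error {
	scanner := bufio.NewScanner(r)
	scanner.Split(bufio.ScanWords)
	for scanner.Scan() {
		token := scanner.Text()
		if xt := i.find(token); xt != nil {
			if err := xt.fn(); err != nil {
				return err
			}
			continue
		}

		n, err := strconv.Atoi(token)
		if err != nil {
			return fmt.Errorf("unknown word: %q", token)
		}
		i.stack.push(n)
	}

	return scanner.Err()
}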

Parsing strings, floats and other things adds complexity, and I haven’t gotten as far as “defining words”, i.e. CREATE...DOES>.

It’s still really interesting, and there’s not a massive amount of code (which is still littered with TODOs that mark out design decisions not yet solidified).

Summary

All in all it’s been a fun week, I enjoyed all the projects I worked on.

It’s easy to forget how much I prefer working on TDD-written code: the designs are flexible, changing things is easier, and I find that I make more explicit design decisions.

For example, these are two of the outstanding TODOs.

func newInterpreter(output consoleOutput) *forthInterpreter {
	i := &forthInterpreter{stack: newStack[int](), output: output}
	i.addWord("+", i.word_add)
	// addWord is called for the rest of the words, all of which are methods
	// on the forthInterpreter struct.
	return i
}

// TODO: Shift this to a separate dictionary struct?
func (i *forthInterpreter) find(s string) *executionToken {
	// Iterate through the linked-list of execution tokens.
}

// TODO: Should these be standalone functions that accept an interpreter?
func (i *forthInterpreter) word_add() error {
	// Pop two values from the stack, add them, and push the result.
}

Both of them have design implications, not major ones, but I can refactor with the tests behind me to ensure I don’t break anything.

I have decided to take on maintenance of GitOpsSets again - I really, really want to land support for SSA and the external generation mechanism.

I’ve watched as folks fork the old Tekton Poller and (probably) realise that it needs too much work to get going, while I had a replacement in progress that just needed a push to get it over the line.

The work to harden the GitPoller shouldn’t be massive, and I can do that in a few evenings. It turns out to be pretty useful for testing things in Tekton, because you don’t need to change anything to trigger a push; just delete and recreate the GitPoller.

I enjoyed the TinyGo; it’s slightly different from the main Go compiler, but the same testing techniques work, so once I get the hang of it, I should be able to TDD it like everything else. Integration tests will be a bit trickier, but I think they’re doable.

I did write a little bit of Zig, reimplementing a simple CPU; it’s definitely different from most of my current toolkit of languages, but I didn’t write a lot of code - I can see myself coming back to this.

test "SyntheticCpu step" {
    const multiply: []const u8 = &.{ 0x04, 0x00, 0xc9, 0xff, 0xff, 0xff, 0x12, 0x00, 0xff, 0xff, 0xff, 0xff, 0x08, 0x00, 0x00, 0x00 };
    var cpu = SyntheticCpu.init(multiply);
    try expectEqual(0, cpu.ip);

    cpu.step();

    try expectEqual(6, cpu.ip);
    try expectEqual(0xc9, cpu.regs[0]);

    cpu.step();

    try expectEqual(12, cpu.ip);
    try expect(cpu.flags[CPU_FLAG_OF] == false);
    try expectEqual(55, cpu.regs[0]);
}

Finally, the Forth interpreter might fall by the wayside; it was an interesting experiment, but there are already micro-controller Forths, and if anything has to get dropped from this week, that’ll be the thing.

I really enjoyed Hackweek, and I’m looking forward to getting back to customer problems, and hopefully finally getting a few branches over the line.