Playwright and testing multiple environments

Following on from my post on configuring Cypress to test multiple environments, here’s how I did the same thing in Playwright (the Node version).

As before, my “Use Case” is that I have some number of defined environments in Kubernetes clusters (live, qa, dev), plus some number of personal developer environments…. and I don’t want those personal configs getting put into the repo (developers may want to use personal login details, for example).

The solution that worked for me is essentially the same as the Cypress idea: there are multiple config files, and an environment variable selects between them. In this case, I’m “hacking” the idea of .env to my benefit.

The whole idea of dotenv is that an external file loads a bunch of values into process.env for us … and, usefully, you can tell dotenv the path of the file to load. As with the Cypress example, we use an environment variable to select the one we want.

The “magic” happens in the central playwright.config.ts file:

// playwright.config.ts
import { defineConfig, devices } from '@playwright/test';
import * as dotenv from 'dotenv';

// import the right config - default to beta, change using a `CONTEXT` env variable
const context: string = process.env.CONTEXT ?? 'beta';
dotenv.config({ path: `myConfigs/env-${context}` });

export default defineConfig({
  .....
});

… and we place the various config files in, you guessed it, a directory called myConfigs – naming each file env-<context>:

base_url=https://example.com
something=something_else
ad=nauseam
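
To keep the personal configs out of the repo, the directory can be ignored and the shared files explicitly re-included. A sketch of the .gitignore entries, assuming env-beta, env-live, env-qa and env-dev are the shared configs:

# ignore personal configs; re-include the shared ones
myConfigs/env-*
!myConfigs/env-beta
!myConfigs/env-live
!myConfigs/env-qa
!myConfigs/env-dev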

In the tests, we then have something akin to:

import { test, expect } from '@playwright/test';

const baseUrl = process.env.base_url ?? "https://example.com";
const something = process.env.something ?? "";

test.describe('Test something', () => {
    ....
});

To select the appropriate config file, run:

CONTEXT=live npx playwright test

where the value of CONTEXT matches the identifier of the config file.

Cypress 10 and testing multiple environments

Cypress is a tool for end-to-end testing of web services.

One of the facets of web services is you often have multiple environments: the actual “live” service, a Beta/QA instance, and maybe some specialist variants… and it would be good to be able to apply the tests to any of these environments.

Before Cypress 10, configuration was through a file called cypress.json… and there is plenty of documentation on the internet to create cypress/config/foo.json files to modify the configuration for each environment.

With Cypress 10, this changed: cypress.json became cypress.config.js (or cypress.config.ts, if you’re going that route) – and the cypress/config/foo.json solutions no longer work.

My “Use Case” is I have some number of defined environments in Kubernetes Clusters (live, qa, dev), plus some number of personal developer environments…. and I don’t want those personal configs getting put into the repo (developers may want to use personal login details, for example)

I found one solution for Cypress 10 [https://dev.to/samelawrence/how-to-test-in-multiple-environments-in-cypress-10-1i9h], where they put the multiple config elements into setupNodeEvents in cypress.config – however, that doesn’t work for my case: no personal developer config in files in the repo.

This is my solution – there may be better, but this works for me:

// cypress.config.ts
import { defineConfig } from 'cypress';
import * as fs from 'fs';

function getConfigurationByFile(file: string) {
  const pathToConfigFile = 'cypress/config/' + file + '.json';
  return JSON.parse(fs.readFileSync(pathToConfigFile, 'utf-8'));
}

export default defineConfig({
  e2e: {
    baseUrl: 'https://beta.example.com', // no trailing slash
    specPattern: 'cypress/e2e/*.{spec.ts,cy.js}',
    env: {
      login_username: '<username>',
      login_password: '<password>',
      something: 'interesting',
    },
    // various other config items

    setupNodeEvents(on, config) {
      if (config.env.context) {
        const localData = getConfigurationByFile(config.env.context);
        if (localData.baseUrl) {
          config.baseUrl = localData.baseUrl;
        }
        if (localData.env) {
          config.env = localData.env;
        }
      }

      // IMPORTANT return the updated config object
      return config;
    },
  },
});

(The code looks a bit weird because I use ESLint, and it insists on import statements rather than require calls – YMMV.)

This sets up a default configuration, with some env variables…. and the setupNodeEvents routine looks for an env field named context – if it exists, the config is modified from the matching file.

// cypress/config/live.json
{
  "baseUrl": "https://example.com",
  "env": {
    "login_username": "fred",
    "login_password": "flintstone",
  }
}

The config-updating code in this example is pretty crude – it only changes baseUrl & env, and it simply replaces one value with the other – it makes no attempt to merge lists or dicts in any way… so, in this example, the something env variable is lost.
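
If you wanted to merge rather than replace, a shallow merge over the defaults would do it – a minimal sketch, as a drop-in for the config.env assignment above:

// merge the file's env over the defaults, instead of replacing them wholesale
if (localData.env) {
  config.env = { ...config.env, ...localData.env };
}

With that, the default something: 'interesting' survives unless the JSON file explicitly overrides it.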

To select the appropriate config-file, run:

npx cypress run --env context=live

…. where the value of the context variable matches the name of the JSON file.

Authentication vs Authorisation

The two are often combined into a single action… logging in… but are actually very different:

Consider an aeroplane flight – that requires both Authentication and Authorisation.

When you line up to board a plane, they check both your passport and your boarding-pass/ticket.

  • The Passport “Authenticates” who you are, but says nothing about your place on the plane
  • The boarding-pass “Authorises” a place on the plane, but says nothing about who you are

The boarding crew check your passport [Authentication] and boarding pass [Authorisation], and possibly cross-check with a list of recognised names.

If either is wrong – perhaps a boarding pass for a later flight, or a ticket booked in someone else’s name – you are not permitted onto the plane.

Direnv, pyenv, and shared libraries

direnv (https://direnv.net/) is great: it lets you set a specific environment when you’re in a specific part of the directory tree…. and you can use that to set up a Python virtual environment.

pyenv (https://github.com/pyenv/pyenv) is great: it lets you set a specific version of Python for your virtual environment.
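
Combining the two, the .envrc for a project directory can be tiny. A minimal sketch – this assumes a direnv version whose stdlib provides the layout pyenv helper:

# .envrc – pin the Python version via pyenv, and create/activate
# a per-directory virtual environment on top of it
layout pyenv 3.7.6

Run direnv allow once, and the environment loads automatically whenever you cd into the directory.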

The problem is that pyenv install 3.6.7 will install Python 3.6.7, but some libraries won’t then install…. citing a shared-library problem – mod-wsgi being the one that threw me.

To get round this, you need to install python with shared libraries enabled:

env PYTHON_CONFIGURE_OPTS="--enable-shared" pyenv install --verbose 3.7.6

worked for me…

Hand-crafting constraints in django migrations

So here’s the scenario: you’re extending a django app – and during development, you realise you need to add a multi-field constraint for one of the new tables.

Oh – and let’s assume you’re writing tests… because how else do you know the code works?

Putting the constraint in the model is nice and easy:

class FeatureRestrictions(models.Model):
    # The feature
    feature = models.CharField(
                db_index=True,
                max_length=256,
                null=False,
                blank=False)

    # These three refer to entries in other tables.
    # customer & course may be blank
    organisation = models.ForeignKey(
        "customers.Organisation",
        on_delete=models.CASCADE,
        null=False,
        blank=False
    )
    customer = models.ForeignKey(
        "customers.Customer",
        blank=True,
        null=True,
        on_delete=models.CASCADE
    )

    # This could be falsy (null or "")
    course_code = models.CharField(blank=True,
                    null=True,
                    db_index=True,
                    max_length=256)

    class Meta:
        constraints = [
            models.UniqueConstraint(
                fields=[
                  "feature", "organisation",
                  "customer", "course_code"],
                name="unique_feature_trigger",
            )
        ]

The problem is adding the constraint to the migration file.
(Yes, you can rely on multiple python manage.py makemigrations commands, or you can hand-edit the migration files.)

Adding a unique constraint to a single field is well documented, but the multi-field constraint was hard.

This is what I ended up with:

import django.db.models.deletion
from django.db import migrations, models


class Migration(migrations.Migration):
    # dependencies omitted here – makemigrations generates them

    operations = [
        migrations.CreateModel(
            name="FeatureRestrictions",
            fields=[
                (
                    "id",
                    models.AutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                (
                    "feature",
                    models.CharField(
                        db_index=True,
                        max_length=256
                    )
                ),
                (
                    "course_code",
                    models.CharField(
                        blank=True,
                        db_index=True,
                        max_length=256,
                        null=True
                    ),
                ),
                (
                    "customer",
                    models.ForeignKey(
                        blank=True,
                        null=True,
                        on_delete=django.db.models.deletion.CASCADE,
                        to="customers.customer",
                    ),
                ),
                (
                    "organisation",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        to="customers.organisation",
                    ),
                ),
            ],
        ),
        migrations.AddConstraint(
            model_name="featurerestrictions",
            constraint=models.UniqueConstraint(
                fields=(
                    "feature", "organisation",
                    "customer", "course_code"
                ),
                name="unique_feature_trigger",
            ),
        ),
    ]
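
And, since we’re writing tests: here’s a minimal sketch of a test that proves the constraint actually made it into the database. The import paths and fixture fields are assumptions – adjust for your project. (Note that rows where customer is NULL won’t collide: SQL treats NULLs as distinct in unique constraints.)

from django.db import IntegrityError, transaction
from django.test import TestCase

from customers.models import Customer, Organisation  # assumed import path
from features.models import FeatureRestrictions  # assumed app name


class UniqueFeatureTriggerTest(TestCase):
    def test_duplicate_feature_trigger_is_rejected(self):
        org = Organisation.objects.create(name="acme")  # assumed fields
        customer = Customer.objects.create(name="fred")  # assumed fields

        FeatureRestrictions.objects.create(
            feature="widget",
            organisation=org,
            customer=customer,
            course_code="CS101",
        )

        # an identical second row should violate unique_feature_trigger
        with self.assertRaises(IntegrityError):
            with transaction.atomic():
                FeatureRestrictions.objects.create(
                    feature="widget",
                    organisation=org,
                    customer=customer,
                    course_code="CS101",
                )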

Tool-tips in an accessibility world

In the web-accessibility world, the title attribute is not welcome (a quick web-search will find various articles about it).

In a service I run, I wanted a UI design that presents a library name, a brief description of what it is, the version currently installed, and a link to its documentation. I had some design criteria:

  • We want a lean/clean visual design (there are a LOT of libraries)
  • name & link to description are essential
  • Description & installed version can be considered a progressive enhancement, so don’t need to be displayed

The problem with title is that its text is only available to sighted users who are using a mouse.

I had a play for a few hours…. and have this CodePen: https://codepen.io/perllaghu/pen/mdwRKZG to show my working.

The hidden delights of good design

I shop at a supermarket, and I tend to buy own-brand products.

After :cough: years, I have suddenly realised how much thought has gone into some of their products – in this example: bread & cheese.

Their “value product” loaf of bread comes pre-sliced – and there’s always the right number of slices to make toasted sandwiches…. this means that the number of slices is divisible by 4. Also, each slice is just the right size for my sandwich maker…. they’re too long for my toaster, but just lovely for cheese-on-toast too. (A rival supermarket makes loaves that are a perfect fit for my toaster, but less good for my preferred use-case)

I’ve occasionally had an extra slice…. but only when the final “heel” slice has been unusably thin.

Think about this: this means that the company has decided on a size & shape of their loaves… and have quality-controlled the quantity of ingredients in the raw baking mix to ensure that each loaf is full and rectangular… and not bowed across the top.

Next we have cheese: Grated cheese is great [sorry] for cheese-on-toast… but in a sandwich, sliced cheese is better…. and the “value product” comes in just the right size to fit the slices of bread.

My old boss used to refer to this as “well seamed” – it’s not seamless, you can see the join, but the join you can see is a perfect fit.

What does any of this have to do with a tech/code environment?

Sometimes good product design is so good, the user doesn’t realise there’s an actual design process there.

Strive to consider your product; consider how well another product will seam with it.

Soak-testing k8 components – port-forwarding for the win

The Scenario

We have a multi-component service running in a Kubernetes cluster, with almost everything talking over inter-pod [i.e., Kubernetes DNS] connections, and a minimal set of public endpoints.

Obviously, each component has its own set of unit-tests, we can hand-test the complex operation of our primary facilities, and we can load-test the system with 1,000 users logging in “simultaneously”.

The Situation

We’ve started getting support calls saying that one component seems to be failing, but only for a couple of users, and only when they’re working with a lot of pieces of data.

The Requirement

We need to soak-test the component.
This is not a test for a large number of users, nor is it a test for a large lump of data…. but a test of a large number of relatively small interactions.

Fine…. but the component we want to test is in the cluster, and doesn’t have a public API.

We could create some test-suite that fires up the public-facing components and pokes them to do the interactions with our target…. but there’s a better way: Kubernetes port-forwarding!

This uses kubectl to set up a port on the local machine that forwards to a pod in the cluster:

kubectl port-forward pod/component-kubernetes-name <local>:<remote>

(see https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
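
So, for example (the pod name here is made up), forwarding local port 9000 to port 9000 on the pod:

kubectl port-forward pod/myComponent-5f7d9 9000:9000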

The Solution

Simple port-forwarding is all very well, and fine for a one-off…. but I want to create a self-contained soak-test script that does the port-forwarding for you (it’s one less thing for someone else to remember to do…. in 6 months’ time, when they’ve forgotten I said it was in the documentation).

I’m writing my test in Python, so we look to the kubernetes package for this.

This is what I have:

import re
import sys

import requests
from kubernetes import client, config
from kubernetes.stream import portforward
from urllib3.util import connection as urllib3_connection

class myComponentSoakTest:
    # These don't change
    service_url = "http://localhost:9000/services/nbexchange/"

    # These get set by configuration
    cluster: str
    namespace: str

    # These are internal class variables
    myComponent_server: str
    k8_api: client.CoreV1Api

    #########
    #
    # Other functions
    #
    #########

    def setup(self):
        #########
        #
        # Other setup code
        #
        #########

        # Can we contact the k8 cluster?
        contexts, active_context = config.list_kube_config_contexts()
        if not contexts:
            sys.exit("Cannot find any context in kube-config file.")
        contexts = [context["name"] for context in contexts]
        if self.cluster not in contexts:
            sys.exit(f"{self.cluster} not in list of known clusters: {contexts}")

        config.load_kube_config(context=self.cluster)
        self.k8_api = client.CoreV1Api()
        pods = self.k8_api.list_namespaced_pod(self.namespace)
        items = list()
        for item in pods.items:
            if re.search(r"myComponent", item.metadata.name):
                items.append(item)
        if not items:
            sys.exit("Failed to find a myComponent server in the cluster")
        if len(items) > 1:
            sys.exit(f"There are too many myComponent servers in the cluster: {items}")
        self.myComponent_server = items[0].metadata.name

        # lifted from https://github.com/kubernetes-client/python/blob/master/examples/pod_portforward.py
        # Monkey patch urllib3.util.connection.create_connection
        def kubernetes_create_connection(*args, **kwargs):
            pf = portforward(
                self.k8_api.connect_get_namespaced_pod_portforward,
                self.myComponent_server,
                self.namespace,
                ports="9000",
            )
            return pf.socket(9000)
        urllib3_connection.create_connection = kubernetes_create_connection

    #########
    #
    # Other functions
    #
    #########


    def main(self):
        self.setup()
        ####
        # rest of testing

if __name__ == "__main__":
    app = myComponentSoakTest()
    app.main()

Now, when your code makes a request along the lines of:

url = self.service_url + path
response = requests.get(url, headers=headers, cookies=cookies)

the connection that would normally go to localhost:9000 is intercepted right down at the socket level, and sent to the component in the cluster.
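
The “rest of testing” in main can then be an ordinary requests loop. A minimal sketch (the path and iteration count are made up):

    def main(self):
        self.setup()
        # a large number of relatively small interactions, each one
        # transparently routed through the port-forward
        for i in range(10_000):
            response = requests.get(self.service_url + "collections")  # hypothetical path
            assert response.ok, f"request {i} failed: {response.status_code}"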

Really sweet….