Planet Scheme
Wednesday, April 22, 2026
jointhefreeworld
Shared Libraries with Jenkins
Writing a Jenkins Shared Library pipeline
A good read to get started on the why and how of all this is the
official Jenkins documentation, which gives a good explanation of the
concept but is quite incomplete when it comes to an actual example
implementation. I have taken most of the ideas and the outline of this
article from Adrian Kuper, so thanks go to him for the original
tutorial.
In this blog post I will try to explain how to setup and develop a
shared pipeline library for Jenkins, that is easy to work on and can be
unit tested with JUnit5 and Mockito.
This blog post is kinda long and touches many topics without explaining
them in full detail.
In order to be able to develop deployment pipelines in the form of a
shared library, we will need to set up a Java & Groovy development
environment. For that purpose we will use the IntelliJ IDEA IDE, which
properly supports Java and Groovy and has Gradle support.
You should begin by downloading OpenJDK 8 from here and installing it
on your machine.
After having installed the package, verify your Java version with the
javac -version command, which should display something like:
$ javac -version
javac 1.8.0_231
Following that, you should download the Groovy language SDK from here
and unzip it into your SDKs folder. Then you should set an environment
variable for GROOVY_HOME in your shell, for me ZSH:

    export GROOVY_HOME="/Users/joe/Development/SDKs/groovy-3.0.7"

and augment your PATH with the Groovy bin folder:

    export PATH="/Users/joe/Development/SDKs/groovy-3.0.7/bin:$PATH"
You then have a setup that will be able to run Java and Groovy code
without problems.
Then let’s create a new IntelliJ IDEA project. I suggest using
IntelliJ IDEA Ultimate for Jenkins shared pipeline development,
because it is the only IDE I know of that properly supports Java and
Groovy, has Gradle support, and comes with excellent plugins,
auto-completion and other amazing features. So, if you don’t have it
installed yet, go ahead and get a license.
Then you can open up the IDE and create a new project, select Gradle and
make sure to set the checkbox on Groovy.
Figure 1:
New Project
Next up, enter a GroupId and an ArtifactId.
Figure 2:
New Project Settings
Ignore the next window (the defaults are fine), click “Next”, enter a
project name and click “Finish”.
Figure 3:
Set defaults
IntelliJ should boot up with your new project. The folder structure in
your project should be something like the following.
Figure 4:
Initial Structure
This is cool for usual Java/Groovy projects, but for our purpose we have
to change things up a bit since Jenkins demands a project structure like
this:
├── build # Gradle compilation results
├── build.gradle # Gradle config for this project
├── gradle # Gradle runtime libraries and JAR files
├── gradlew # UNIX wrapper to run Gradle, generated by the IDE
├── reports # Custom folder where our test coverage reports will go
├── resources # Necessary resources to run your pipelines, think JSON files, necessary config files
├── settings.gradle # Advanced Gradle settings
├── src # Library code will reside here, this is the source code root, organised as a usual Java project
│ └── org
│ └── company
├── test # Unit tests for the library code, the contents of this folder will mimic the src folder structure
└── vars # Globally accessible (from Jenkins) scripts and methods, when loading the library
└── pipeline.gdsl
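If you would rather lay this structure out from a terminal than
reshuffle what IntelliJ generated, a sketch (the org/company folders
mirror the GroupId from the project settings earlier; the Gradle
wrapper files themselves are still best generated by the IDE):

```shell
# Create the folders Jenkins expects for a shared library;
# src mirrors a usual Java source root, vars holds the global steps.
mkdir -p src/org/company test vars resources reports
touch build.gradle settings.gradle vars/pipeline.gdsl
```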
Make sure to add the .gradle, build, reports and .idea folders to
your .gitignore.
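In .gitignore form, that is:

```text
.gradle/
build/
reports/
.idea/
```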
You might be wondering where pipeline.gdsl comes from. Well, it comes
from your Jenkins instance, and depending on the plugins and features
you have installed on it, the file will contain different content. It
can be obtained from the Pipeline Syntax menu as seen in the picture
below. This file will ensure that your IDE understands scripted
pipeline steps. After you have added the contents to this file, a
message should pop up with the text: DSL descriptor file has been
changed and isn’t currently executed, to which you should respond:
Activate Back
Figure 5:
Jenkins IntelliJ Pipeline GDSL
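If you prefer the command line, the GDSL can also be fetched straight
from the Pipeline Syntax page of your Jenkins instance; the host and
credentials below are placeholders:

```shell
# Any Pipeline job's "Pipeline Syntax" page links to this GDSL endpoint.
# Replace host, user and API token with your own values.
curl -u user:apitoken \
  "https://jenkins.example.com/pipeline-syntax/gdsl" \
  -o vars/pipeline.gdsl
```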
Once you are set up with the project structure like above, edit your
build.gradle
so that it resembles:
    buildscript {
        repositories {
            mavenCentral()
        }
        dependencies {
            classpath 'com.eriwen:gradle-cobertura-plugin:1.1.1'
        }
    }

    group 'org.company'
    version '1.0-SNAPSHOT'

    apply plugin: 'groovy'
    apply plugin: 'cobertura'

    sourceCompatibility = 1.8

    repositories {
        mavenCentral()
        maven {
            url 'https://repo.jenkins-ci.org/releases'
        }
        maven {
            url 'https://repo.jenkins-ci.org/public'
        }
    }

    dependencies {
        implementation group: 'org.jenkins-ci.main', name: 'jenkins-core', version: '2.85'
        implementation group: 'org.jenkins-ci.plugins.workflow', name: 'workflow-cps', version: '2.41', ext: 'jar'
        implementation group: 'org.jenkins-ci.plugins.workflow', name: 'workflow-support', version: '2.16', ext: 'jar'
        implementation group: 'org.jenkins-ci.plugins', name: 'script-security', version: '1.34', ext: 'jar'
        implementation 'org.codehaus.groovy:groovy-all:3.0.7'
        testImplementation 'org.junit.jupiter:junit-jupiter-api:5.3.1'
        testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.3.1'
        testImplementation 'org.mockito:mockito-core:2.+'
    }

    test {
        jvmArgs '-noverify'
        useJUnitPlatform()
    }

    sourceSets {
        main {
            groovy {
                // all code files will be in either of the folders
                srcDirs = ['src', 'vars']
            }
        }
        test {
            groovy {
                srcDirs = ['test']
            }
        }
    }

    cobertura {
        format = 'html'
        includes = ['**/*.groovy']
        excludes = ['*_build.groovy']
        reportsDir = file("./reports")
    }
At this point we should have a nice folder structure and enough
dependencies to get to our goal. Cool, it’s time to implement our
shared library!
The General Approach
First a quick run-down on how we build our library and on why we do it
that way:
We will keep the “custom” steps inside vars as simple as possible and
without any real logic. Instead, we create classes (inside src) that
do all the work.
We create an interface, which declares methods for all required Jenkins
steps (sh, bat, error, etc.). The classes call steps only through this
interface.
We write unit tests for our classes like you normally would with JUnit
and Mockito. This way we are able to:
Compile and execute our library/unit tests without Jenkins
Test that our classes work as intended
Test that Jenkins steps are called with the right parameters
Test the behaviour of our code when a Jenkins step fails
Build, test, run metrics and deploy your Jenkins Pipeline Library
through Jenkins itself
Now let’s really get going.
The Interface For Step Access
First, we will create the interface inside org.somecompany that will
be used by all classes to access the regular Jenkins steps like sh or
error. We will start with a simple example, and then I will provide a
more advanced one.
    package org.somecompany

    interface IStepExecutor {
        int sh(String command)

        void error(String message)

        // add more methods for respective steps if needed
    }
This interface is neat, because it can be mocked inside our unit
tests. That way our classes become independent of Jenkins itself. For
now, let’s add an implementation that will be used in our vars Groovy
scripts:
    package org.somecompany

    class StepExecutor implements IStepExecutor {
        // this will be provided by the vars script and
        // lets us access Jenkins steps
        private steps

        StepExecutor(steps) {
            this.steps = steps
        }

        @Override
        int sh(String command) {
            this.steps.sh returnStatus: true, script: "${command}"
        }

        @Override
        void error(String message) {
            this.steps.error(message)
        }
    }
Here is a more complex example, with more available methods. You can
expand on this by looking at the pipeline.gdsl file and taking
abstractions from there, both for methods and for properties.
    import org.jenkinsci.plugins.workflow.cps.EnvActionImpl
    import org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper

    interface IStepExecutor {
        /**
         * Current build environment variables
         */
        public EnvActionImpl env

        /**
         * Current build details
         */
        public RunWrapper currentBuild

        /**
         * Current build parameters
         */
        public Map params

        /**
         * Shell Script
         * @param command
         * @return
         */
        int sh(String command)

        /**
         * Shell Script
         * @param label
         * @param command
         * @return
         */
        int sh(String label, String command)

        /**
         * Error signal
         * @param message
         */
        void error(String message)

        /**
         * Stage of a Jenkins build
         * @param name
         * @param body
         */
        void stage(String name, Closure body)

        /**
         * Execute closures in parallel
         * @param closures
         */
        void parallel(Map closures)

        /**
         * Recursively delete the current directory from the workspace
         */
        void deleteDir()

        /**
         * Update the commit status in GitLab
         * @param name
         * @param status
         */
        void updateGitlabCommitStatus(String name, String status)

        /**
         * Send Slack Message
         * @param channel
         * @param color
         * @param iconEmoji
         * @param message
         */
        void slackSend(String channel, String color, String iconEmoji, String message)

        /**
         * Accept GitHub Merge Request
         * @param useMRDescription
         * @param removeSourceBranch
         */
        void acceptGitHubMR(Boolean useMRDescription, Boolean removeSourceBranch)

        /**
         * Archive JUnit-formatted test results
         * @param location
         */
        void junit(String location)

        /**
         * Stash some files to be used later in the build
         * @param name
         * @param includes
         */
        void stash(String name, String includes)

        /**
         * Restore files previously stashed
         * @param name
         */
        void unstash(String name)

        /**
         * PowerShell Script
         * @param command
         * @return
         */
        int powershell(String command)

        /**
         * PowerShell Script
         * @param label
         * @param command
         * @return
         */
        int powershell(String label, String command)

        /**
         * Change current directory
         * @param directory
         * @param body
         */
        void dir(String directory, Closure body)

        /**
         * Allocate node. Change execution of build to said agent
         * @param name
         * @param body
         */
        void node(String name, Closure body)

        /**
         * Catch error and set build result to failure
         * @param params
         * @param body
         */
        void catchError(Map params, Closure body)

        /**
         * Add Cobertura coverage report result
         * @param location
         */
        void cobertura(String location)
    }
And the implementation:
    import org.jenkinsci.plugins.workflow.cps.EnvActionImpl
    import org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper

    final class StepExecutor implements IStepExecutor {
        // this will be provided by the vars script and
        // lets us access Jenkins steps
        private steps

        public EnvActionImpl env
        public RunWrapper currentBuild
        public Map params

        StepExecutor(steps) {
            this.steps = steps
            this.env = this.steps.env
            this.currentBuild = this.steps.currentBuild
            this.params = Collections.unmodifiableMap(this.steps.params)
        }

        @Override
        int sh(String command) {
            this.steps.sh returnStatus: true, script: "${command}"
        }

        @Override
        int sh(String description, String command) {
            this.steps.sh returnStatus: true, label: "${description}", script: "${command}"
        }

        @Override
        int powershell(String command) {
            this.steps.powershell returnStatus: true, script: "${command}"
        }

        @Override
        int powershell(String description, String command) {
            this.steps.powershell returnStatus: true, label: "${description}", script: "${command}"
        }

        @Override
        void error(String message) {
            this.steps.currentBuild.setResult(JobStatus.Failure)
            this.steps.error(message)
        }

        @Override
        void stage(String name, Closure body) {
            this.steps.stage(name, body)
        }

        @Override
        void parallel(Map closures) {
            this.steps.parallel(closures)
        }

        @Override
        void deleteDir() {
            this.steps.deleteDir()
        }

        @Override
        void updateGitlabCommitStatus(String name, String status) {
            this.steps.updateGitlabCommitStatus name: "${name}", status: "${status}"
        }

        @Override
        void slackSend(String channel, String color, String iconEmoji, String message) {
            this.steps.slackSend baseUrl: "https://hooks.slack.com/services/", botUser: true,
                    channel: "${channel}", color: "${color}", iconEmoji: "${iconEmoji}",
                    message: "${message}", teamDomain: "teamDomain",
                    tokenCredentialId: "token", username: "webhookbot"
        }

        @Override
        void acceptGitHubMR(Boolean useMRDescription, Boolean removeSourceBranch) {
            this.steps.acceptGitHubMR useMRDescription: useMRDescription, removeSourceBranch: removeSourceBranch
        }

        @Override
        void stash(String name, String includes) {
            this.steps.stash name: "${name}", includes: "${includes}"
        }

        @Override
        void unstash(String name) {
            this.steps.unstash name: "${name}"
        }

        @Override
        void dir(String directory, Closure body) {
            this.steps.dir(directory, body)
        }

        @Override
        void node(String name, Closure body) {
            this.steps.node(name, body)
        }

        @Override
        void catchError(Map params, Closure body) {
            this.steps.catchError buildResult: params.buildResult,
                    catchInterruptions: params.catchInterruptions,
                    message: params.message,
                    stageResult: params.stageResult,
                    body: body
        }

        @Override
        void junit(String location) {
            this.steps.junit testResults: "${location}", allowEmptyResults: false
        }

        @Override
        void cobertura(String location) {
            this.steps.cobertura autoUpdateHealth: false,
                    autoUpdateStability: false,
                    coberturaReportFile: "${location}",
                    conditionalCoverageTargets: '70, 0, 0',
                    failUnhealthy: false,
                    failUnstable: false,
                    lineCoverageTargets: '80, 0, 0',
                    maxNumberOfBuilds: 0,
                    methodCoverageTargets: '80, 0, 0',
                    onlyStable: false,
                    sourceEncoding: 'ASCII',
                    zoomCoverageChart: false
        }
    }
Adding Basic Dependency Injection
Because we don’t want to use the above implementation in our unit
tests, we will set up some basic dependency injection in order to swap
the above implementation with a mock during unit tests. If you are not
familiar with dependency injection, you should probably read up on it,
since explaining it here would be out of scope, but you might be fine
with just copy-pasting the code in this chapter and following along.
So, first we create the org.somecompany.ioc package and add an IContext
interface:
    package org.somecompany.ioc

    import org.somecompany.IStepExecutor

    interface IContext {
        IStepExecutor getStepExecutor()
    }
Again, this interface will be mocked for our unit tests. But for
regular execution of our library we still need a default
implementation:
    package org.somecompany.ioc

    import org.somecompany.IStepExecutor
    import org.somecompany.StepExecutor

    class DefaultContext implements IContext, Serializable {
        // the same as in the StepExecutor class
        private steps

        DefaultContext(steps) {
            this.steps = steps
        }

        @Override
        IStepExecutor getStepExecutor() {
            return new StepExecutor(this.steps)
        }
    }
To finish up our basic dependency injection setup, let’s add a “context
registry” that is used to store the current context (DefaultContext
during normal execution and a Mockito mock of IContext during unit
tests):
    package org.somecompany.ioc

    class ContextRegistry implements Serializable {
        private static IContext context

        static void registerContext(IContext context) {
            // qualify the assignment, otherwise the parameter
            // would just be assigned to itself
            ContextRegistry.context = context
        }

        static void registerDefaultContext(Object steps) {
            context = new DefaultContext(steps)
        }

        static IContext getContext() {
            return context
        }
    }
That’s it! Now we are free to code testable Jenkins steps inside vars.
Coding A Custom Jenkins Step
Let’s imagine for our example here that we want to add a step to our
library that calls a class performing some data seeding against a
database. To do this we first add a Groovy script example_build.groovy
to the vars folder, named like the custom step we want to implement.
Since our script is called example_build.groovy, our step will later
be callable as example_build in our Jenkinsfile. Add the following
content to the script for now:
    void call(
        String environment,
        String envFile,
        String dataSeederBranch,
        String deploymentScriptsBranch
    ) {
        // TODO
    }
According to our general idea we want to keep our example_build script
as simple as possible and do all the work inside a unit-testable class.
So let’s create a new class
DataSeederJob
in a new package
org.somecompany.jobs:
    package org.somecompany.jobs

    import org.somecompany.IStepExecutor
    import org.somecompany.ioc.ContextRegistry

    // TargetRepository and SourceControlUtils are small helper classes
    // of the library, not shown in this post
    final class DataSeederJob implements Serializable {
        private String workspace
        private String environment
        private String envFile
        private String dataSeederBranch
        private String deploymentScriptsBranch
        private String seedingEnvironment
        private String seedingMode
        private ArrayList<TargetRepository> repositories

        DataSeederJob(
            String workspace,
            String environment,
            String envFile,
            String dataSeederBranch,
            String deploymentScriptsBranch,
            String seedingEnvironment,
            String seedingMode
        ) {
            this.workspace = workspace
            this.environment = environment
            this.envFile = envFile
            this.dataSeederBranch = dataSeederBranch
            this.deploymentScriptsBranch = deploymentScriptsBranch
            this.seedingEnvironment = seedingEnvironment
            this.seedingMode = seedingMode
            this.repositories = [
                new TargetRepository(
                    "repo1",
                    this.deploymentScriptsBranch,
                    "deploymentscripts"
                ),
                new TargetRepository(
                    "repo2",
                    this.dataSeederBranch,
                    "dataseeder"
                )
            ]
        }

        void build() {
            IStepExecutor steps = ContextRegistry.getContext().getStepExecutor()
            steps.deleteDir()
            steps.stage("Cloning new content", {
                SourceControlUtils.parallelCheckoutCode(steps, this.repositories)
            })
            steps.stage("Preparing application environment", {
                int status = steps.sh("""
                    /bin/cp deploymentscripts/${this.environment}/DataSeeder/${this.envFile} dataseeder/dist/config.toml
                """)
                if (status != 0) {
                    steps.error("Job failed! Copying env file exited with a non-zero status!")
                }
            })
            steps.stage("Running DataSeeder", {
                int status = steps.sh("""
                    cd dataseeder/dist
                    ./goseeders-linux-x86 -env=${this.seedingEnvironment} -mode=${this.seedingMode}
                """)
                if (status != 0) {
                    steps.error("Job failed! Application exited with a non-zero status!")
                }
            })
        }
    }
As you can see, we use the sh, deleteDir, stage and error steps in our
class, but instead of using them directly, we use the ContextRegistry
to get an instance of IStepExecutor and call the Jenkins steps through
that. This way, we can swap out the context when we want to unit test
the build() method later.
Now we can finish our script in the vars folder, which in this case
will also send a Slack message on failure:
    import org.somecompany.IStepExecutor
    import org.somecompany.ioc.ContextRegistry
    import org.somecompany.jobs.DataSeederJob

    // EnvironmentVariables, JobStatus, SlackChannels, SlackColors and
    // SlackEmojis are small constants classes of the library, not shown here
    void call(
        String environment,
        String envFile,
        String dataSeederBranch,
        String deploymentScriptsBranch
    ) {
        ContextRegistry.registerDefaultContext(this)
        IStepExecutor steps = ContextRegistry.getContext().getStepExecutor()
        try {
            DataSeederJob buildExecutor = new DataSeederJob(
                steps.env.getProperty(EnvironmentVariables.workspace),
                environment,
                envFile,
                dataSeederBranch,
                deploymentScriptsBranch,
                steps.params["SeedingEnvironment"] as String,
                steps.params["SeedingMode"] as String
            )
            buildExecutor.build()
        } catch (e) {
            steps.currentBuild.setResult(JobStatus.Failure)
            throw e
        } finally {
            String result = JobStatus.Success
            if (steps.currentBuild.getResult() != null) {
                result = steps.currentBuild.getResult()
            }
            switch (result) {
                case JobStatus.Failure:
                    steps.slackSend(
                        SlackChannels.monitoringChannel,
                        SlackColors.failure,
                        SlackEmojis.failure,
                        String.format("""
                            %s - TEST PIPELINE FRAMEWORK
                            PARAMETERS: %s
                        """,
                            steps.currentBuild.getFullDisplayName(),
                            steps.params.toString()
                        )
                    )
                    break
                default:
                    break
            }
        }
    }
First, we set the context with the context registry. Since we are not
in a unit test, we use the default context. The this that gets passed
into registerDefaultContext() will be saved by the DefaultContext
inside its private steps variable and is used to access Jenkins steps.
After registering the context, we are free to instantiate our
DataSeederJob class and call the build() method doing all the work.
Nice, our vars script is finished. Now we only have to write some unit
tests for our DataSeederJob class.
Adding Unit Tests
At this point writing unit tests should be business as usual. We create
a new test class
JobTest
inside the test folder with package
org.somecompany.jobs
. Before every test, we use Mockito to mock the
IContext
and
IStepExecutor
interfaces and register the mocked
context. Then we can simply create a new
Job
instance in our test and
verify the behaviour of our
build()
method.
Here is the data seeder test class:
    package org.somecompany.jobs

    import org.junit.jupiter.api.BeforeEach
    import org.junit.jupiter.api.Test
    import org.somecompany.IStepExecutor
    import org.somecompany.ioc.ContextRegistry
    import org.somecompany.ioc.IContext

    import static org.mockito.ArgumentMatchers.any
    import static org.mockito.ArgumentMatchers.anyString
    import static org.mockito.Mockito.mock
    import static org.mockito.Mockito.times
    import static org.mockito.Mockito.verify
    import static org.mockito.Mockito.when

    final class DataSeederJobTest {
        private DataSeederJob sut
        protected IContext context
        protected IStepExecutor steps

        @BeforeEach
        void setup() {
            context = mock(IContext.class)
            steps = mock(IStepExecutor.class)
            when(context.getStepExecutor()).thenReturn(steps)
            ContextRegistry.registerContext(context)
        }

        @BeforeEach
        void setupJob() {
            String workspace = "workspace"
            String environment = "environment"
            String envFile = "envFile"
            String dataSeederBranch = "dataSeederBranch"
            String deploymentScriptsBranch = "deploymentScriptsBranch"
            String seedingEnvironment = "seedingEnvironment"
            String seedingMode = "seedingMode"
            this.sut = new DataSeederJob(
                workspace,
                environment,
                envFile,
                dataSeederBranch,
                deploymentScriptsBranch,
                seedingEnvironment,
                seedingMode
            )
        }

        @Test
        void jobBuildCallsDeleteDirStep() {
            this.sut.build()
            verify(steps).deleteDir()
        }

        @Test
        void jobBuildCallsStageSteps() {
            this.sut.build()
            verify(steps, times(3)).stage(anyString(), any(Closure))
        }
    }
Another test class with several example tests, but unrelated to the data
seeder:
    package org.somecompany.jobs

    import org.junit.jupiter.api.BeforeEach
    import org.junit.jupiter.api.Test
    import org.somecompany.IStepExecutor
    import org.somecompany.ioc.ContextRegistry
    import org.somecompany.ioc.IContext

    import static org.mockito.ArgumentMatchers.any
    import static org.mockito.ArgumentMatchers.anyString
    import static org.mockito.Mockito.*

    final class GenericGoJobTest {
        private GenericGoJob sut
        private String workspace = "workspace"
        private String appName = "appName"
        private String releaseDir = "releaseDir"
        private String repoName = "repoName"
        private String serviceName = "serviceName"
        private Boolean shouldUnitTest = false
        private Boolean deployToDifferentAgents = false
        private ArrayList<String> destinationAgents = ["destinationAgent"]
        private String mainGoFileLocation = "mainGoFileLocation"
        protected IContext context
        protected IStepExecutor steps

        @BeforeEach
        void setup() {
            context = mock(IContext.class)
            steps = mock(IStepExecutor.class)
            when(context.getStepExecutor()).thenReturn(steps)
            ContextRegistry.registerContext(context)
        }

        @BeforeEach
        void setupJob() {
            this.sut = new GenericGoJob(
                this.workspace,
                this.appName,
                this.releaseDir,
                this.repoName,
                this.serviceName,
                this.mainGoFileLocation,
                this.shouldUnitTest,
                this.deployToDifferentAgents,
                this.destinationAgents
            )
        }

        @Test
        void verifyJobCallsDeleteDir() {
            this.sut.build()
            verify(steps).deleteDir()
        }

        @Test
        void verifyJobCallsStagesCorrectAmountOfTimes() {
            this.sut.build()
            verify(steps, times(5)).stage(anyString(), any(Closure))
        }

        @Test
        void verifyJobCallsStagesCorrectAmountOfTimesWithDifferentAgentOption() {
            this.deployToDifferentAgents = true
            this.setupJob()
            this.sut.build()
            verify(steps, times(4)).stage(anyString(), any(Closure))
        }

        @Test
        void verifyJobDoesNotCallNode() {
            this.sut.build()
            verify(steps, times(0)).node(anyString(), any(Closure))
        }

        @Test
        void verifyJobCallsNodeWithDifferentAgentOption() {
            this.deployToDifferentAgents = true
            this.setupJob()
            this.sut.build()
            verify(steps, times(1)).node(anyString(), any(Closure))
        }

        @Test
        void verifyJobCallsStageOneMoreTimeWithUnitTests() {
            this.shouldUnitTest = true
            this.setupJob()
            this.sut.build()
            verify(steps, times(6)).stage(anyString(), any(Closure))
        }

        @Test
        void verifyJobCallsNodeMultipleTimesWithDifferentAgentOption() {
            this.deployToDifferentAgents = true
            this.destinationAgents = [
                "test1",
                "test2",
                "test3",
                "test4",
                "test5",
            ]
            this.setupJob()
            this.sut.build()
            verify(steps, times(5)).node(anyString(), any(Closure))
        }

        @Test
        void verifyDeployGoSystemServiceCallsDir() {
            this.sut.deployGoSystemService(steps)
            verify(steps, times(1)).dir(anyString(), any(Closure))
        }

        @Test
        void verifyBuildGoApplicationCallsSh() {
            this.sut.buildGoApplication(steps)
            verify(steps, times(1)).sh(anyString())
        }

        @Test
        void verifyBuildGoApplicationCallsError() {
            when(steps.sh(anyString())).thenReturn(-1)
            this.sut.buildGoApplication(steps)
            verify(steps).error(anyString())
        }

        @Test
        void verifyUnitTestApplicationCallsStage() {
            this.sut.unitTestApplication(steps)
            verify(steps, times(1)).stage(anyString(), any(Closure))
        }

        @Test
        void verifyDeployBuildCallsNodeCorrectly() {
            this.deployToDifferentAgents = true
            this.destinationAgents = [
                "test1",
                "test2",
                "test3",
                "test4",
                "test5",
            ]
            this.setupJob()
            this.sut.deployBuild(steps)
            verify(steps, times(5)).node(anyString(), any(Closure))
        }

        @Test
        void verifyDeployBuildCallsStageCorrect() {
            this.sut.deployBuild(steps)
            verify(steps, times(1)).stage(anyString(), any(Closure))
        }

        @Test
        void verifyPrepareStashContentCallsSh() {
            this.sut.prepareStashContent(steps)
            verify(steps, times(2)).sh(anyString())
        }

        @Test
        void verifyPrepareStashCallsError() {
            when(steps.sh(anyString())).thenReturn(-1)
            this.sut.prepareStashContent(steps)
            verify(steps, times(2)).error(anyString())
        }

        @Test
        void verifyStashBuildCallsSh() {
            this.sut.stashBuild(steps)
            verify(steps, times(1)).sh(anyString())
        }

        @Test
        void verifyStashBuildCallsError() {
            when(steps.sh(anyString())).thenReturn(-1)
            this.sut.stashBuild(steps)
            verify(steps).error(anyString())
        }
    }
You can use the green play buttons on the left of the IntelliJ code
editor to run the tests, which hopefully turn green.
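The same tests can also be run from the command line through the
Gradle wrapper; the cobertura task name assumes the Cobertura plugin
configured in the build.gradle above:

```shell
./gradlew test       # compiles the library and runs the JUnit 5 tests
./gradlew cobertura  # writes the HTML coverage report into ./reports
```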
Wrapping Things Up
That’s basically it. Now it’s time to set up your library with
Jenkins, create a new job and run a Jenkinsfile to test your new
custom example_build step. A simple test Jenkinsfile could look like
this:
    node('master') {
        // Load your library
        library('pipeline-framework@master')

        // call the script with parameters
        // if your call function does not require any params
        // you could simply do example_build.call(), which I prefer,
        // or simply example_build
        example_build.call(
            'Acceptance',
            'config.toml',
            'master',
            'stable'
        )
    }
Then you can decide either to paste this into a Pipeline job directly
in the script box, or to check out this small script from SCM; that is
up to you.
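If the library is registered as a global shared library in your
Jenkins configuration (the name pipeline-framework here is an
assumption carried over from the snippet above), the @Library
annotation can be used instead of the library() step:

```groovy
@Library('pipeline-framework@master') _

node('master') {
    example_build('Acceptance', 'config.toml', 'master', 'stable')
}
```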
Obviously there is still a lot more I could have talked about (things
like unit tests, dependency injection, Gradle, Jenkins configuration,
building and testing the library with Jenkins itself, etc.), but I
wanted to keep this already very long blog post somewhat concise. I do
hope, however, that the general idea and approach became clear and
help you in creating a unit-testable shared library that is more
robust and easier to work on than it normally would be.
One last piece of advice: The unit tests and Gradle setup are pretty
nice and help a ton in easing the development of robust shared
pipelines, but unfortunately there is still quite a bit that can go
wrong inside your pipelines even though the library tests are green.
Things like the following, which mostly happen because of Jenkins’
Groovy and sandbox weirdness:
A class that does not implement Serializable which is necessary,
because “pipelines must survive Jenkins restarts”
Using classes like java.io.File inside your library, which is
prohibited
Syntax and spelling errors in your
Jenkinsfile
Therefore, it might be a good idea to have a Jenkins instance solely
for integration testing, where new and modified vars scripts can be
tested before going “live”.
Again, feel free to write any kind of questions or feedback in the
comments, or contact me directly.
CI / CD
<2021-01-16 Sat>
Shared Libraries with Jenkins and Unit Tests
A Great Programmer
If writing code were a science, all developers would pretty much be the
same. But it is not. And just like in art, no two developers have the
same thinking or perception while working towards the same outcome.
While some struggle to produce the desired outcome, to a few it comes
almost naturally, as if an epiphany hits them the moment they start
writing code or solving a problem.
In a blog post, Steve McConnell, one of the experts in software
engineering, talks about an original study which was carried out in
the late 1960s by Sackman, Erikson, and Grant. They found that the
ratio of initial coding time between the best and worst programmers
was about 20 to 1.
And the most interesting thing was that they found no relationship
between a programmer’s experience and code quality or productivity. In
simple words, writing good code is not the only factor that
differentiates a good programmer from a great one.
Let us start with the good programmers first.
Who is a good programmer?
I would say it is someone with:
Excellent technical skills, who writes clean, neat code.
Solid knowledge of development techniques and problem-solving
expertise.
An understanding of programming best practices and when to employ them.
An abiding passion for programming and a drive to contribute to the
team.
The respect and goodwill of the other members of the team.
So if you are a programmer and you have all the above traits,
Congratulations! You are a good programmer. Be proud of it.
Now coming to the great ones.
They are rare.
Their productivity is 3 times that of a good programmer and 10 times
that of a bad programmer.
They belong to the top 1% who don’t just write code but have a set of
intangible traits that keep them poles apart from other programmers.
TLDR;
Great programmer = Good programmer + a set of intangible traits
While it’s not easy, if you’re dedicated enough, here are those
intangible traits which you can cultivate within yourself to transition
from being a good programmer to becoming a great programmer:
Abrupt learning capability.
They are sharp-minded, which means they can pick up new technologies
quickly and aren’t browbeaten by them. They have the ability to
integrate seemingly disparate bits of information and process
information on the fly.
Every programmer will surely experience a situation where he or she
doesn’t know the answer. Great programmers will find different
resources, talk to the right people and find the solution no matter how
impossible it appears. The best skill anyone can possess is knowing how
to learn, and great programmers have mastered the skill of
self-learning.
A great programmer doesn’t let his ego come in between his work and his
learning process. If he needs to know something, he will approach anyone
in the hierarchy; from the lowest to the highest.
They balance pragmatism and perfectionism.
John Allspaw, Chief Technology Officer at Etsy, makes a good point in
his post On being a senior engineer. He says that top-notch developers
are healthy skeptics, who tend to ask themselves and their peers
questions while they work, such as:
What could I be missing?
How will this not work?
Will you please shoot as many holes as possible into my thinking on
this?
Even if it’s technically sound, is it understandable enough for the rest
of the organization to operate, troubleshoot, and extend it?
The idea behind these questions is that they perfectly understand the
importance of peer review, and that only through solid peer review can
good design decisions be made. So they “beg” for the bad news.
A great programmer will tend to not trust their own code until they’ve
tested it extensively. Having said that, they also have the ability to
understand market dynamics and the need to ship the product at the
earliest. So they have the ability to make both quick and dirty hacks
and elegant and refined solutions, and the wisdom to choose which is
appropriate for a given situation at hand.
Some lesser programmers will lack the extreme attention to detail
necessary for some problems. Others are stuck in perfectionist mode.
Great programmers balance the two with perfect precision.
They have great intuition.
In the sixth book of The Nicomachean Ethics, the famous philosopher
and statesman Aristotle discusses the fourth of five capabilities
people need to have for attaining true knowledge and thus becoming
successful in whatever they do: intuition.
Aristotle’s point is simple. Intuition is the way we start knowing
everything and knowledge gained by intuition must anchor all other
knowledge. In fact, this way of gaining knowledge is so foundational
that justification is impossible. That’s because knowledge by intuition
is not based on a series of facts or a line of reasoning to a
conclusion.
Instead, we know intuitional truth simply by the process of
introspection and immediate awareness. From Steve Jobs to Richard
Branson to Warren Buffett, such intuitives are generally successful in
whatever they do, because they can see things more clearly and find the
best solutions to problems more quickly than others. No doubt, all these
individuals have a huge storage of expert knowledge and experience.
But they also seem to have an abundance of intuition that comes
naturally to them and which enables them to grasp the essence of
complicated problems and find uncannily right solutions. Great
programmers typically display an intuitive understanding of algorithms,
technologies, and software architecture based on their extensive
experience and good development sense. They have the ability to
understand at a glance what tools in their arsenal best fit the problem
at hand.
And their intuitive abilities extend well beyond development and coding.
This makes them highly versatile in articulating both technical and
non-technical problems with both a layman and a specialist audience.
They are visionaries and they love challenges and will often seek to
break their own code (before others do) in their pursuit of excellence.
They are master communicators.
To get your ideas across, you need to keep it simple and communicate as unambiguously as possible. Sounds simple, doesn’t it? Damien Filiatrault has rightly said:
Good communication skills directly correlate with good development
skills.
But unfortunately, a lack of clarity is the root cause of most troubles at work. And this is because of a phenomenon called the Curse
of Knowledge. In 1990, a Stanford University graduate student in
psychology named Elizabeth Newton illustrated the curse of knowledge by
studying a simple game in which she assigned people to one of two roles:
“tapper” or “listener.”
Each tapper was asked to pick a well-known song, such as “Happy
Birthday” and tap out the rhythm on a table. The listener’s job was to
guess the song. Over the course of Newton’s experiment, 120 songs were
tapped out. Listeners guessed only three of the songs correctly: a
success ratio of 2.5%.
But before they guessed, Newton asked the tappers to predict the
probability that listeners would guess correctly. They predicted 50%.
The tappers got their message across one time in 40, but they thought
they would get it across one time in 2.
Why did this happen? When a tapper taps, it is impossible for her to
avoid hearing the tune playing along to her taps. Meanwhile, all the
listener can hear is a kind of bizarre Morse code. Yet the tappers were
flabbergasted by how hard the listeners had to work to pick up the tune.
The problem is that once we know something — say, the melody of a song
— we find it hard to imagine not knowing it.
Our knowledge has “cursed” us. We have difficulty sharing it with others
because we cannot readily re-create their state of mind. That is why great programmers always confirm that their message was understood after communicating it to the team.
They also can understand problems clearly, break them down into
hypotheses and propose solutions cohesively. They understand concepts
quickly or ask the right questions to understand, and above all, they
don’t need every small bit to be written down in a document.
So if you want to become a great programmer, you need to make sure there
is effective communication between you and your team. This not only
keeps you at a higher plane of commitment but also shows your superiors
that you are genuinely interested and invested in delivering a quality
product.
Last thoughts
So as you can see, to be best-of-class in your field, you don’t need any fancy degrees or even money to invest. All you need is a willingness to learn, insatiable curiosity, and an intuitive ability to connect things based on the knowledge you have gained over the years. Also important is the need to cultivate a healthy, positive attitude, ditch the ego, and be willing to take and act on feedback. Once
you do all this, I promise you will achieve greatness. As Bob Marley
stated:
The greatness of a man is not in how much wealth he acquires, but in his
integrity and his ability to affect those around him positively.
Wednesday, April 22, 2026
The Best Programmers I Know
Original article by Matthias Endler
I have met a lot of developers in my life.
Lately, I asked myself: “What does it take to be one of the best? What do they all have in common?”
In the hope that this will be an inspiration to someone out there, I wrote down the traits I observed in the most exceptional people in our craft. I wish I had that list when I was starting out. Had I followed this path, it would have saved me a lot of time.
Read the Reference
If there was one thing that I should have done as a young programmer, it would have been to read the reference of the thing I was using.
E.g. read the Apache Webserver Documentation, the Python Standard Library reference, or the TOML spec.
Don’t go to Stack Overflow, don’t ask the LLM, don’t guess; just go straight to the source.
Oftentimes, it’s surprisingly accessible and well-written.
Know Your Tools Really Well
Great devs understand the technologies they use on a fundamental level.
It’s one thing to be able to use a tool and a whole other thing to truly grok (understand) it.
A mere user will fumble around, get confused easily, hold it wrong and not optimize the config.
An expert goes in (after reading the reference!)
and sits down to write a config for the tool of which they understand every single line and can explain it to a colleague.
That leaves no room for doubt!
To know a tool well, you have to know:
its history: who created it? Why? To solve which problem?
its present: who maintains it? Where do they work? On what?
its limitations: when is the tool not a good fit? When does it break?
its ecosystem: what libraries exist? Who uses it? What plugins?
For example, if you are a backend engineer and you make heavy use of Kafka,
I expect you to know a lot about Kafka – not just things you read on Reddit.
At least that’s what I expect if you want to be one of the best engineers.
Read The Error Message
As in: really read the error message and try to understand what’s written.
Turns out, if you just sit and meditate on the error message, it starts to speak to you.
The best engineers can infer a ton of information from very little context.
Just by reading the error message, you can fix most of the problems on your own.
It also feels like a superpower if you help someone who doesn’t have that skill.
Like “reading from a cup” or so.
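To make the point concrete, here is a small Python sketch (the function and names are my own, not from the post) of how much a single exception already tells you before you search anywhere:

```python
# A deliberately failing lookup; the exception alone names the problem.
def load_user(users_by_name, name):
    return users_by_name[name]  # raises KeyError for unknown names

try:
    load_user({"alice": 1}, "bob")
except KeyError as error:
    # The exception type narrows the cause (a missing dict key, not I/O,
    # not a type mismatch), and its message names the offending key.
    diagnosis = f"{type(error).__name__}: missing key {error.args[0]!r}"

print(diagnosis)  # KeyError: missing key 'bob'
```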
Break Down Problems
Everyone gets stuck at times.
The best know how to get unstuck.
They simplify problems until they become digestible.
That’s a hard skill to learn and requires a ton of experience.
Alternatively, you just have awesome problem-solving skills, e.g., you’re clever.
If not, you can train it, but there is no way around breaking down hard problems.
There are problems in this world that are too hard to solve at once for anyone involved.
If you work as a professional developer, that is the bulk of the work you get paid to do:
breaking down problems.
If you do it right, it will feel like cheating:
you just solve simple problems until you’re done.
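As an illustration only (the toy problem and all names below are my invention), breaking one hard-sounding task into three simple ones might look like this in Python:

```python
from collections import Counter

# "Summarize the most common words in a text" sounds like one big problem.
# Broken down, it is three trivial ones, each easy to test in isolation.

def tokenize(text):
    # Problem 1: split text into lowercase words.
    return text.lower().split()

def count_words(words):
    # Problem 2: tally occurrences.
    return Counter(words)

def top_words(text, n):
    # Problem 3: glue the simple pieces together.
    return count_words(tokenize(text)).most_common(n)

print(top_words("the cat and the hat", 1))  # [('the', 2)]
```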
Don’t Be Afraid To Get Your Hands Dirty
The best devs I know read a lot of code and they are not afraid to touch it.
They never say “that’s not for me” or “I can’t help you here.”
Instead, they just start and learn.
Code is just code.
They can just pick up any skill that is required with time and effort.
Before you know it, they become the go-to person in the team for whatever they touched.
Mostly because they were the only ones who were not afraid to touch it in the first place.
Always Help Others
A related point.
Great engineers are in high demand and are always busy, but they always try to help.
That’s because they are naturally curious and their supportive mind is what made them great engineers in the first place.
It’s a sheer joy to have them on your team, because they are problem solvers.
Write
Most awesome engineers are well-spoken and happy to share knowledge.
The best have some outlet for their thoughts: blogs, talks, open source, or a combination of those.
I think there is a strong correlation between writing skills and programming.
All the best engineers I know have good command over at least one human language – often more.
Mastering the way you write is mastering the way you think and vice versa.
A person’s writing style says so much about the way they think.
If it’s confusing and lacks structure, their coding style will be too.
If it’s concise, educational, well-structured, and witty at times, their code will be too.
Excellent programmers find joy in playing with words.
Never Stop Learning
Some of the best devs I know are 60+ years old.
They can run circles around me.
Part of the reason is that they keep learning.
If there is a new tool they haven’t tried or a language they like, they will learn it.
This way, they always stay on top of things without much effort.
That is not to be taken for granted: a lot of people stop learning really quickly after they graduate from university or start their first job.
They get stuck thinking that what they were taught in school is the “right” way to do things.
Everything new is bad and not worth their time.
So there are 25-year-olds who are “mentally retired” and 68-year-olds who are still fresh in their mind.
I try to one day belong to the latter group.
Somewhat related, the best engineers don’t follow trends, but they will always carefully evaluate the benefits of new technology. If they dismiss it, they can tell you exactly why, when the technology would be a good choice, and what the alternatives are.
Status Doesn’t Matter
The best devs talk to principal engineers and junior devs alike. There is no hierarchy.
They try to learn from everyone, young and old.
The newcomers often aren’t entrenched in office politics yet and still have a fresh mind.
They don’t know why things are hard, and so they propose creative solutions.
Maybe the obstacles from the past are no more, which makes these people a great source of inspiration.
Build a Reputation
You can be a solid engineer if you do good work, but you can only be one of the best if you’re known for your good work; at least within a (larger) organization.
There are many ways to build a reputation for yourself:
You built and shipped a critical service for a (larger) org.
You wrote a famous tool.
You contribute to a popular open-source tool.
You wrote a book that is often mentioned.
Why do I think it is important to be known for your work?
All of the above are ways to extend your radius of impact in the community.
Famous developers impact way more people than non-famous developers.
There’s only so much code you can write.
If you want to “scale” your impact, you have to become a thought leader.
Building a reputation is a long-term goal.
It doesn’t happen overnight, nor does it have to.
And it won’t happen by accident.
You show up every day and do the work.
Over time, the work will speak for itself.
More people will trust you and your work and they will want to work with you.
You will work on more prestigious projects and the circle will grow.
I once heard the idea that your latest work should overshadow everything you did before.
If it does, that’s a good sign that you are on the right track.
Have Patience
You need patience with computers and humans.
Especially with yourself.
Not everything will work right away and people take time to learn.
It’s not that people around you are stupid; they just have incomplete information.
Without patience, it will feel like the world is against you and
everyone around you is just incompetent. That’s a miserable place to be.
You’re too clever for your own good.
To be one of the best, you need an incredible amount of patience, focus, and dedication.
You can’t afford to get distracted easily if you want to solve hard problems.
You have to return to the keyboard to get over it.
You have to put in the work to push a project over the finishing line.
And if you can do so while not being an arrogant prick, that’s even better.
That’s what separates the best from the rest.
Never Blame the Computer
Most developers blame the software, other people, their dog, or the weather for
flaky, seemingly “random” bugs.
The best devs don’t.
No matter how erratic or mischievous the behavior of a computer seems, there is always a logical explanation: you just haven’t found it yet!
The best keep digging until they find the reason.
They might not find the reason immediately, they might never find it,
but they never blame external circumstances.
With this attitude, they are able to make incredible progress and learn things that others fail to.
When you mistake bugs for incomprehensible magic, magic is what it will always be.
Don’t Be Afraid to Say “I Don’t Know”
In job interviews, I pushed candidates hard to at least say “I don’t know” once.
The reason was not that I wanted to look superior (although some people certainly had that impression).
No, I wanted to reach the boundary of their knowledge.
I wanted to stand with them on the edge of what they thought they knew.
Often, I myself didn’t know the answer. And to be honest, I didn’t care about the answer.
What I cared about was when people bullshitted their way through the interview.
The best candidates said
“Huh, I don’t know, but that’s an interesting question! If I had to guess, I would say…”
and then they would proceed to deduce the answer.
That’s a sign that you have the potential to be a great engineer.
If you are afraid to say “I don’t know”, you come from a position of hubris or defensiveness.
I don’t like bullshitters on my team.
Better to acknowledge that you can’t know everything.
Once you accept that, you allow yourself to learn.
“The important thing is not to stop questioning,” said Albert Einstein.
Don’t Guess
“In the Face of Ambiguity, Refuse the Temptation to Guess.”
That is one of my favorite rules in PEP 20 – The Zen of Python.
And it’s so, so tempting to guess!
I’ve been there many times and I failed with my own ambition.
When you guess, two things can happen:
In the best case, you’re wrong: your incorrect assumptions lead to a bug, you notice, and you correct course.
In the worst case, you are right… by accident, and you never stop to second-guess yourself.
You build up your mental model based on the wrong assumptions.
This can haunt you for a long time.
Again, resist the urge to guess.
Ask questions, read the reference, use a debugger, be thorough.
Do what it takes to get the answer.
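A tiny Python illustration of the rule (my own example, not the post’s): round() is a classic place where a guess feels safe and is wrong, because Python 3 rounds ties to the nearest even number.

```python
# Tempting guess: round(2.5) == 3.
# The reference says otherwise: Python 3 uses "banker's rounding",
# sending halves to the nearest even integer.
results = {x: round(x) for x in (0.5, 1.5, 2.5, 3.5)}
print(results)  # {0.5: 0, 1.5: 2, 2.5: 2, 3.5: 4}
```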
Keep It Simple
Clever engineers write clever code.
Exceptional engineers write simple code.
That’s because most of the time, simple is enough.
And simple is more maintainable than complex.
Sometimes it does matter to get things right, but knowing the difference is what separates the best from the rest.
You can achieve a whole lot by keeping it simple.
Focus on the right things.
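A minimal Python sketch of the difference (my example): both functions below work, but only one can be verified at a glance.

```python
# "Clever": swap two integers without a temporary, via XOR.
# Correct, but ints-only, and every reader must stop to verify it.
def swap_clever(a, b):
    a ^= b
    b ^= a
    a ^= b
    return a, b

# Simple: tuple unpacking states the intent directly and works for any values.
def swap_simple(a, b):
    return b, a

assert swap_clever(3, 7) == swap_simple(3, 7) == (7, 3)
```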
Final Thoughts
The above is not a checklist or a competition;
and great engineering is not a race.
Just don’t trick yourself into thinking that you can skip the hard work.
There is no shortcut. Good luck with your journey.
Wednesday, April 22, 2026
Black Box Testing in modern Software
Testing software at a high level means freedom of implementation and refactoring, while maintaining guarantees of correctness. This is something all good engineers love 💘
Black-box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings.
In my opinion, for complex systems and system interactions, one should default to writing a lot of black-box tests, and some unit tests where it makes sense, and leave other kinds of integration tests out of scope, until it might make sense.
Black box tests are the best at BDD
Black-box tests encourage functional, requirement-driven and behaviour-driven testing.
Testing a system becomes simpler and we can cover many more “real” edge cases.
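As a minimal Python sketch (the cart API here is invented for illustration): a black-box test exercises only the public interface and asserts observable behaviour, leaving the internals free to change.

```python
class ShoppingCart:
    # The internal dict is a private detail; a black-box test must keep
    # passing even if it becomes a list, an ORM model, or a database row.
    def __init__(self):
        self._quantities = {}

    def add(self, item, quantity=1):
        self._quantities[item] = self._quantities.get(item, 0) + quantity

    def total_items(self):
        return sum(self._quantities.values())

# Black-box test: stimulate behaviour, observe outputs, never peek inside.
cart = ShoppingCart()
cart.add("apple")
cart.add("apple", 2)
assert cart.total_items() == 3
```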
Unit testing still has its place
Unit testing is an invaluable tool that should also be used in parallel to black box tests and other high level tests.
These tests are very easy to write and run, are useful, and also give good information about a system.
Ideally, your “core domain” should be fully tested, and your “business logic” should be encoded in more pure code, ideally as data.
Dealing with dependencies
When a system starts using external dependencies such as databases, caches, or external APIs, you have three choices:
create and use mocks
create and use stubs (dummy implementations)
use real dependencies
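To illustrate the stub option (all names below are my own, hypothetical ones): a stub is a real, if simplistic, implementation of the dependency, so tests read like ordinary usage rather than mock-expectation scripts.

```python
# The production code depends only on this small interface.
class UserRepository:
    def save(self, user):
        raise NotImplementedError

    def find(self, name):
        raise NotImplementedError

# Stub: a dummy implementation backed by a dict. Unlike a mock,
# it has genuine behaviour, so no call expectations need scripting.
class InMemoryUserRepository(UserRepository):
    def __init__(self):
        self._users = {}

    def save(self, user):
        self._users[user["name"]] = user

    def find(self, name):
        return self._users.get(name)

def register(repository, name):
    # Code under test: works against the interface, real or stubbed.
    repository.save({"name": name, "active": True})
    return repository.find(name)

assert register(InMemoryUserRepository(), "ada")["active"] is True
```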
Mocks are inherently evil
Mocks are generally painful to write, read, debug and maintain.
They should be avoided when possible; we should use real implementations for most tests of a system.
testcontainers is a great library for this purpose, and I use it extensively.
Java baggage
Lots of developers coming from a traditional, enterprisey JDK environment know what we mean: the tendency to put one class per file and write tests for every single class and every single method, along with an extensive Mockito ideology.
The cure
Don’t marry the test structure to the code structure. Don’t test every individual class when it isn’t needed.
Ensure we test functionality; almost never test how the code is written / implemented (this gives flexibility and freedom to refactor).
Write many black box tests. Attempt to test all “user paths” and possible interactions with the system, good and bad, happy and unhappy, low load high load, etc.
Write many unit tests where it makes sense, preferably using data generators, property-based testing, or auto-generated test cases with a set of inputs.
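A standard-library-only sketch of the generated-inputs idea (a library like Hypothesis does this properly; the function and properties below are my own toy example):

```python
import random

def dedupe(items):
    # Function under test: drop duplicates, preserving first occurrence.
    seen, result = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# Property-style test: generate many random inputs and check invariants
# that must hold for all of them, rather than a few hand-picked cases.
rng = random.Random(42)  # fixed seed keeps the test reproducible
for _ in range(200):
    data = [rng.randint(0, 9) for _ in range(rng.randint(0, 20))]
    output = dedupe(data)
    assert len(output) == len(set(output))  # no duplicates remain
    assert set(output) == set(data)         # no elements lost
```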
What’s in it for me?
Easier and simpler tests of the entire system, tests have lower complexity.
Easy to cover 100% of a “user flow” or a “data flow”.
Low chance of false positives (partly thanks to avoiding mocks too).
This allows for a good test-driven development approach, and more confidence in the product.
Testers require less technical knowledge, programming or IT skills, and do not need to learn all the nitty-gritty implementation details of the system.
Looser coupling to the code means more freedom to implement and refactor.
Wednesday, April 22, 2026
Breaking free of Javascript
Javascript has a stranglehold on all Front End Development. If you write
code for the browser, then it’s most likely written directly in
Javascript or its very close cousin TypeScript. The problem is that
Javascript is a terrible language.
TLDR: You should break free from Javascript by learning PureScript which
compiles to Javascript.
Typescript and other attempts to curtail Javascript are about as
effective as a band-aid on a puncture wound. One might argue that it’s
better than nothing but eventually, you’re still going to bleed out.
The language has undergone many changes since its initial development, which took a whopping 10 days, but all of these changes are just polishing a turd.
Javascript is a veritable Smorgasbord of language paradigms. It has
some Object Oriented, Functional and Procedural features all mixed
together in an unpalatable Schizophrenic Goulash (mixed metaphor
intended).
There are more bad parts of Javascript than good parts and anyone
working in Javascript on a daily basis will attest to the fact that
being a good Javascript Developer is more about knowing what NOT to do.
Seasoned Javascript Developers have a litany of language constructs that
they routinely avoid like the plague lest they fall victim to the
plethora of runtime exceptions that are routinely encountered in
production, e.g. the dreaded “undefined is not a function”.
Javascript’s reign is supreme thanks to its monopolistic hold on the
Browser. All previous attempts to extricate the Browser from the tyranny
of Javascript have long since failed leaving most leery of attempting
yet another failed coup d’état.
Freedom or Death
What options do we have?
We could decide not to develop applications for the Browser. I like to
think that we should develop mostly on the server-side by default,
unless the situation requires very advanced client-side features and
state. I would personally recommend Go and Haskell as strong candidates
for server side applications. This of course could be done in a number
of different languages.
We could create an Open Source Browser that would allow for other
languages or perhaps it could have a built-in language that’s “better”
than Javascript.
But our Browser would be completely incompatible with every single web
site on the planet. This may appear to be Freedom at first but it most
definitely is Death. No one but the most fervent would adopt this.
We could develop a Browser Extension that allows for a better developer
experience. This too has been tried before and since we’re reaching a
near monopoly in Browser development by only the largest of companies,
our initial taste of Freedom could, on a whim of a corporate giant, be
transformed into sudden Death.
Those most affected by an oppressive environment are too busy trying to survive in the current climate to overturn it.
Revolutions are rare, dangerous and costly but subversion isn’t.
If you can’t beat ’em, join ’em (sort of)
Javascript’s stranglehold on the Browser isn’t a new phenomenon. We’ve
seen this very thing before. In fact, it’s rampant in the hardware
world.
A Microprocessor has a single instruction set that can never be
superseded by any other. Not without completely replacing the hardware.
Yet, we don’t call for revolution but instead work to insulate ourselves
from the deficiencies of such a system. We did this over half a century
ago when we created high-level languages.
Any flaws or complexities in the underlying architecture are squelched
by an abstraction layer that frees us from having to regularly consider
the pitfalls.
By writing a compiler, we freed ourselves from the tyranny of a single
platform and by using this same approach, we can free ourselves from our
current dilemma.
A Horse of a Different Color
What we want is a way to write Javascript without having to write
Javascript. To do that, we’re going to need a Transpiler.
A Transpiler will compile code in one language and produce code in
another. Technically, CoffeeScript, TypeScript and Babel are Transpilers
but they start with nearly Javascript and produce Javascript.
These solutions do not give us the benefits that we’re hoping for. This
is the equivalent of writing in Assembly Language instead of Machine
Code.
What we want is a whole new language. One that avoids all of the
terrible design decisions of Javascript. There are many languages that
transpile to Javascript that are far superior to Javascript.
I’m going to concentrate on Functional Programming Languages only
because it’s becoming very clear that Functional Programming is the
future of our industry.
This is evidenced by the mass adoption of Functional Features in today’s
most popular languages. This is historically what’s been seen right
before a major paradigm shift is about to occur in the software
industry.
For the curious, Richard Feldman does a wonderful job of making this
argument in this entertaining and illuminating talk The Next Paradigm
Shift in Programming.
Go
GopherJS is an honourable mention and could be exactly what you’re searching for if you are into Go:
see GopherJS
on GitHub
Elm
Elm is a great beginner language for Functional Programming. The
ecosystem is mature and I have personally been responsible for a team
that put over 160K lines of Elm code into production without a single
line of Elm code producing a Runtime Error.
Elm is a dialect of ML and a very small subset of Haskell.
If you want to dip your toe into the Statically Typed, Purely Functional Programming world, then Elm can be a great starting point.
Unfortunately, Elm’s lack of power quickly shows as your application
becomes complex and the resources for learning are somewhat limited. The
go-to book for learning Elm is Elm in Action.
ReasonML
Facebook uses Functional Programming Languages, one of which is Haskell,
the granddaddy of them all. The other notable language is the one they
developed called ReasonML. It’s a dialect of OCaml, which is a dialect
of ML.
It touts safety and interoperability with both the Javascript and OCaml ecosystems.
Unfortunately, Reason isn’t a Pure Functional Language and so it suffers
from many of the problems that Imperative Languages do. It’s a
compromise the way that TypeScript is a compromise.
There are a few books on Reason: Web Development with ReasonML: Type-Safe, Functional Programming for JavaScript Developers, and ReasonML Quick Start Guide: Build fast and type-safe React applications that leverage the JavaScript and OCaml ecosystems.
Fable
For those married to the .NET ecosystem, there’s Fable, a language that
lets you write in Microsoft’s Functional Programming Language, F#, and
compiles to Javascript.
Fable supports most of the F# core library and most of the commonly used .NET APIs.
Unfortunately, like ReasonML, Fable is not a Pure Functional Programming
language either.
I couldn’t seem to find any books on it but here’s a free online “book”,
The Elmish Book. The use of the word “Elm” in the title seems to be
coincidental and has nothing to do with the Elm language.
PureScript
PureScript was developed by Haskell developers who stole as much as they
could from Haskell while making some great improvements along the way.
This language is my personal favorite. All new projects at my company
will be developed in this language. It has all of the expressive power
of Haskell, yet it runs beautifully in the Browser, producing clean, readable Javascript.
It’s a Pure Functional Language (hence its name) just like Haskell. It
is the most powerful of all the languages listed here but unfortunately
has a big downside.
The learning curve is pretty steep. That’s what motivated me to write my
book to make that process as painless as possible. Once you put in the
work to learn PureScript, it will pay you back tenfold in dividends.
There are two books I know of. The free one, PureScript by Example, is great if you already know Haskell.
If you’ve never seen a Functional Language or if you have no idea what
Functional Programming is and you’re interested in the most powerful of
all of the aforementioned languages, then I’d suggest you consider my
book, Functional Programming Made Easier: A Step-by-Step Guide.
It’s a complete Functional Programming course for Imperative Programmers
of any language. It starts from the very beginning of Functional
Programming and by the end, you’ve developed a web-server and front-end
single page application in PureScript.
Too Good to be True?
All of these languages are very different from Javascript. They can be
downright scary. It’s not like going from Java to C# or from Java to
Javascript.
These are Functional Programming Languages. You might be thinking that
what you’d really like is Java, C# or Python but in the Browser.
But the biggest gains in program safety and developer productivity are only possible with a Functional Programming Language.
That’s a pretty bold claim, and I invite the unconvinced to ask programmers who routinely use a Functional Language in their professional work whether they miss their old Imperative Programming Languages.
I’d be willing to bet that 99 out of 100 would say that they’d never go
back to the old way of programming. Then ask them how much effort it
took to learn these newfangled beasts.
Wednesday, April 22, 2026
Don’t Reward the Best Firefighter
The “Hero” Trap
Every engineering team has that one “hero.”
You know the one: the 3 AM incident response god.
They dive into a production dumpster fire, pull a rabbit out of a hat, and somehow bring the system back online.
We love them for it. We give them the shout-outs in Slack, the fat bonuses, and the “Senior Staff” titles because they’re the “go-to” person when everything breaks.
But by making them the main character, we’re lowkey ensuring our culture stays a mess.
Visible Heroes vs. Invisible Wins
For every loud firefighter, there’s an invisible fire preventer.
This is the engineer who spends a month refactoring a messy legacy service that no one else wants to touch.
Their work doesn’t show up as a shiny new feature on the roadmap. Their success is silent: it’s the catastrophic outage that didn’t happen six months from now.
And their reward? Often, it’s getting passed over for a promo because their “impact” wasn’t as visible as the person who “saved the day.”
We Built a Broken Game
This is a total incentive fail, and honestly, it’s on all of us. Performance reviews are fundamentally biased toward reactive work.
Managers are great at measuring things that are visible on a dashboard:
Features shipped
Tickets closed
Incidents resolved
There is no column on the spreadsheet for “disasters averted.”
As a result, we’ve built a career ladder that basically encourages engineers to let things smolder, knowing they’ll get more credit for putting out a blaze than for making sure there’s no fire in the first place.
Redefining “Impact”
It’s time to stop treating “impact” as a synonym for “loud activity.” Real impact is the verifiable elimination of future risk.
The Automation Win:
The engineer who fixes a flaky, manual deployment isn’t just “closing a ticket.” They’re giving every dev on the team their time back, forever. That’s massive, compounding impact.
The Refactor Win:
The engineer who cleans up a bug-prone module isn’t just “tidying up.” They are measurably lowering the failure rate for the entire business. That is direct risk reduction.
We need to start hyping up the architects of fireproof buildings, not just the people who are good with a hose.
This takes a conscious effort to hunt for the “invisible” work. We need to use data to quantify risk before things fail, and treat the reduction of that risk as a top-tier contribution.
Next time you’re sitting in a performance calibration, ask yourself the hard question:
Are we promoting the people who are best at navigating a broken system, or the ones who are actually fixing it?
Wednesday, April 22, 2026
German naming convention
Table of Contents
Expect the Violent Psychopath
Naming Tropes
It’s All Greek To Me
German / Dutch Naming Convention
Isomorphic Naming
There’s one thing that could make our life as software engineers much
easier: better naming convention.
Expect the Violent Psychopath
You’re trying to tell a story with your code. Your code should tell that
story clearly, not cryptically, for an audience besides yourself. A good
yardstick for deciding what kind of audience you are writing for is to
imagine someone who has a familiarity with your domain but not your
program’s take on the domain. I think programmers forget that, as they
are authors, they have readers.
A famous and regularly quoted piece of advice comes from the mailing list comp.lang.c, where in 1991 John F. Woods wrote:
Always code as if the guy who ends up maintaining your code will be a
violent psychopath who knows where you live. Code for readability.
It’s hard to put it better than that.
Naming Tropes
There are some common naming conventions which are departures from plain
English, usually in the interest of brevity:
Abbreviations: when words are abbreviated, such as fct for “function”, dfn for “definition”, or ctx for “context.”
It’s All Greek To Me: using simply x, y, etc. as in mathematics.
“Hungarian” notation: any prefix or suffix notation in which a single letter is used to refer to a type or property of the variable, as in sigils like $foo (“scalar foo”), lpszFoo (“long pointer string zero-terminated”), or fooL (list of foo).
Acronyms: using initial letters to refer to concepts: throwVE (“throw validation error”).
Most of these are unnecessary and/or harmful.
It’s All Greek To Me
A word on this convention. Single letter naming comes from mathematical
tradition; it means “there isn’t a good noun for this because it’s
general”. A person of X height. In some cases, this is actually
reasonable. Consider:
identity x = x
The identity function isn’t enhanced by calling its parameter thing; it
literally doesn’t matter what it is, especially in some typed languages.
In fact, one could argue that it’s harmful to try to use a meaningful
English name.
However, anywhere that your variables have some meaning, by using “Greek
convention”, you’re throwing away information that could help someone to
digest your code better. You’re not trying to fit your code on a napkin.
German / Dutch Naming Convention
This is what I consider good naming convention. I discovered this
convention while working with a German colleague, who, I’d always joked,
uses long variable names, and almost never abbreviates anything.
However, the more I read his code, the more I realised I was able to
read the story he was trying to tell, and appreciated it a lot: Using as
many words as necessary to clearly name something. Everything.
I called this “German” naming convention although the same applies to
Dutch, as a reference to the fact that the German language is known for
its compound words, which can become comically long and specific at
times. Some examples include, Betäubungsmittelverschreibungsverordnung
(“regulation requiring a prescription for an anaesthetic”),
Rechtsschutzversicherungsgesellschaften (“legal protection insurance
companies”), and the 1999 German “Word of the Year”:
Rindfleischetikettierungsüberwachungsaufgabenübertragungsgesetz (“beef
labelling regulation and delegation of supervision law”).
Don’t write
fopen
when you can write
openFile
. Write
throwValidationError
and not
throwVE
. Call that name
function
and
not
fct
. That’s German naming convention. Do this and your readers
will appreciate it.
Isomorphic Naming
This convention complements German naming convention completely.
Isomorphic naming is to say that the name of the variable is the same
form of the name of the type. A simple heuristic, in other words: just
use the name of the type.
Here’s a real sample where better naming convention would make this
easier to read without being a cryptographer:
updateColExp
:: QualifiedTable -> RenameField -> ColExp -> IO ColExp
updateColExp qt rf (ColExp fld val) =
ColExp updatedFld <$> updatedVal
...
Look at this naming convention. This may be appropriate if you’re in
some kind of code golfing competition, but I can’t even pronounce these
names. Applying the type-based naming heuristic, we get:
updateColumnExpression
:: QualifiedTable -> RenameField -> ColumnExpression -> IO ColumnExpression
updateColumnExpression qualifiedTable renameField (ColumnExpression field value) =
ColumnExpression updatedField <$> updatedValue
...
Look, it’s readable, plain English! Isn’t this a huge improvement? Any
maintainer reading this code can read each variable and know what it is.
I can even pronounce the names out loud.
Note that this convention only works well when your types are well-named
too, by German naming convention.
Original post can be found at
Chris Done’s
site
Wednesday, April 22, 2026
Getting rid of Git history
Wednesday, April 22, 2026
Hexagon of Doom - The Cost of Over-Abstraction and Indirection
Disclaimer: This article reflects personal experiences and gripes within specific team environments. Your mileage may vary, but the warning against premature or redundant abstraction stands.
When “Clean Code” Becomes “Complicated Code”
Hexagonal Architecture, often called
Ports and Adapters (P&A)
, is lauded for its promise of decoupling the core business logic (the “domain”) from external concerns (databases, UIs, APIs). In theory, it’s a beautiful solution for creating adaptable, testable systems.
However, like many architectural patterns, P&A is
not
a universal good. In practice—especially for
small projects,
small teams
, and particularly when using modern frameworks that provide powerful
Dependency Injection (DI)
and layering capabilities (like
ZIO
or Spring)—it often transforms from an asset into a
liability
, drowning projects in unnecessary indirection and cognitive load.
Let me explain why I think that in many contemporary environments, P&A introduces
net harm
by prioritizing abstract purity over practical simplicity.
The Double-Layering Paradox (Hex + Layers)
The primary goal of P&A is to invert dependencies: the domain defines an interface (a
Port
), and an external module implements it (an
Adapter
). This keeps the domain clean.
When you already utilize a powerful, effect-aware layering system like
ZIO Layers
, this benefit is almost entirely redundant, leading to an architectural redundancy:
Indirection for Indirection’s Sake:
P&A adds interfaces for every dependency. When combined with a framework’s natural
Service
and
Layer
abstractions, you end up with
two or more levels of indirection
to reach a simple implementation.
Every time a developer needs to trace a call, they must traverse the application layer, the ZIO Service/Layer boundary,
and
the Port/Adapter boundary.
This complexity makes
debugging significantly more painful
and slows down the basic task of understanding code flow.
Complexity Debt: Small Teams, Big Overkill
The value of an abstraction must justify its cost. For a tiny microservice that mainly performs CRUD operations or orchestrates two external calls, the architectural overhead of P&A is rarely justified.
The 9/10 Rule:
Most small services
are not complex enough
to warrant this pattern. We often see P&A implemented universally because
“it’s good practice,”
not because the domain demands it.
This is
architecture astronautics
—designing for a future complexity that never materializes.
Onboarding Nightmare:
Team members, especially new joiners, already struggle to grasp complex functional programming paradigms, frameworks, effects and layers, etc. Adding the P&A pattern on top of this introduces a massive
cognitive hurdle.
The result is a team that spends more time studying the
structure
of the code than solving the
business problem
. If a developer needs four hours just to restudy the system structure before making a change, the architecture is failing.
Change is More Difficult:
Making a simple change now often requires modifications across three or four files (Domain Port, Application Service, Infrastructure Adapter, and the Layer wiring). This distributed logic dramatically increases the difficulty and risk associated with even minor feature updates.
This snippet illustrates the cognitive tax of over-abstraction. You might see something like this in over-engineered code:
val result =
for {
order <- OrderService.create(dto)
_ <- NotificationService.notify(order)
} yield order
val run = result.provide(
OrderService.live,
NotificationService.live,
OrderProcessorLive.layer,
PaymentServiceStripeAdapter.layer,
InventoryPortDatabaseAdapter.layer,
NotificationPortEmailAdapter.layer,
HandlebarsMailTemplating.layer,
MailTemplatingAdapter.layer
)
A simple CRUD endpoint now requires juggling four adapters and multiple ports — none of which add business value.
Compare it to this, simpler and easier on everyone:
val result =
for {
order <- OrderService.create(dto)
_ <- NotificationService.notify(order)
} yield order
val run = result.provide(
OrderService.live,
NotificationService.live
)
The Testability Illusion
A core selling point of P&A is enhanced testability. By defining a Port, you can easily mock the Adapter implementation.
However, this benefit is moot. Frameworks already provide an elegant, built-in mechanism for swapping implementations (a.k.a.,
Layer Stubbing
or
Mocking
).
// ZIO: Define a test layer with a mock implementation
val mockPaymentService: ULayer[PaymentServicePort] = ZLayer.succeed {
  new PaymentServicePort {
    def process(p: Payment) = ZIO.unit // Mocked behavior
  }
}
// Now run the test using the 'provide' method with the mock layer
// The Port interface itself wasn't strictly necessary for the mocking!
The P&A abstraction is simply surplus to requirements when robust DI tooling is available.
The Tyranny of Types and Namespaces
P&A, when combined with enthusiastic Domain-Driven Design (DDD) and strict folder structures, can lead to an explosion of files, types, and excessively deep namespaces.
These kinds of verbose, deeply nested imports are telltale signs of
over-architecting
. It suggests a system size and complexity that usually only exists in a large, decades-old monolith, not a small, modern service. The sheer volume of types to track creates
cognitive overhead
that actively slows development.
🧩 Observation:
Below you see how we’ve defined 5+ types and layers just to wire a single function.
Every refactor means updating the Port, Adapter, and the wiring.
Your framework’s native dependency system already is your “Port”.
// --- Domain Port ---
trait PaymentServicePort {
  def process(payment: Payment): Task[Receipt]
}

// --- Domain Model ---
final case class Payment(id: String, amount: BigDecimal)
final case class Receipt(id: String, status: String)

// --- Application Service (uses the Port) ---
final class PaymentProcessor(paymentService: PaymentServicePort) {
  def handle(p: Payment): Task[Receipt] =
    paymentService.process(p)
}

// --- Infrastructure Adapter ---
final class StripePaymentAdapter extends PaymentServicePort {
  override def process(p: Payment): Task[Receipt] =
    ZIO.succeed(Receipt(p.id, "OK - charged via Stripe"))
}

// --- ZIO Layer wiring (adds a second indirection) ---
object PaymentLayers {
  val stripeLayer: ULayer[PaymentServicePort] =
    ZLayer.succeed(new StripePaymentAdapter)

  val processorLayer: URLayer[PaymentServicePort, PaymentProcessor] =
    ZLayer.fromFunction(new PaymentProcessor(_))
}

// --- Usage ---
val app = for {
  processor <- ZIO.service[PaymentProcessor]
  r <- processor.handle(Payment("p1", 42))
  _ <- ZIO.logInfo(r.toString)
} yield ()

val runApp =
  app.provide(
    PaymentLayers.processorLayer,
    PaymentLayers.stripeLayer
  )
A Simpler Prescription for Sanity
Instead of resorting to heavy patterns like P&A, small teams can achieve clean, maintainable, and highly testable code with a simpler “cocktail” of established, less intrusive patterns:
Good Domain-Driven Design (DDD):
Focus on correct
naming
, clear
domain models
, and ubiquitous language. This is where the most valuable abstraction lies.
Simple Structure:
A combination of
MVC
(Model-View-Controller, or a simple
Application Service
layer) for structure, combined with
Command and Query
abstractions for separating read/write concerns, provides excellent clarity without excessive indirection.
Harness Native DI:
Leverage your framework’s native DI system fully. These tools were designed to manage dependencies cleanly; don’t fight them by adding manual indirection.
Know When to Stop
Hexagonal Architecture is a powerful tool, but it’s a tool for scaling complexity. For the vast majority of small to medium-sized projects—especially those built with modern, DI-rich frameworks—it represents a premature optimization that results in
architecture debt
and
developer burnout.
Before adopting a pattern, ask the critical question:
Does this solve a problem I have today, or am I abstracting for a problem I might never have?
Often, the healthiest, most maintainable architecture is the simplest one that works. We must resist the urge to complicate code in the name of purity.
Wednesday, April 22, 2026
Most Technical Problems are People Problems
I have worked at several companies doing software engineering, and I feel like I’ve seen the best and the worst: aging systems containing millions of lines of untested code, built on frameworks past their expiry date, and repulsive code and deployment constructs that stemmed from disagreements, bad communication, and terrible processes. I’ve seen my share of ticking technical time bombs, but one specific case really illustrated the cultural rot for me.
I once worked at a big furniture company famous worldwide, which had an enormous amount of technical debt - millions of lines of code, no unit tests, frameworks that were well over two decades out of date, things running on IBM mainframes for which replacement parts can no longer be found, etc.
On one specific project, we needed to get features to market really fast, but there were no constraints on the tech stack, so it was a golden opportunity to create something beautiful and maintainable.
Rather than communicating with other teams to learn, or seeing what industry standards we live by in this day and age, this team simply copied & pasted a few hundred thousand lines of the legacy code and started hacking things onto it, effectively shoehorning it into something that would work.
For the non-technical reader, this is an enormous problem because now we have two different outdated (legacy) systems, which will start diverging, and apart from the fact that the systems are old, the software practices questionable, and the hardware is not even manufactured anymore, now all features & bug fixes must be solved in two separate codebases that will grow apart over time. When I heard about this, a young & naive version of me thought he could fix the situation….
The article which inspired me to write this one is found at
Helmet Hair
Tech Debt is caused mostly by People Problems
Then there is tech debt.
Tech debt projects are always a hard sell to management, because even if everything goes flawlessly, the code just does roughly what it did before
. This project of mine to refactor the legacy system into something modern was no exception, and the optics weren’t great. I did as many engineers do and
ignored the politics
, put my head down, and got it done.
For the curious minds, yes, I managed to deploy something working at the end, and replaced the old system (eventually), but the project was really not well received by my engineer colleagues (management was indifferent).
I realized I was essentially trying to solve a people problem with a technical solution.
Most of the developers at this company were happy doing the same thing today that they did yesterday… and five years ago
As
Andrew Harmel-Law
points out, code tends to follow the personalities of the people that wrote it. The code was calcified because the developers were also. Personality types who dislike change tend not to design their code with future change in mind.
Most technical problems are really people problems. Think about it. Why does technical debt exist? Because requirements weren’t properly clarified before work began. Because a salesperson promised an unrealistic deadline to a customer. Because a developer chose an outdated technology because it was comfortable. Because management was too reactive and cancelled a project mid-flight. Because someone’s ego wouldn’t let them see a better way of doing things. Because technology is made by people, and people are bound to make mistakes.
The core issue with the project was that admitting the need for refactoring was also to admit that the way the company was building software was broken and that individual skillsets were sorely out of date. Trying to fix one part of many other problematic ones, while other developers continued doing as they always did.
In my career, I’ve met several engineers who openly told me,
“I don’t want to learn anything new”
. I realized that you’ll never clean up tech debt faster than others create it. It is like triage in an emergency room, you must stop the bleeding first, then you can fix whatever is broken.
An Ideal World
: The project also showed me how unattainable the engineer’s ideal world is, one in which engineering problems can be solved in a vacuum - staying out of “politics” and letting the work speak for itself - a world where deadlines don’t exist…and let’s be honest, neither do customers.
This ideal world rarely exists. The vast majority of projects have non-technical stakeholders, and telling them “just trust me; we’re working on it” doesn’t cut it. I realized that
the perception that your team is getting a lot done is just as important as getting a lot done.
Non-technical people do not intuitively understand the level of effort required or the need for tech debt cleanup; it must be communicated effectively by engineering - in both initial estimates & project updates. Unless leadership has an engineering background, the value of the technical debt work likely needs to be quantified and shown as business value.
Perhaps these are the lessons that prep one for more senior positions. In my opinion, anyone above senior engineer level needs to know how to collaborate cross-functionally, regardless of whether they choose a technical or management track. Schools teach Computer Science, not navigating personalities, egos, and personal blindspots.
I have worked with some incredible engineers - the type that have deep technical knowledge on just about any technology you bring up. When I was younger, I wanted to be that engineer - and to some degree I feel like I did become that. For all of their (considerable) strengths, more often than not, those engineers shy away from the interpersonal. The tragedy is that they are incredibly productive ICs, but may fail with bigger initiatives because they are only one person (a single processor core can only go so fast).
Perhaps equally valuable is the
heads up coder
: the person who is deeply technical, but also able to pick their head up & see project risks coming (technical & otherwise) and steer the team around them.
The journey from technical problem solver to effective engineering leader often involves a sobering realization: the code is the culture made manifest.
The catastrophic technical debt I’ve often seen, is rarely about the lack of technical skill, it is most often a symptom of deeper organizational failures. This includes fear of change, short-term managerial thinking, and a profound communication gap between the builders and the stakeholders.
To truly tackle technical debt, we must evolve beyond the “Heads-Down” coder and embrace the Heads-Up Coder. This senior perspective understands that refactoring and modernization aren’t technical projects, but essential parts of day-to-day work as an engineer.
You cannot clean up technical debt without first addressing the people, process, and politics that created it.
Focusing solely on the code is like sweeping the kitchen floor while the roof is leaking.
Wednesday, April 22, 2026
Markdown Cheat-sheet for Beginners
Table of Contents
Common Formatting
Headers
Emphasis (Bold and Italic)
Lists
Unordered Lists (bullet points)
Ordered Lists (numbered)
Links and Images
Links
Images
Other Useful Elements
Blockquotes
Horizontal Rule
Code Blocks
On editors
Markdown is a simple, lightweight markup language used for formatting plain text. It’s designed to be easy to read and write, and it can be converted into HTML, PDF, LaTeX and many other formats.
It’s a very popular format for writing documentation, blog posts, and notes. It is extremely future-proof since you write documents in plain-text, with some sprinkles of markup.
The key idea is to use simple punctuation to add formatting.
You don’t need to memorize all of these. Just remember the basics and refer back to this guide as needed. The best way to learn Markdown is to start writing!
Common Formatting
Headers
Headers are used for titles and subtitles. They are created with the hash symbol (
#
). The number of hashes determines the size of the header.
# This is a large title (H1)
## This is a major heading (H2)
### This is a sub-heading (H3)
#### This is a smaller heading (H4)
Emphasis (Bold and Italic)
You can make text bold or italic to add emphasis.
This is **bold text** using two asterisks.
This is also __bold text__ using two underscores.
This is *italic text* using a single asterisk.
This is also _italic text_ using a single underscore.
Lists
Lists are great for organizing information.
Unordered Lists (bullet points)
Use a hyphen (
-
) followed by a space.
- First item in the list
- Second item in the list
- A nested item
- Third item
Ordered Lists (numbered)
Use a number followed by a period and a space.
1. First step
2. Second step
3. Third step
1. A nested step
4. Final step
Links and Images
Links
To link to another webpage, use the format:
[text to display](URL)
Visit the official [Markdown website](https://daringfireball.net/projects/markdown/).
Images
Images are similar to links, but they start with an exclamation mark (
!
).
![Alt text for the image]()
Other Useful Elements
Blockquotes
To quote text from another source, use the `>` symbol.
> "The simplest way to write in Markdown is to just start typing."
> This can span multiple lines.
Horizontal Rule
A horizontal rule is a line that separates content. Use three or more hyphens (
------
).
Code Blocks
To show code or text that should not be formatted, use three backticks (
```
) before and after the text.
```
This text is inside a code block.
It will not be formatted with bold or italic.
```
On editors
Technical users will be at ease with Markdown, and can edit it and preview it from their favorite editor (think Emacs, Vim, VSCode, IntelliJ, etc.)
For non-technical users, the best Markdown editors are those that prioritize a clean, simple, and intuitive user interface.
The “What You See Is What You Get” (WYSIWYG) Approach: I recommend MarkText (desktop app), a free and open-source alternative to Typora, offering a real-time preview experience. It’s known for its sleek design and focus on a clean, elegant interface.
The “Two-Pane” or “Live Preview” Approach: Ghostwriter (desktop app), Dillinger (web based), StackEdit (web based).
The “Note-Taking” Approach: Obsidian (desktop app), Joplin (desktop app)
Wednesday, April 22, 2026
Why SQL SELECT * is often a bad idea
There are some anti-patterns one should avoid when writing SQL queries. Sometimes these may seem like a shortcut, but in reality this can lead to bugs, problems, and brittle applications.
It’s almost always better to use the explicit column list in the SELECT query than a * (star) wildcard. It not only improves the performance but also makes your code more explicit. It also helps you create maintainable code, which will not break when you add/remove columns from your table, especially if you have views that refer to the original table.
SELECT *
Doing a
SELECT *
may seem like a time-saver, but it’s actually setting you up for problems in the long run, especially when the database schema changes.
Breaks Views While Adding New Columns to a Table
When you use SELECT * in views, you can create subtle bugs when a new column is added or an old one is removed from the table. Why? Because your view might break, or start returning incorrect results.
Dependency on Order of Columns on ResultSet
When you use the SELECT * query in your application and depend on the order of columns (which you should not), the ordering of the result set will change if you add a new column or change the order of columns.
Conflicts in a JOIN Query
When you use SELECT * in a JOIN query, you can introduce complications when multiple tables have columns with the same name, e.g. status, active, name, etc.
More Application Memory
Due to this increase in data, your application may require more memory just to hold unnecessary data that it will not be using.
Increased Network Traffic
SELECT * obviously returns more data than required to the client, which, in turn, will use more network bandwidth.
Unnecessary I/O (Input Output)
By using SELECT *, you can be returning unnecessary data that will just be ignored, but fetching that data is not free of cost.
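The pitfalls above can be made concrete with a small experiment. This is a minimal sketch using Python’s built-in sqlite3 module; the orders table and its columns are hypothetical, purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 9.99)")

# Fragile: positional access through SELECT * silently depends on
# the current column order; adding a column before `total` breaks it.
row = conn.execute("SELECT * FROM orders").fetchone()
total_by_position = row[1]

# Robust: name exactly the columns you need; schema changes elsewhere
# in the table cannot shift this result.
(total_by_name,) = conn.execute("SELECT total FROM orders").fetchone()

print(total_by_position, total_by_name)  # 9.99 9.99
```

Both reads agree today, but only the explicit version keeps agreeing after an ALTER TABLE that inserts or reorders columns.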
COUNT(*)
One should default to using
COUNT
clauses with column names, such as
COUNT(id)
, since this will count values which are non-NULL.
If counting rows regardless of NULLs is explicitly wanted, then one can opt for
COUNT(1)
or
COUNT(*)
There is a negligible performance difference between 1 and *, at least in PostgreSQL, so use either at your discretion.
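To see the NULL-skipping behaviour in action, here is a minimal sketch using Python’s built-in sqlite3 module; the users table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(1, "a@example.com"), (2, None), (None, "c@example.com")],
)

# COUNT(*) counts every row; COUNT(id) skips rows where id IS NULL.
count_star = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
count_id = conn.execute("SELECT COUNT(id) FROM users").fetchone()[0]
print(count_star, count_id)  # 3 2
```

The same semantics hold in PostgreSQL and the other mainstream SQL databases, since COUNT(expression) is defined to ignore NULLs.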
Wednesday, April 22, 2026
SSR wins over Javascript
In this day and age, server side rendering proves it is stable and more
effective than the JavaScript bloat we are growing used to. Any simple
page, with perhaps at most 3 KB of actual text content, will download
over 2 MB of JavaScript in order to simply function.
Besides that, SPA-plus-API setups, where the frontend is decoupled
from the backend, still do not justify using
JavaScript, or even worse, any of its frameworks. That paradigm can
easily be, in most cases, fulfilled by server side rendering in the
backend language of your choice, which will spit out a digested HTML for
your clients.
Specifically, if you keep the same programming language, in my case Go,
across your applications, you can share a lot of logic, data models, and
you get a consistent developer experience, and an easier to maintain
system.
Not to mention the fact that the JavaScript ecosystem goes with the
wind, is the most unstable place I have ever seen, and most of the times
I need to look into some source code written in that language, of some
external package dependency, I start questioning the author’s sanity,
and mine, and also my career choices.
Going further, with the bloated state management tools, and broken
concurrency model of JavaScript, as well as the insane “features” of the
language, and then seeing TypeScript, a super-set of the language, which
also brings its own horrors and pre-compilation, I have no doubt that
the show must come to a stop, and for my professional needs, as a
software engineer at an e-commerce platform, I have decided to throw
away all the SPA code (3 attempts) that has previously been in
production, and go in a totally opposite direction.
This direction may seem unorthodox, but it has many reasons that make it
a success, and a joy to work with, and extend.
To give some background, the previous SPA I helped create and worked
with, that consumed my API’s data, was created using Vue.js 2, SCSS with
scoped styles (horror!), VueX, and TypeScript.
This combination quickly led to spaghetti code of terrible quality,
and lots of it. To make matters worse, there was a transition of API v1
to v2, where some data models changed, and the developer decided to
refuse that change, and write his own mappers, to convert those v2 data
models to v1, and vice-versa, in order to comply with my API, which he
was interfacing with.
As you can imagine, this has led to an unmaintainable project that no
one dares to touch.
So I decided to follow my beloved KISS principle, and to go ahead and
get started on writing a SOLID coded Go application, that will act as
the API consumer, and spit out processed HTML for the client’s browser.
I also followed the principle of “graceful degradation”, which in basic
terms, means that the website should function perfectly without
JavaScript, and it should only be used to enhance the user experience,
like in heavy animations, sliders, etc. By no means should JavaScript be
the center of the application, due to all the negative aspects it
brings, and due to the superior language that Go is.
I have met with much success with this application, and can really
say confidently that my future endeavours will always go the same route,
unless really necessary to code a more client-side application, in which
case I will most likely turn to GopherJS or such, or perhaps some Go to
WebAssembly, so I don’t need to write any more JavaScript ever.
I want to list some of the advantages that I have found, and am sure
anyone will find when doing things more in an “old but gold” old-school
approach of building websites, server side rendered:
In my case, one language for the whole platform 😀 Go !
Native browser features, like remembering how far in the list of the
previous page you were when you go back.
SEO is greatly improved, and you don’t even need to think about it as
much. Search engines will easily crawl your site, and will read
correctly formed HTML.
Security improvements, due to not exposing application code, or API
calls in the client’s browser.
Simpler front-end applications, no mapping, no state management, no
problems. API data directly to HTML.
Go is a well designed, compiled, statically typed and concurrent
language. Why not use it for client-facing applications?
The ecosystem around Go is much more stable and reliable than
JavaScript’s.
The standard library of Go is great, and can power most of your needs
without additional third-party dependencies.
High-concurrency and asynchronous process capabilities in the
client-facing applications.
Accessibility is greatly improved due to the HTML being delivered as
already rendered, and we can care more about writing semantically
correct HTML, instead of caring so much about the newest JS framework.
Supporting security conscious people and organizations: let’s face it
there’s bad people doing heinous things with JavaScript, so the
easiest solution is to get rid of JavaScript.
Mobile devices, web-views, and embedded devices are better supported,
without inconsistencies.
Search engine spiders only follow real links - JavaScript confuses
them.
Cross site scripting attacks risk is greatly reduced.
Hacking/Defacement through DOM manipulation is no longer possible.
Severe speed improvements, both on first, and on subsequent loads.
I really have no intentions to be part of the spaghetti mess that
JavaScript easily becomes. I want a consistent and predictable language,
that compiles to binary code and can easily beat any out there.
As for the SPA fans out there, I know it has its use cases, but let’s
face it, JS is doing so much more than it should since React and buddies
came along, this has gotten way out of hand and it is time to stop.
Thanks for your attention, and I look forward to seeing comments.
Wednesday, April 22, 2026
My Tech Radar
Tech changes fast. But it’s becoming an ever-more critical component of success.
A Tech Radar is our trusty compass and map, charting the ever-shifting landscape of technology. It’s not just some dusty document; it’s a living, breathing artifact that reflects our collective wisdom, our triumphs, and yes, our hard-won lessons.
At its core, a Tech Radar is a visual tool that helps us categorize and assess various technologies – be they programming languages, development tools, ingenious techniques, or even the very platforms we build upon. It typically organizes these “blips” into different quadrants, like “Languages & Frameworks” or “Tools,” and then, crucially, places them into “rings” that signify their current status. Think of these rings as a spectrum, guiding our decisions:
Adopt
These are the technologies we’ve embraced, the ones that have proven their mettle in the trenches. They’re reliable, we’re confident in them, and they’re our go-to for new projects. When something lands here, it means it’s been thoroughly vetted and consistently delivers.
Trial
Here, you’ll find the promising newcomers or existing technologies we’re actively experimenting with. We’ve seen some success, they’re showing real potential, and we’re ready to put them through their paces on a project or two to truly understand their strengths and limitations.
Assess
This ring is for the intriguing ideas, the nascent technologies that have piqued our interest. They might be revolutionary, or they might just be a flash in the pan. We’re keeping an eye on them, doing our research, and perhaps even building a small prototype to see if they hold water.
Deprecate
This is where we park the technologies we’re actively discouraging for new work. Maybe they’ve been superseded by something better, maybe they’re too high-maintenance, or perhaps they simply don’t align with our long-term vision. It’s about being pragmatic and cutting our losses, preventing new projects from inheriting technical debt.
✅ Adopt
🐂 Languages & Frameworks
Guile Scheme
GNU Artanis
Emacs Lisp
Common Lisp
Haskell
Typescript
Javascript (vanilla)
Rust
GTK4 Libadwaita
🧰️ Tools
GNU Guix
SQLite
PostgreSQL
Emacs
Org mode (Emacs)
Servant (Haskell), API as a type
⚙️ Techniques
Plain SQL queries
Guix Manifests
Guix dev shells
Nix Flakes
Nix dev shells
Woodpecker CI/CD
🌐 Platforms
Custom low-power green VPS
❌ Deprecate
🐂 Languages & Frameworks
Scala
ZIO (Scala)
Svelte
🧰️ Tools
MySQL
🌐 Platforms
Amazon AWS
Wednesday, April 22, 2026
About
Planet Scheme
collects blog posts from individuals and projects around the Scheme community.
Feed
Maintenance
Planet Scheme is brought to you by the
Scheme.org
community. It was previously curated by Jens Axel Søgaard.
To send feedback or to have your blog featured, please write to the
schemeorg
mailing list
Source code
Blogs
Adrien Ramos
(feed)
Alaric Snell-Pym
(feed)
Andrew Whaley
(feed)
Andy Wingo
(feed)
Arthur A. Gleckler
(feed)
Arto Bendiken
(feed)
Ben Simon
(feed)
Chicken Gazette
(feed)
Christian Kellermann
(feed)
crumbles.blog
(feed)
Danny Yoo
(feed)
Dave Herman
(feed)
Dominique Boucher
(feed)
Doug Williams
(feed)
Gauche Devlog
(feed)
Grant Rettke
(feed)
Greg Hendershott
(feed)
Guile News
(feed)
Gwen Weinholt
(feed)
Ian O
(feed)
Idiomdrottning
(feed)
Jacob Matthews
(feed)
Jens Axel Søgaard
(feed)
Joe Marshall
(feed)
jointhefreeworld
(feed)
Joshua Herman
(feed)
Jérémy Korwin-Zmijowski
(feed)
LIPS Scheme Blog
(feed)
Llewellyn Pritchard
(feed)
M J Ray
(feed)
Marc Nieper-Wikirchen
(feed)
Mark Damon Hughes
(feed)
Per Bothner
(feed)
Peter Bex
(feed)
Peter Schombert
(feed)
Programming Praxis
(feed)
Retropikzel's blog
(feed)
Ryan Culpepper
(feed)
Sam Tobin-Hochstadt
(feed)
Scheme Requests for Implementation
(feed)
spritely.institute
(feed)
The Racket Blog
(feed)
Tim van der Linden
(feed)
Vasilij Schneidermann
(feed)
Vladimir Nikishkin
(feed)
Will Farr
(feed)
Yinso Chen
(feed)
Yoshikatsu Fujita
(feed)
Orrery
Planet Clojure
Planet Lisp
Racket Stories
Friday, April 24, 2026