Thursday 17 October 2024

GitHub Actions and Tagging Images

Sometimes an image that is built and put in a repository needs to have more than one tag. The most common case is tagging a fresh build as latest. Or current. By default, GitHub Actions workflows tag the image with the short SHA of the commit that triggered the workflow, so the built image effectively has its own ID with which it can be addressed in later commands:
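Something like the following sketch, where the image name is purely illustrative and the short SHA is derived from the commit that triggered the run:

    # inside a workflow run step: derive the short commit SHA
    # and use it as the image tag (image name is hypothetical)
    SHORT_SHA="${GITHUB_SHA::7}"
    docker build -t ghcr.io/acme/myapp:${SHORT_SHA} .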

This is good ... in GitHub's own context. But over its lifecycle the image may exist in several other contexts (repository, deployment environment, automation scripts, etc.).

There are a few ways (command syntaxes) to add the extra tags. Probably the easiest is to just list the image:tag pairs one after the other:
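With the docker tag command that means one invocation per extra tag; a sketch with the same hypothetical names:

    # each invocation adds one more tag pointing at the same image
    docker tag ghcr.io/acme/myapp:3f2e1ab ghcr.io/acme/myapp:latest
    docker tag ghcr.io/acme/myapp:3f2e1ab ghcr.io/acme/myapp:current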

Yet, there is an even simpler way: don't issue the tag command at all, but define the tags at build time:
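A sketch of the build-time variant (names still hypothetical):

    # repeated -t flags give the freshly built image all its tags at once
    docker build \
      -t ghcr.io/acme/myapp:3f2e1ab \
      -t ghcr.io/acme/myapp:latest \
      -t ghcr.io/acme/myapp:current .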

And this works fine. 

Next, the tags have to be pushed to the desired repository. I verified (for myself) that pushing the same image under different tags doesn't create multiple copies of it; it only adds the new tags to the same image (a silly doubt, perhaps, but the question comes up at some point).

Another question was whether a push command has to be issued for every single tag, or whether this can be done in one go. You know - to reduce traffic (some images are over 1 GB) ...

I found the correct syntax for pushing multiple tags with a single command:
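A sketch of it, with the image name still hypothetical; note that no tag is given, since all local tags of the image are pushed:

    # --all-tags pushes every local tag of the image in one command
    docker push --all-tags ghcr.io/acme/myapp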

And it worked just fine in a local experiment.


The key (obviously) was to use the --all-tags option of the docker push command. It also has the short form -a.

Unfortunately this didn't work in the GitHub action, since the Docker version in the runner's environment didn't recognize the option in question:

This, of course, is a bit unpleasant but not a showstopper. The solution is to simply issue the command once per tag. Which is another reason to strive to keep images small when possible.
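A sketch of the fallback (names hypothetical). Worth noting: registries typically deduplicate layers, so the repeated pushes mostly re-send manifests rather than the full image:

    # one push per tag; layers already in the registry are skipped
    for TAG in 3f2e1ab latest current; do
      docker push "ghcr.io/acme/myapp:${TAG}"
    done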

Maybe I should try the same setup with Podman sometime.

 

Wednesday 9 October 2024

Passing Credentials to GCP Cloud Run from a GitHub Action

Recently I had to automate a GCP Cloud Run deployment on commits to a GitHub repository. In cases like that the default solution is to define a GitHub action. Well, not exactly.

 

The Cloud-Native Way

In the case of Google Cloud (I can't speak in detail about the others), Cloud Build is what sits closest to the execution environment. Using it means relying on GCP's native containerization (Cloud Build's Docker runner) and on direct tagging and uploading to an Artifact Registry repository (the successor of Container Registry, which was deprecated and is now retired), and eventually it is the shortest path to Cloud Run itself.

Yet, the "cloud native" way is not always the right or the proper way.

In my case the Cloud Run application was a backend whose source resides in a monorepo, along with the source of the accompanying frontend application. After some research into my context I settled on the premise that Cloud Build triggers can't distinguish whether a commit concerns only one of the applications, and would build and deploy everything possible in the repository tree (frontend, backend, and whatever other Dockerized applications the repo scan finds). Not always convenient or necessary.

GitHub workflows, on the other hand, can see not only branches but directory tree paths too. This was enough to tip the scales towards this solution: build the application at its source of truth and call the cloud operations from there to finish the job with the deployment (through a custom gcloud run deploy command).
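A sketch of the trigger part of such a workflow (branch and path names are illustrative):

    # .github/workflows/deploy-backend.yaml (file name hypothetical)
    on:
      push:
        branches: [ main ]
        paths:
          - 'backend/**'   # run only when the backend sources change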

 

Implementing the Action

For the basic case of building the application, authenticating with the cloud, uploading the image, and deploying it with the bare minimum of the said command, there are more than enough tutorials. There are even readily available templates for the task in GitHub's own workflows library.

There are also many video tutorials with different levels of complexity. I especially liked this one:


because of its graceful pace and its to-the-point, grounded explanations - nothing redundant. Kudos.

Yet, my case was just a bit more complex than the minimal deployments. And it's nothing special; I'd even say almost everyone's Cloud Run use case includes it - environment variables and secrets.

The slightly more specific sub-case arises when one needs to pass service account credentials to the container being deployed. The above video shows an example of defining a secret that contains such credentials, but there they serve a different purpose: they authenticate the GitHub action (basically a runner, an Ubuntu environment by default) with the Google Cloud infrastructure for invoking the image upload and eventually the deployment. I was already past that step; my problem was passing the contents of the said secret to the container being deployed.

I won't go into the details of why I didn't use Workload Identity Federation or resort to the Base64-encoding method; let's just say it wasn't entirely my decision. I had to deal with simply passing the secret.

 

Variables, Secrets and Execution Environments

Now, the variables are simple. A quick consultation of the official documentation and the command reference makes it clear that the --set-env-vars flag should be used, with the variables given as KEY=VALUE pairs:
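A sketch with illustrative service, image, and variable names, using the documented comma-separated form of the flag:

    gcloud run deploy my-service \
      --image europe-west1-docker.pkg.dev/my-project/my-repo/backend:latest \
      --region europe-west1 \
      --set-env-vars "APP_ENV=production,LOG_LEVEL=info"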

As for the secrets - they're basically the same, with the main difference that one is no longer able to see their values anywhere on GitHub after creation.

When it comes to putting a Service Account in a secret there's a certain gotcha that must be complied with. Let's keep the following in mind:

  • In its essence a service account key is a string in JSON format which, if kept as a local file, is a multiline JSON document.
  • GitHub Secrets can take multiline values but supposedly preserve them as a single-line string. How true that is turns out to be arguable.

After long hours of frustration, circular chatbot hallucinations, short-lived moments of epiphany, and the mandatory existential dread, I decided I'd fight it out and make it actually work with the flags-file method.

In short, the method adds a step (prior to executing the deployment command) that defines a YAML file on the fly and fills it with the environment variables, assigning them the values of the secrets. The syntax looks like this:
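A sketch of such a step, with hypothetical secret, variable, and file names:

    # generate the flags file on the fly from the repository secrets
    - name: Generate env vars file
      run: |
        cat > env_vars.yaml <<EOF
        APP_ENV: "production"
        SA_CREDENTIALS: '${{ secrets.SA_CREDENTIALS }}'
        EOF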

Once created, the file is passed to the command with the --env-vars-file flag.
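Something like this sketch (service and image names are, again, illustrative):

    gcloud run deploy my-service \
      --image europe-west1-docker.pkg.dev/my-project/my-repo/backend:latest \
      --region europe-west1 \
      --env-vars-file env_vars.yaml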

It looks easy, and figuring out why it didn't work out of the box cost me the longest time. It has to do with how the secrets keep multiline values.

The telltale sign of the problem is easily visible (if you know what to look for) here:

What it shows is that the GitHub secret actually preserves the multiline JSON format if it is assigned that way on creation/update. From there on, it is the gcloud run deploy command that can't work with it, because it can't recognize the result as proper YAML syntax.

So what could I do? Common sense suggested that, when deploying from the Cloud Console, service account credentials passed as variables or secrets never pose this problem.

So I refreshed all of the secrets' values with the ones taken directly from the Console (from the Edit screen; the fact that the values are single lines can also be seen in the auto-generated YAML of the deployment).

The result in the runner's log changed to:

At this point all of the secrets, being single-line strings, are written as such into the generated flags file in proper YAML syntax - hence the successfully executed deployment.


Takeaways

The overall conclusion has the following dimensions:

  • Keep it simple and, if possible, keep the whole GitHub workflow visible on one page. The action and flags-file solution meets that requirement, while the Base64-encoding and Workload Identity Federation methods, although robust and proven, might be overkill in smaller use cases in terms of configuration and coding.
  • Always double-check the values of your environment variables and secrets, AND, when using the flags file, make sure the values are strictly one-liners, no matter what they hold (see the sketch after this list).
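For the one-liner part, a sketch of how a multiline service account key could be compacted before being stored as a secret; the file and secret names are hypothetical, and gh is GitHub's CLI:

    # jq -c emits the JSON document as a single compact line
    jq -c . service-account-key.json | gh secret set SA_CREDENTIALS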

And probably, as a rule of thumb, check with the Cloud Console as much as needed that the two points of deployment stay aligned, because sometimes drift happens and may cause other hard-to-debug problems.

Thursday 11 April 2024

The Change


 

 This is something like an announcement ... 


Today I renamed the blog (from nbide to polystack) because the industry changed in a way that:

  • One language is already not enough.
  • One IDE is rarely enough.
  • One cloud provider is usually not entirely enough.  And often it might not even be the right solution.


At first I wanted to keep it simple and repurpose the blog to cover the programming languages I'm currently interested in, but it's never that simple. The languages in question right now are:

  • Rust
  • Go
  • C (only C, and NOT C++ - that thing is not healthy)
  • Java (still)
  • Kotlin
  • Haskell


And since no language exists without its realm, purpose, and use cases, some possible habitats and the various tooling may get involved in the narrative. That is for context and perspective. So things mentioned might include:

  • Cloud Providers (GCP, and Azure mostly)
  • Infrastructure (IaC) tools (Terraform, and Ansible mostly)
  • IDEs and Editors (NetBeans is always in my heart but I recognize that with time it gets farther and farther from the best ones)
  • GitOps (GitLab mostly)
  • Linux - as a development environment, and distros suitable for containerization

... the list is open-ended.


Also this might not be the only blog to cover these areas, and cross-posting might happen from time to time.

So let's get started ...



Monday 25 June 2018

NetBeans 9 - more than the usual upgrades

Follow this link.

If you're still not aware of the current state of the IDE's governance, the first thing you see might surprise you - Oracle gave the IDE to the Apache Software Foundation. So from version 9.0 onward we'll have Apache NetBeans.

From the rest of the page you can see some of the interesting new features that support the latest additions in the 9th and 10th versions of the JDK. It's mostly some syntactic sugar (the var type declaration) and hints. The modular system is supported with GUI tools, as is the new Java Shell ... But don't take my word for it, just go and see for yourselves.

Currently the RC1 is available, so I hope we'll have the final version within a month.

Wednesday 8 February 2012

JBoss AS 7 here?

In a previous post I promised to review a book about JBoss AS 7. At that time I hadn't yet dug into it, and didn't know that our latest NetBeans IDE (7.1) still does not support the seventh generation of JBoss servers. You may see some informative ranting going on here. And to be clear - hacking the configurations of the 5th or 6th versions to simulate support wouldn't help. JBoss 7 is completely redesigned to be modular. So the default support (unsurprisingly) comes from the rival - Eclipse ;)
While we're waiting you may actually read the book, because it is a good one. If you'd like to read the review first, it is here on my other blog.

Tuesday 17 January 2012

Be careful when using NetBeans on Dropbox

My recent experimentation with the cloud service led me to a revelation that says: "Well, you should've known better!" At least I should have been more careful setting up my system, especially given that I was aware of the nature of things but still let myself be sloppy.

It cost me some precious time and a puzzled, achy head, but in the end the situation became crystal clear to me.

I was trying to run NetBeans with a portable JDK - both tools in the Dropbox folder. It constantly refused to run, saying it couldn't find a proper JDK.
At some point I decided to go with absolute paths in the netbeans.conf file. I did it very simply by defining the variable $DROPBOX_BASE in all my environments (with the actual corresponding value for every system, of course). No more misty relative paths.
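A sketch of the relevant netbeans.conf line under that scheme (the JDK path is illustrative):

    # netbeans.conf: point the IDE at the portable JDK through
    # the environment variable defined on every system
    netbeans_jdkhome="$DROPBOX_BASE/jdk1.7.0"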
Still I had no success running the IDE. Hmm!

These days I'm experimenting with the new JBoss AS 7. Taking the same approach there, the result that followed was the obvious one - JBoss also complained that it cannot find a proper JDK unless I show it its location on the system with the --jdkhome command-line option. Getting the same result even after using the option in question puzzled me the most. But it didn't take long until I realized that something must be wrong with some executable bits. Again some time and googling passed until I remembered that on the current (Linux) system I had placed the Dropbox folder on an NTFS partition, which was never designed with the idea of executable permission bits in mind.
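For what it's worth, ntfs-3g can be told to grant execute permission through its mount masks; a sketch of an /etc/fstab entry, with the device and mount point purely illustrative:

    # fmask=0022 yields rwxr-xr-x on files, so binaries can execute
    /dev/sda3  /data  ntfs-3g  uid=1000,gid=1000,fmask=0022,dmask=0022  0  0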

So, my conclusion: when using Dropbox on Linux, never put its folder on partitions designed for Windows OSes. You might still sync properly, but successfully executing binaries from it is very unlikely.
Touché!

Tuesday 10 January 2012

NetBeans 7.1 is here and ...

The official announcement came almost a week ago, and for me it almost coincided with another event: the guys from Packt Publishing gave me the chance to review another one of their books.

First about the IDE:
Maybe the release notes are more informative than the official announcement. At least they contain all the links you might need, along with the latest top features presented visually. Of course, there is also this video presentation, which is worth watching.
It seems the focus this time is on JavaFX. Its 2.0 version is covered in a way that makes its applications' configuration and deployment easy and complete - it seems you won't miss a feature here.
Of the other features presented in the video, the most attractive seemed to be the visual debugger, the batch and selective rectangular refactoring, and the enhanced Maven integration.

Once I have more time to build some stuff with it, I'll report back. Which leads me to the second point - the book:

Going through this book is a nice opportunity to check out the new architecture of the revamped JBoss Application Server. I guess (and hope) it will offer some in-depth tweaks. Scrolling through its TOC whetted my appetite, with topics like clustering, security, and cloud leveraging. I'm eager to start it, so I'll say no more in this post.