Show versions used on the manager

· One min read
Christian Berendt
Founder of OSISM

The osism get versions manager command can be used to display the versions of the individual modules used by OSISM. The OSISM version in use is listed under OSISM version. If available, the release used by the corresponding module is listed under Module release.

$ osism get versions manager
+---------------+---------------+----------------+
| Module        | OSISM version | Module release |
|---------------+---------------+----------------|
| osism-ansible | 7.0.0b        |                |
| ceph-ansible  | 7.0.0b        | quincy         |
| kolla-ansible | 7.0.0b        | 2023.2         |
+---------------+---------------+----------------+

Switch to OpenTofu

· One min read
Christian Berendt
Founder of OSISM

In blog posts with the tag News, we will now write about news that is not directly related to a new feature or to one of our managed infrastructure environments.

Marc Schöchlin, Site Reliability Engineer at the Sovereign Cloud Stack project, successfully completed the migration of osism/testbed from Terraform to OpenTofu yesterday. The migration went smoothly and basically only the Terraform binary had to be replaced with the new OpenTofu binary.

More details can be found in the SCS blog in the article Opensource - Testbed adopts OpenTofu, written by Marc.

Use of the container shell

· One min read
Christian Berendt
Founder of OSISM

With the OSISM CLI it is possible to enter a shell in a container running on a node.

This is useful, for example, to view running instances that are managed via Libvirt.

In this example, the command virsh list is executed in the nova_libvirt container running on the com1069 node.

$ osism console com1069/nova_libvirt
(nova-libvirt)[root@com1069 /]# virsh list
 Id   Name                State
------------------------------------
 190  instance-001b2492   running

(nova-libvirt)[root@com1069 /]#
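
Behind the scenes, the services deployed by OSISM run as Docker containers on the nodes, so the session above corresponds roughly to the following steps done by hand (a sketch, assuming direct SSH access to com1069; osism console saves you these steps):

$ ssh com1069
$ docker exec -it nova_libvirt virsh list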

Use of the ClusterShell

· One min read
Christian Berendt
Founder of OSISM

ClusterShell is an event-driven, open-source Python framework designed to run local or distant commands in parallel on server farms or on large Linux clusters. We came across it by chance during a large HPC project, learned to use it together with the team there, and have come to like it.

ClusterShell can be used in a rudimentary way via the console command of the OSISM CLI. The Ansible inventory groups are available as node groups. These are automatically generated and updated by the inventory reconciler.

In this example, the command uname -v is executed on all nodes in the node group housing1047.

$ osism console --type clush housing1047
Enter 'quit' to leave this interactive mode
Working with nodes: com[1047-1050]
clush> uname -v
com1049: #38~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 2 18:01:13 UTC 2
com1050: #38~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 2 18:01:13 UTC 2
com1047: #38~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 2 18:01:13 UTC 2
com1048: #38~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 2 18:01:13 UTC 2
clush>
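
For comparison, a direct invocation of the clush tool itself might look like this (a sketch, assuming ClusterShell is installed and the nodes are reachable via SSH):

$ clush -w com[1047-1050] uname -v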

Restart of a container on a specific node

· 2 min read
Christian Berendt
Founder of OSISM

We not only develop OSISM, we also use it to operate our own cloud infrastructure, REGIO.cloud. When operating REGIO.cloud, we often come across tasks in our day-to-day business that we can already solve with the help of OSISM. If not, we open an issue for the task and implement the feature so that we can solve it directly with OSISM in the future.

In blog posts with the tag Machine Room, we will now write about such tasks and how we were able to solve them with OSISM.

Yesterday we had a hiccup in our RabbitMQ cluster. This caused problems with attaching volumes to instances during the night. After analyzing the problem, we decided that only a restart of the nova_compute containers, which provide the Nova Compute service, would solve the problem. With the play manage-container it is possible to run an action, e.g. restart, on a specific container.

Since our compute nodes are organised in housings, we have also mapped these housings as groups in the inventory in inventory/10-custom and can now use those groups to restart all Nova Compute services one by one.
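
Such a housing group might be defined like this in inventory/10-custom (an illustrative sketch in Ansible's INI inventory format; the actual file layout may differ):

[housing1047]
com1047
com1048
com1049
com1050

With this group in place, the restart can be limited to it with -l: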

$ osism apply manage-container \
    -e container_action=restart \
    -e container_name=nova_compute \
    -l housing1047
2024-01-12 08:28:55 | INFO | Task was prepared for execution. It takes a moment until the task has been started and output is visible here.

PLAY [Manage container] ********************************************************

TASK [Manage container] ********************************************************
changed: [com1047]

PLAY [Manage container] ********************************************************

TASK [Manage container] ********************************************************
changed: [com1048]

PLAY [Manage container] ********************************************************

TASK [Manage container] ********************************************************
changed: [com1049]

PLAY [Manage container] ********************************************************

TASK [Manage container] ********************************************************
changed: [com1050]

PLAY RECAP *********************************************************************
com1047                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
com1048                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
com1049                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
com1050                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Kubernetes Service Deployments

· 2 min read
Christian Berendt
Founder of OSISM

New big and small features are constantly being added to OSISM. This makes using OSISM a little better for operators of the Sovereign Cloud Stack every day.

Since we currently only do a major release every six months, in which we write about these big and small features in the release notes, there will be blog posts of this kind from now on. In blog posts with the tag Sneak Peek, we will write about new features before the next major release.

This blog entry is specifically about the possibility of deploying services on the recently integrated Kubernetes cluster.

The deployment of services on the integrated Kubernetes cluster will be possible in the future via the kubernetes environment. A first simple example for deploying Nginx is already available in the osism/testbed repository. The new environment is used as usual with osism apply.

$ osism apply -e kubernetes nginx

$ kubectl get pods -n nginx
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f7f5c78c5-crhnf   1/1     Running   0          2m28s
nginx-f7f5c78c5-tjf6r   1/1     Running   0          2m28s
nginx-f7f5c78c5-qbqjz   1/1     Running   0          2m28s

$ kubectl get services -n nginx
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
nginx   LoadBalancer   10.43.84.203   192.168.16.100   80:30612/TCP   2m46s

$ curl -I http://192.168.16.100
HTTP/1.1 200 OK
Server: nginx/1.25.3
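
For reference, a deployment like the one above can be described with a manifest along these lines (a minimal sketch with three replicas and a LoadBalancer service to match the output above; the actual example in osism/testbed may differ):

# nginx.yaml -- illustrative sketch only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx
spec:
  replicas: 3                  # three pods, as in the output above
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25.3  # version reported by the curl check
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: nginx
spec:
  type: LoadBalancer           # provides the external IP used above
  selector:
    app: nginx
  ports:
    - port: 80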