-
Hi @ymartin59 👋🏼
I'm not sure I fully understand the complexity you're facing. Could you provide a more detailed example? I might be able to suggest a solution. One idea that comes to mind is using tags to track and search for your templates. For example, during the build process, you could tag the new template as …. Would that approach work for you?
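The tag lookup suggested above could look roughly like this in Python. This is a minimal sketch: the `resources` list mimics entries from Proxmox's `/cluster/resources` API listing, and the exact field names and the semicolon-separated `tags` format are assumptions to adapt to your API client.

```python
def find_template_by_tag(resources, tag):
    """Return the VMIDs of templates carrying the given tag.

    `resources` is assumed to mimic entries from Proxmox's
    /cluster/resources listing, where `template` is 1 for templates
    and `tags` is a semicolon-separated string (hypothetical shape;
    adjust to your client library).
    """
    return [
        r["vmid"]
        for r in resources
        if r.get("template") == 1 and tag in r.get("tags", "").split(";")
    ]
```

A build pipeline could then tag each freshly imported template with its image name and version, and later resolve the template to clone with a single lookup instead of maintaining a separate VMID registry.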
-
Hello
We are implementing a local pipeline to manage the disk image lifecycle the DevOps way:
We are facing efficiency problems with the last step, as we expect to scale up to 20 identical VMs created at the same time from the same disk image version. We run a 5-node production Proxmox cluster with Ceph RBD storage, and we likely have more than enough hardware and network capacity for this VDI use case.
Because of a provider limitation (it cannot use a disk image directly from Ceph), our current implementation creates an empty VM with a small disk, then uses `qm` commands over SSH to discard the empty disk and import/register the disk from the qcow2 image. This takes too much time and is error-prone in CI: on a timeout or a failure for any reason, the Terraform state remains locked (there is no error handling around the `qm` invocations). The failure rate is high enough that we have to monitor our pipeline and manually retry each failed VM job.
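Until a better approach is in place, one way to reduce the impact of transient failures in those `qm`-over-SSH steps is to wrap each remote command in a retry with an explicit timeout, so a single hiccup does not abort the pipeline and leave state locked. A hedged Python sketch (the command lists are placeholders for your actual `ssh node qm ...` invocations):

```python
import subprocess

def run_with_retry(cmd, attempts=3, timeout=300):
    """Run a command (e.g. ['ssh', 'node', 'qm', 'importdisk', ...]),
    retrying on non-zero exit or timeout.

    Hypothetical helper: attempt counts and timeouts should be tuned
    to how long a real disk import takes in your cluster.
    """
    last = None
    for _ in range(attempts):
        try:
            return subprocess.run(
                cmd, check=True, capture_output=True, text=True,
                timeout=timeout,
            )
        except (subprocess.CalledProcessError,
                subprocess.TimeoutExpired) as exc:
            last = exc  # remember the last error, then retry
    raise RuntimeError(f"command failed after {attempts} attempts: {cmd}") from last
```

Raising a single clear exception at the end also gives the CI job a well-defined failure point, instead of a half-applied sequence of `qm` actions.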
So we now plan to first register each disk image as a Proxmox template (and then clone it into VM instances), but this requires complex management of VM IDs and an LRU scheme for cleanup/recycling, together with a catalog that maps the proper image identifier (name+version) to its template.
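The VM ID and LRU bookkeeping described above can be sketched as a small catalog keyed by (name, version) that hands out VMIDs from a fixed pool and recycles the least-recently-used template's ID when the pool is exhausted. This is a hypothetical sketch: in a real pipeline, eviction would also have to destroy the corresponding Proxmox template (e.g. via `qm destroy`), which is elided here.

```python
from collections import OrderedDict

class TemplateCatalog:
    """Map (image name, version) -> template VMID with LRU recycling.

    `vmid_pool` is a fixed range of VMIDs reserved for templates
    (an assumption; choose a range unused by regular VMs).
    """

    def __init__(self, vmid_pool):
        self.free = list(vmid_pool)    # VMIDs not yet assigned
        self.entries = OrderedDict()   # (name, version) -> vmid, in LRU order

    def lookup(self, name, version):
        """Return the template VMID, marking it recently used."""
        key = (name, version)
        if key in self.entries:
            self.entries.move_to_end(key)
            return self.entries[key]
        return None

    def register(self, name, version):
        """Assign a VMID for a new template, evicting the LRU one if needed."""
        if not self.free:
            # Recycle the least-recently-used template's VMID.
            # (A real implementation would destroy that template first.)
            _, vmid = self.entries.popitem(last=False)
            self.free.append(vmid)
        vmid = self.free.pop(0)
        self.entries[(name, version)] = vmid
        return vmid
```

Cloning a VM would then be: look up (name, version); if present, clone that VMID; otherwise register a new VMID, import the qcow2 as a template under it, and clone.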
Could you please give us some tips on the best way to complete our provisioning process from a disk image?
Will the template/clone approach scale better?
Thank you in advance for your support