Replies: 6 comments
-
Additionally, I am unable to remove the primary storage: I disabled it, but there is still no delete option. From the KVM host I can run ceph with no errors, and I am able to ping the cluster monitors and the admin node.
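(As an editorial sketch of the checks described here, assuming the ceph CLI and a usable keyring are present on the KVM host; the monitor address is the masked placeholder used later in this thread:)

# confirm the cluster answers and a monitor is reachable over IPv6
ceph -s
ping -6 -c 3 20XX:YYYY:ZZZZ:LLLL::OO:24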
-
@tatay188
-
Thank you. I reinstalled both management servers only, to clean the DB, which seemed to be corrupted.
Primary management server 10.1.1.1
Secondary management server 10.1.1.2
This appears to work: the console proxy VM is up and running, which in my case is v-154-VM.
The secondary storage VM is not coming up.
The logs on the management server show this:
2025-04-01 21:34:47,979 ERROR [o.a.c.c.p.RootCACustomTrustManager] (pool-5799-thread-1:[]) (logid:) Certificate ownership verification failed for client: 10.1.1.2
2025-04-01 21:34:47,979 ERROR [c.c.u.n.Link] (AgentManager-SSLHandshakeHandler-20:[]) (logid:) SSL error caught during wrap data: Certificate ownership verification failed for client: 10.1.1.2, for local address=/10.1.1.1:8250, remote address=/10.1.1.2:37130.
2025-04-01 21:34:47,979 INFO [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-15:[]) (logid:) Connection from /10.1.1.2 closed but no cleanup was done.
2025-04-01 21:34:48,006 DEBUG [o.a.c.c.p.RootCACustomTrustManager] (pool-5800-thread-1:[]) (logid:) A client/agent attempting connection from address=10.1.1.2 has presented these certificate(s):
Certificate [1] :
Serial: 5a10500c8bc88a27
Not Before:Tue Apr 01 07:24:34 UTC 2025
Not After:Thu Mar 25 19:24:34 UTC 2055
Signature Algorithm:SHA256withRSA
Version:3
Subject DN:CN=csmgmtatl202
Issuer DN:CN=ca.cloudstack.apache.org
Alternative Names:[[7, 172.23.123.62], [7, fde0:f:2897:23:123:0:0:62], [7, fe80:0:0:0:e643:4bff:fe81:9460], [2, csmgmtatl202], [2, cloudstack.internal]]
Certificate [2] :
Serial: 79809785d3aceb1f
Not Before:Tue Apr 01 07:23:21 UTC 2025
Not After:Thu Mar 25 19:23:21 UTC 2055
Signature Algorithm:SHA256withRSA
Version:3
Subject DN:CN=ca.cloudstack.apache.org
Issuer DN:CN=ca.cloudstack.apache.org
Alternative Names:null
2025-04-01 21:34:48,007 ERROR [o.a.c.c.p.RootCACustomTrustManager] (pool-5800-thread-1:[]) (logid:) Certificate ownership verification failed for client: 10.1.1.2
2025-04-01 21:34:48,007 ERROR [c.c.u.n.Link] (AgentManager-SSLHandshakeHandler-20:[]) (logid:) SSL error caught during wrap data: Certificate ownership verification failed for client: 10.1.1.2, for local address=/10.1.1.1:8250, remote address=/10.1.1.2:37146.
2025-04-01 21:34:48,008 INFO [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-2:[]) (logid:) Connection from /10.1.1.2 closed but no cleanup was done.
2025-04-01 21:34:48,458 WARN [o.a.c.s.PremiumSecondaryStorageManagerImpl] (secstorage-1:[ctx-5ef23302]) (logid:40b0ff44) Unable to start secondary storage VM [242] due to [Unable to create a deployment for VM instance {"id":242,"instanceName":"s-242-VM","type":"SecondaryStorageVm","uuid":"712feb3b-ee2f-413c-b5cc-1f1fdb4d7489"}]. com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM instance {"id":242,"instanceName":"s-242-VM","type":"SecondaryStorageVm","uuid":"712feb3b-ee2f-413c-b5cc-1f1fdb4d7489"}Scope=interface com.cloud.dc.DataCenter; id=1
at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1237)
at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:5467)
at jdk.internal.reflect.GeneratedMethodAccessor239.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:106)
at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5591)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:99)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:652)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:600)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
2025-04-01 21:34:48,459 INFO [o.a.c.s.PremiumSecondaryStorageManagerImpl] (secstorage-1:[ctx-5ef23302]) (logid:40b0ff44) Unable to start secondary storage VM [242] for standby capacity, it will be recycled and will start a new one.
2025-04-01 21:34:48,477 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (secstorage-1:[ctx-5ef23302]) (logid:40b0ff44) Sync job-1499 execution on object VmWorkJobQueue.242
2025-04-01 21:34:51,525 DEBUG [c.c.c.CapacityManagerImpl] (secstorage-1:[ctx-5ef23302]) (logid:40b0ff44) VM instance {"id":242,"instanceName":"s-242-VM","type":"SecondaryStorageVm","uuid":"712feb3b-ee2f-413c-b5cc-1f1fdb4d7489"} state transited from [Stopped] to [Expunging] with event [ExpungeOperation]. VM's original host: null, new host: null, host before state transition: null
2025-04-01 21:34:51,534 DEBUG [c.c.v.ClusteredVirtualMachineManagerImpl] (secstorage-1:[ctx-5ef23302]) (logid:40b0ff44) Expunging vm VM instance {"id":242,"instanceName":"s-242-VM","type":"SecondaryStorageVm","uuid":"712feb3b-ee2f-413c-b5cc-1f1fdb4d7489"}
2025-04-01 21:34:51,534 DEBUG [c.c.v.ClusteredVirtualMachineManagerImpl] (secstorage-1:[ctx-5ef23302]) (logid:40b0ff44) Cleaning up NICS [] of VM instance {"id":242,"instanceName":"s-242-VM","type":"SecondaryStorageVm","uuid":"712feb3b-ee2f-413c-b5cc-1f1fdb4d7489"}.
2025-04-01 21:34:51,534 DEBUG [o.a.c.e.o.NetworkOrchestrator] (secstorage-1:[ctx-5ef23302]) (logid:40b0ff44) Cleaning network for vm: 242
2025-04-01 21:34:51,549 DEBUG [c.c.n.g.PublicNetworkGuru] (secstorage-1:[ctx-5ef23302]) (logid:40b0ff44) public network deallocate network: networkId: 200, ip: 199.5.15.2
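An observation on the errors above: the certificate presented by the peer (CN=csmgmtatl202) lists only 172.23.123.62, two IPv6 addresses, and its hostnames as alternative names; the 10.1.1.x management addresses are not among them, and that SAN-versus-source-address comparison is what RootCACustomTrustManager calls "certificate ownership verification". A quick way to see what each server presents on the cluster port, as an editorial sketch (requires OpenSSL 1.1.1+ for the -ext flag):

# dump the subject and SANs of the certificate served on cluster port 8250
openssl s_client -connect 10.1.1.2:8250 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName

If the 10.1.1.x addresses are indeed missing, regenerating the certificates so the SANs include them, or reviewing the global setting ca.plugin.root.auth.strictness that governs this check, are the usual directions to look; both are hedged suggestions, not confirmed fixes.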
The log on the AGENT:
2025-04-01 21:46:24,642 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:[]) (logid:) Executing command [/usr/share/cloudstack-common/scripts/vm/network/security_group.py get_rule_logs_for_vms ].
2025-04-01 21:46:24,829 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:[]) (logid:) Successfully executed process [8034] for command [/usr/share/cloudstack-common/scripts/vm/network/security_group.py get_rule_logs_for_vms ].
2025-04-01 21:46:24,829 DEBUG [agent.properties.AgentPropertiesFileHandler] (UgentTask-5:[]) (logid:) Property [hypervisor.uri] has empty or null value. Using default value [null].
2025-04-01 21:46:24,830 DEBUG [kvm.resource.LibvirtConnection] (UgentTask-5:[]) (logid:) Looking for libvirtd connection at: qemu:///system
2025-04-01 21:46:24,851 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:[]) (logid:) Host health check script path is not specified
2025-04-01 21:46:24,852 DEBUG [cloud.agent.Agent] (UgentTask-5:[]) (logid:) Sending ping: Seq 1-52: { Cmd , MgmtId: -1, via: 1, Ver: v1, Flags: 11, [{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{"v-154-VM":{"state":"PowerOn","host":"hv.kvmvcompatl2001.321communications.cloud"}},"_gatewayAccessible":"true","_vnetAccessible":"true","hostType":"Routing","hostId":"1","outOfBand":"false","wait":"0","bypassHostMaintenance":"false"}}] }
2025-04-01 21:46:24,875 DEBUG [cloud.agent.Agent] (Agent-Handler-1:[]) (logid:3f91d148) Received response: Seq 1-52: { Ans: , MgmtId: 250977680725600, via: 1, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.PingAnswer":{"_command":{"hostType":"Routing","hostId":"1","outOfBand":"false","wait":"0","bypassHostMaintenance":"false"},"sendStartup":"false","result":"true","wait":"0","bypassHostMaintenance":"false"}}] }
2025-04-01 21:46:47,392 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:[]) (logid:dd9050db) Processing command: com.cloud.agent.api.GetHostStatsCommand
2025-04-01 21:46:48,253 DEBUG [cloud.agent.Agent] (agentRequest-Handler-2:[]) (logid:4e06d514) Processing command: com.cloud.agent.api.GetVmStatsCommand
2025-04-01 21:46:48,253 DEBUG [agent.properties.AgentPropertiesFileHandler] (agentRequest-Handler-2:[]) (logid:4e06d514) Property [hypervisor.uri] has empty or null value. Using default value [null].
2025-04-01 21:46:48,253 DEBUG [kvm.resource.LibvirtConnection] (agentRequest-Handler-2:[]) (logid:4e06d514) Looking for libvirtd connection at: qemu:///system
2025-04-01 21:46:48,273 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-2:[]) (logid:4e06d514) Trying to get VM with name [v-154-VM].
2025-04-01 21:46:48,279 DEBUG [kvm.resource.LibvirtVMDef] (agentRequest-Handler-2:[]) (logid:4e06d514) Using informed label [hdc] for volume [null].
2025-04-01 21:46:48,280 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-2:[]) (logid:4e06d514) Found [3] network interface(s) for VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}].
2025-04-01 21:46:48,288 DEBUG [kvm.resource.LibvirtVMDef] (agentRequest-Handler-2:[]) (logid:4e06d514) Using informed label [hdc] for volume [null].
2025-04-01 21:46:48,288 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-2:[]) (logid:4e06d514) Found [2] disk(s) for VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}].
2025-04-01 21:46:48,294 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-2:[]) (logid:4e06d514) Ignoring disk [<disk device='cdrom' type='file'><driver name='qemu' type='raw' /><source file=''/><target dev='hdc' bus='ide'/></disk>] in VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}]'s stats since its deviceType is [cdrom].
2025-04-01 21:46:48,294 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-2:[]) (logid:4e06d514) Retrieved statistics for VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}]: [{"cpuTime":77890000000,"diskReadIOs":9707.0,"diskReadKBs":279018.0,"diskWriteIOs":13349.0,"diskWriteKBs":213746.0,"networkReadKBs":116786.5341796875,"networkWriteKBs":2075.14453125}].
2025-04-01 21:46:48,300 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-2:[]) (logid:4e06d514) Old stats exist for VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}]; therefore, the utilization will be calculated.
2025-04-01 21:46:48,301 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-2:[]) (logid:4e06d514) Calculated metrics for VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}]: [{"cpuUtilization":0.9648174332529319,"diskReadIOs":0.0,"diskReadKBs":0.0,"diskWriteIOs":245.0,"diskWriteKBs":1688.0,"entityType":"vm","intFreeMemoryKBs":875312.0,"memoryKBs":1048576.0,"networkReadKBs":7.56640625,"networkWriteKBs":9.48046875,"numCPUs":1,"targetMemoryKBs":1048576.0}].
2025-04-01 21:46:57,839 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:[]) (logid:95989193) Processing command: com.cloud.agent.api.GetStorageStatsCommand
2025-04-01 21:46:57,839 INFO [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:[]) (logid:95989193) Trying to fetch storage pool e76f8956-1a81-3e97-aff6-8dc3f199a48a from libvirt
2025-04-01 21:46:57,839 DEBUG [kvm.resource.LibvirtConnection] (agentRequest-Handler-4:[]) (logid:95989193) Looking for libvirtd connection at: qemu:///system
2025-04-01 21:46:57,861 INFO [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:[]) (logid:95989193) Asking libvirt to refresh storage pool e76f8956-1a81-3e97-aff6-8dc3f199a48a
2025-04-01 21:46:57,956 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:[]) (logid:95989193) Successfully refreshed pool e76f8956-1a81-3e97-aff6-8dc3f199a48a Capacity: (89.0110 TB) 97868596985856 Used: (5.60 GB) 6010871808 Available: (80.1011 TB) 88072118124544
2025-04-01 21:47:22,383 DEBUG [kvm.storage.MultipathSCSIAdapterBase] (MultipathMapCleanupJob:[]) (logid:) Executing command [/usr/share/cloudstack-common/scripts/storage/multipath/cleanStaleMaps.sh ].
2025-04-01 21:47:22,401 DEBUG [kvm.storage.MultipathSCSIAdapterBase] (MultipathMapCleanupJob:[]) (logid:) Successfully executed process [8053] for command [/usr/share/cloudstack-common/scripts/storage/multipath/cleanStaleMaps.sh ].
2025-04-01 21:47:22,401 DEBUG [kvm.storage.MultipathSCSIAdapterBase] (MultipathMapCleanupJob:[]) (logid:) Multipath Cleanup Job elapsed time (ms): 18; result: 0
2025-04-01 21:47:24,642 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:[]) (logid:) Executing command [/usr/share/cloudstack-common/scripts/vm/network/security_group.py get_rule_logs_for_vms ].
2025-04-01 21:47:24,827 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:[]) (logid:) Successfully executed process [8058] for command [/usr/share/cloudstack-common/scripts/vm/network/security_group.py get_rule_logs_for_vms ].
2025-04-01 21:47:24,828 DEBUG [agent.properties.AgentPropertiesFileHandler] (UgentTask-5:[]) (logid:) Property [hypervisor.uri] has empty or null value. Using default value [null].
2025-04-01 21:47:24,828 DEBUG [kvm.resource.LibvirtConnection] (UgentTask-5:[]) (logid:) Looking for libvirtd connection at: qemu:///system
2025-04-01 21:47:24,851 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:[]) (logid:) Host health check script path is not specified
2025-04-01 21:47:24,852 DEBUG [cloud.agent.Agent] (UgentTask-5:[]) (logid:) Sending ping: Seq 1-53: { Cmd , MgmtId: -1, via: 1, Ver: v1, Flags: 11, [{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{"v-154-VM":{"state":"PowerOn","host":"hv.kvmvcompatl2001.321communications.cloud"}},"_gatewayAccessible":"true","_vnetAccessible":"true","hostType":"Routing","hostId":"1","outOfBand":"false","wait":"0","bypassHostMaintenance":"false"}}] }
2025-04-01 21:47:24,870 DEBUG [cloud.agent.Agent] (Agent-Handler-3:[]) (logid:) Received response: Seq 1-53: { Ans: , MgmtId: 250977680725600, via: 1, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.PingAnswer":{"_command":{"hostType":"Routing","hostId":"1","outOfBand":"false","wait":"0","bypassHostMaintenance":"false"},"sendStartup":"false","result":"true","wait":"0","bypassHostMaintenance":"false"}}] }
2025-04-01 21:47:47,451 DEBUG [cloud.agent.Agent] (agentRequest-Handler-3:[]) (logid:b981df36) Processing command: com.cloud.agent.api.GetHostStatsCommand
2025-04-01 21:47:48,361 DEBUG [cloud.agent.Agent] (agentRequest-Handler-5:[]) (logid:66422334) Processing command: com.cloud.agent.api.GetVmStatsCommand
2025-04-01 21:47:48,362 DEBUG [agent.properties.AgentPropertiesFileHandler] (agentRequest-Handler-5:[]) (logid:66422334) Property [hypervisor.uri] has empty or null value. Using default value [null].
2025-04-01 21:47:48,362 DEBUG [kvm.resource.LibvirtConnection] (agentRequest-Handler-5:[]) (logid:66422334) Looking for libvirtd connection at: qemu:///system
2025-04-01 21:47:48,381 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-5:[]) (logid:66422334) Trying to get VM with name [v-154-VM].
2025-04-01 21:47:48,387 DEBUG [kvm.resource.LibvirtVMDef] (agentRequest-Handler-5:[]) (logid:66422334) Using informed label [hdc] for volume [null].
2025-04-01 21:47:48,387 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-5:[]) (logid:66422334) Found [3] network interface(s) for VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}].
2025-04-01 21:47:48,396 DEBUG [kvm.resource.LibvirtVMDef] (agentRequest-Handler-5:[]) (logid:66422334) Using informed label [hdc] for volume [null].
2025-04-01 21:47:48,396 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-5:[]) (logid:66422334) Found [2] disk(s) for VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}].
2025-04-01 21:47:48,401 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-5:[]) (logid:66422334) Ignoring disk [<disk device='cdrom' type='file'><driver name='qemu' type='raw' /><source file=''/><target dev='hdc' bus='ide'/></disk>] in VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}]'s stats since its deviceType is [cdrom].
2025-04-01 21:47:48,401 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-5:[]) (logid:66422334) Retrieved statistics for VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}]: [{"cpuTime":78540000000,"diskReadIOs":9707.0,"diskReadKBs":279018.0,"diskWriteIOs":13688.0,"diskWriteKBs":216070.0,"networkReadKBs":116794.1005859375,"networkWriteKBs":2084.625}].
2025-04-01 21:47:48,408 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-5:[]) (logid:66422334) Old stats exist for VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}]; therefore, the utilization will be calculated.
2025-04-01 21:47:48,409 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-5:[]) (logid:66422334) Calculated metrics for VM [{"name":"v-154-VM","uuid":"e4ee4966-84c3-4e23-80bf-7f49c90f59df"}]: [{"cpuUtilization":1.0812789034168413,"diskReadIOs":0.0,"diskReadKBs":0.0,"diskWriteIOs":339.0,"diskWriteKBs":2324.0,"entityType":"vm","intFreeMemoryKBs":875312.0,"memoryKBs":1048576.0,"networkReadKBs":7.56640625,"networkWriteKBs":9.48046875,"numCPUs":1,"targetMemoryKBs":1048576.0}].
2025-04-01 21:47:57,991 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:[]) (logid:24e4a527) Processing command: com.cloud.agent.api.GetStorageStatsCommand
2025-04-01 21:47:57,991 INFO [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:[]) (logid:24e4a527) Trying to fetch storage pool e76f8956-1a81-3e97-aff6-8dc3f199a48a from libvirt
2025-04-01 21:47:57,991 DEBUG [kvm.resource.LibvirtConnection] (agentRequest-Handler-1:[]) (logid:24e4a527) Looking for libvirtd connection at: qemu:///system
2025-04-01 21:47:58,013 INFO [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:[]) (logid:24e4a527) Asking libvirt to refresh storage pool e76f8956-1a81-3e97-aff6-8dc3f199a48a
2025-04-01 21:47:58,107 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:[]) (logid:24e4a527) Successfully refreshed pool e76f8956-1a81-3e97-aff6-8dc3f199a48a Capacity: (89.0110 TB) 97868596985856 Used: (5.60 GB) 6011473920 Available: (80.1011 TB) 88072117530624
^C
***@***.***:/var/log/cloudstack/agent# virsh list --all
Id Name State
--------------------------
1 v-154-VM running <<<Proxy VM running
The secondary storage VM is not running; should I start it manually?
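(System VMs are normally not started by hand; the management log above already says the stopped SSVM "will be recycled and will start a new one". As a hedged sketch, the recycle can be watched or forced with CloudMonkey, assuming a configured cmk profile; the filter value comes from the listSystemVms API:)

# list secondary storage VMs and their states
cmk list systemvms systemvmtype=secondarystoragevm
# destroying a stuck SSVM makes CloudStack deploy a fresh one
cmk destroy systemvm id=<uuid-from-the-previous-call>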
Tata Y.
… On Apr 1, 2025, at 4:14 PM, Wei Zhou ***@***.***> wrote:
weizhouapache left a comment (apache/cloudstack#10643)
@tatay188
Can you surround each ipv6 address by "[" and "]"
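For illustration, a hypothetical sketch of what that bracketed monitor list would look like, using the masked placeholder addresses from the qemu-img log later in this thread (that this belongs in the RBD primary storage's monitor/host setting is an assumption, not a confirmed fix):

# unbracketed, as currently stored:
#   20XX:YYYY:ZZZZ:LLLL::OO:24;20XX:YYYY:ZZZZ:LLLL::OO:26;...
# bracketed, so each IPv6 address stays intact:
#   [20XX:YYYY:ZZZZ:LLLL::OO:24];[20XX:YYYY:ZZZZ:LLLL::OO:26];[...]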
-
Every time I reinstall, something stops working and something else starts working properly.
-
@tatay188
-
Hello Wei. A single management server works! How should I add the additional management server so that it works properly? Both initial system VMs are up and running. I followed the process exactly. Something interesting: I am still not able to set my own security values when installing the servers. I should be able to have more than one management server IP internally, right? Here are the steps, on the secondary:
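(The steps themselves are cut off in the original post. As a hedged sketch, the usual documented procedure on an additional management server looks like the following, assuming 10.1.1.1 hosts the database, standard CloudStack package paths, and a placeholder password:)

# point this server at the existing database; without --deploy-as this only
# writes db.properties and does not redeploy the schema
cloudstack-setup-databases cloud:<dbpassword>@10.1.1.1
# both servers must share the same encryption key file (verify the path on
# your install before copying)
scp 10.1.1.1:/etc/cloudstack/management/key /etc/cloudstack/management/key
# configure and start the management service
cloudstack-setup-management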
-
problem
I am having the following problem: libvirt is unable to convert images from secondary storage to CEPH RBD.
CEPH is purely IPv6. The RBD image is created, and there are no errors on the CEPH side.
The initial system VM creation starts and then stops; the VMs never become enabled.
The initial VMs are deleted and the process starts over in a loop.
systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2025-04-01 01:19:22 UTC; 8h ago
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 2271 (libvirtd)
Tasks: 19 (limit: 32768)
Memory: 98.5M
CPU: 9.064s
CGroup: /system.slice/libvirtd.service
└─2271 /usr/sbin/libvirtd --listen
Apr 01 02:32:57 kvmvcompatl2001 libvirtd[2271]: invalid argument: Connections from inside daemon must be direct
Apr 01 02:32:57 kvmvcompatl2001 libvirtd[2271]: End of file while reading data: Input/output error
Apr 01 02:33:02 kvmvcompatl2001 libvirtd[2271]: invalid argument: Connections from inside daemon must be direct
Apr 01 02:33:02 kvmvcompatl2001 libvirtd[2271]: End of file while reading data: Input/output error
Apr 01 02:33:27 kvmvcompatl2001 libvirtd[2271]: invalid argument: Connections from inside daemon must be direct
Apr 01 02:33:27 kvmvcompatl2001 libvirtd[2271]: End of file while reading data: Input/output error
Apr 01 02:33:32 kvmvcompatl2001 libvirtd[2271]: invalid argument: Connections from inside daemon must be direct
The agent.log shows the following in a loop:
2025-04-01 02:33:54,480 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:[]) (logid:d64e452c) Starting copy from source image /mnt/a1aa3257-b554-3896-a168-a593ebde9994/5ccb81a1-26ec-4d57-a02a-37f81e09be08.qcow2 to RBD image 3cephUserandPool/f32a0f81-5661-41ac-832f-f5dfffa8b1e0
2025-04-01 02:33:54,480 DEBUG [utils.script.Script] (agentRequest-Handler-4:[]) (logid:d64e452c) Executing command [qemu-img convert -O raw -U --image-opts driver=qcow2,file.filename=/mnt/a1aa3257-b554-3896-a168-a593ebde9994/5ccb81a1-26ec-4d57-a02a-37f81e09be08.qcow2 rbd:3cephUserandPool/f32a0f81-5661-41ac-832f-f5dfffa8b1e0:mon_host=20XX:YYYY:ZZZZ:LLLL::OO:24;20XX:YYYY:ZZZZ:LLLL::OO:26;20XX:YYYY:ZZZZ:LLLL::OO:auth_supported=cephx:id=3cephUserandPool:key=KEYGENERATEDBYCEPH:rbd_default_format=2:client_mount_timeout=30 ].
2025-04-01 02:33:54,532 WARN [utils.script.Script] (agentRequest-Handler-4:[]) (logid:d64e452c) Execution of process [7500] for command [qemu-img convert -O raw -U --image-opts driver=qcow2,file.filename=/mnt/a1aa3257-b554-3896-a168-a593ebde9994/5ccb81a1-26ec-4d57-a02a-37f81e09be08.qcow2 rbd:3cephUserandPool/f32a0f81-5661-41ac-832f-f5dfffa8b1e0:mon_host=20XX:YYYY:ZZZZ:LLLL::OO:24;20XX:YYYY:ZZZZ:LLLL::OO:26;20XX:YYYY:ZZZZ:LLLL::OO:auth_supported=cephx:id=3cephUserandPool:key=KEYGENERATEDBYCEPH:rbd_default_format=2:client_mount_timeout=30 ] failed.
2025-04-01 02:33:54,532 DEBUG [utils.script.Script] (agentRequest-Handler-4:[]) (logid:d64e452c) Exit value of process [7500] for command [qemu-img convert -O raw -U --image-opts driver=qcow2,file.filename=/mnt/a1aa3257-b554-3896-a168-a593ebde9994/5ccb81a1-26ec-4d57-a02a-37f81e09be08.qcow2 rbd:3cephUserandPool/f32a0f81-5661-41ac-832f-f5dfffa8b1e0:mon_host=20XX:YYYY:ZZZZ:LLLL::OO:24;20XX:YYYY:ZZZZ:LLLL::OO:26;20XX:YYYY:ZZZZ:LLLL::OO:auth_supported=cephx:id=3cephUserandPool:key=KEYGENERATEDBYCEPH:rbd_default_format=2:client_mount_timeout=30 ] is [1].
2025-04-01 02:33:54,532 WARN [utils.script.Script] (agentRequest-Handler-4:[]) (logid:d64e452c) Process [7500] for command [qemu-img convert -O raw -U --image-opts driver=qcow2,file.filename=/mnt/a1aa3257-b554-3896-a168-a593ebde9994/5ccb81a1-26ec-4d57-a02a-37f81e09be08.qcow2 rbd:3cephUserandPool/f32a0f81-5661-41ac-832f-f5dfffa8b1e0:mon_host=20XX:YYYY:ZZZZ:LLLL::OO:24;20XX:YYYY:ZZZZ:LLLL::OO:26;20XX:YYYY:ZZZZ:LLLL::OO:auth_supported=cephx:id=3cephUserandPool:key=KEYGENERATEDBYCEPH:rbd_default_format=2:client_mount_timeout=30 ] encountered the error: [qemu-img: rbd:3cephUserandPool/f32a0f81-5661-41ac-832f-f5dfffa8b1e0:mon_host=20XX:YYYY:ZZZZ:LLLL::OO:24;20XX:YYYY:ZZZZ:LLLL::OO:26;20XX:YYYY:ZZZZ:LLLL::OO:auth_supported=cephx:id=3cephUserandPool:key=KEYGENERATEDBYCEPH:rbd_default_format=2:client_mount_timeout=30: error while converting raw: invalid conf option 550:5607:fff0::22:24;20XX:YYYY:ZZZZ:LLLL::OO:26;20XX:YYYY:ZZZZ:LLLL::OO:auth_supported: No such file or directory].
2025-04-01 02:33:54,532 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:[]) (logid:d64e452c) Failed to convert from /mnt/a1aa3257-b554-3896-a168-a593ebde9994/5ccb81a1-26ec-4d57-a02a-37f81e09be08.qcow2 to rbd:3cephUserandPool/f32a0f81-5661-41ac-832f-f5dfffa8b1e0:mon_host=20XX:YYYY:ZZZZ:LLLL::OO:24;20XX:YYYY:ZZZZ:LLLL::OO:26;20XX:YYYY:ZZZZ:LLLL::OO:auth_supported=cephx:id=3cephUserandPool:key=KEYGENERATEDBYCEPH:rbd_default_format=2:client_mount_timeout=30 the error was: qemu-img: rbd:3cephUserandPool/f32a0f81-5661-41ac-832f-f5dfffa8b1e0:mon_host=20XX:YYYY:ZZZZ:LLLL::OO:24;20XX:YYYY:ZZZZ:LLLL::OO:26;20XX:YYYY:ZZZZ:LLLL::OO:auth_supported=cephx:id=3cephUserandPool:key=KEYGENERATEDBYCEPH:rbd_default_format=2:client_mount_timeout=30: error while converting raw: invalid conf option 550:5607:fff0::22:24;20XX:YYYY:ZZZZ:LLLL::OO:26;20XX:YYYY:ZZZZ:LLLL::OO:auth_supported: No such file or directory
2025-04-01 02:33:54,532 INFO [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:[]) (logid:d64e452c) Attempting to remove storage pool a1aa3257-b554-3896-a168-a593ebde9994 from libvirt
2025-04-01 02:33:54,532 DEBUG [kvm.resource.LibvirtConnection] (agentRequest-Handler-4:[]) (logid:d64e452c) Looking for libvirtd connection at: qemu:///system
2025-04-01 02:33:54,534 INFO [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:[]) (logid:d64e452c) Storage pool a1aa3257-b554-3896-a168-a593ebde9994 has no corresponding secret. Not removing any secret.
2025-04-01 02:33:54,577 INFO [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:[]) (logid:d64e452c) Storage pool a1aa3257-b554-3896-a168-a593ebde9994 was successfully removed from libvirt.
2025-04-01 02:33:54,579 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:[]) (logid:d64e452c) Seq 3-8729383452727574659: { Ans: , MgmtId: 250977680725600, via: 3, Ver: v1, Flags: 110, [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to copy /mnt/a1aa3257-b554-3896-a168-a593ebde9994/5ccb81a1-26ec-4d57-a02a-37f81e09be08.qcow2 to f32a0f81-5661-41ac-832f-f5dfffa8b1e0","wait":"0","bypassHostMaintenance":"false"}}] }
2025-04-01 02:33:54,621 DEBUG [cloud.agent.Agent] (agentRequest-Handler-3:[]) (logid:d64e452c) Request:Seq 3-8729383452727574660: { Cmd , MgmtId: 250977680725600, via: 3, Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":"false","checkBeforeCleanup":"false","controlIp":"169.254.138.135","forceStop":"false","volumesToDisconnect":[],"vmName":"v-172-VM","executeInSequence":"false","wait":"0","bypassHostMaintenance":"false"}}] }
2025-04-01 02:33:54,621 DEBUG [cloud.agent.Agent] (agentRequest-Handler-3:[]) (logid:d64e452c) Processing command: com.cloud.agent.api.StopCommand
2025-04-01 02:33:54,621 DEBUG [resource.wrapper.LibvirtStopCommandWrapper] (agentRequest-Handler-3:[]) (logid:d64e452c) backing up the cmdline
2025-04-01 02:33:57,691 DEBUG [resource.wrapper.LibvirtStopCommandWrapper] (agentRequest-Handler-3:[]) (logid:d64e452c) Failed to backup cmdline file due to There was a problem while connecting to 169.254.138.135:3922
The libvirt error logs show:
libvirt: Remote Driver error : invalid argument: Connections from inside daemon must be direct
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'v-172-VM'
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'v-172-VM'
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'v-172-VM'
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'v-172-VM'
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'v-172-VM'
libvirt: Storage Driver error : Storage pool not found: no storage pool with matching uuid 'a1aa3257-b554-3896-a168-a593ebde9994'
libvirt: Secrets Driver error : Secret not found: no secret with matching uuid 'a1aa3257-b554-3896-a168-a593ebde9994'
The host VM reached the IPv6 IP.
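A note on the qemu-img failure above: the rbd target string is itself colon-separated, so the bare colons inside the IPv6 mon_host entries make the parser cut the first address apart, which is exactly what the error shows ("invalid conf option 550:5607:fff0::22:24;...: No such file or directory"). As a hedged sketch, the bracketed form suggested earlier in the thread would keep each address together; note that qemu's legacy rbd syntax may additionally require escaping every colon as "\:", so this is an illustration, not a verified command:

# failing form, split on the bare colons:
#   rbd:3cephUserandPool/<image>:mon_host=20XX:YYYY:ZZZZ:LLLL::OO:24;20XX:YYYY:ZZZZ:LLLL::OO:26;...
# bracketed form (colons inside the brackets possibly still needing "\:" escapes):
#   rbd:3cephUserandPool/<image>:mon_host=[20XX:YYYY:ZZZZ:LLLL::OO:24];[20XX:YYYY:ZZZZ:LLLL::OO:26];[...]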
versions
ACS 4.20.0
Hypervisor: KVM
Primary storage: CEPH Squid RBD
Secondary storage: NFS (EMC)
VXLAN only; no VLANs anywhere
The steps to reproduce the bug
1. Install Ceph, the management server, and the KVM host.
2. Add primary storage using CEPH RBD.
3. Add the host; the infrastructure VMs start, but do not complete.
...
What to do about it?
Any help or guidance would be appreciated; this seems like a bug. I am unable to try IPv4, as the CEPH cluster is set up as IPv6-only.