Proxmox_LXC

NFS LXC

https://unix.stackexchange.com/questions/450308/how-to-allow-specific-proxmox-lxc-containers-to-mount-nfs-shares-on-the-network

Yes, it's possible. Simply create a new profile (based on lxc-container-default-cgns) and use it for the specific containers. So first run

cp /etc/apparmor.d/lxc/lxc-default-cgns /etc/apparmor.d/lxc/lxc-default-with-nfs

Then edit the new file /etc/apparmor.d/lxc/lxc-default-with-nfs:

  1. replace profile lxc-container-default-cgns with profile lxc-container-default-with-nfs
  2. put the NFS configuration (see below) just before the closing bracket (})

NFS configuration

Either write

mount fstype=nfs*,
mount fstype=rpc_pipefs,

or (being more explicit)

mount fstype=nfs,
mount fstype=nfs4,
mount fstype=nfsd,
mount fstype=rpc_pipefs,

and finally run service apparmor reload.
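
After the edit, the profile should look roughly like this (a sketch; the lines inherited from lxc-default-cgns vary by LXC version, so keep whatever your copy contains and only add the mount rules):

profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  deny mount fstype=devpts,
  mount fstype=cgroup -> /sys/fs/cgroup/**,

  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
}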

Use the new profile

Edit /etc/pve/lxc/${container_id}.conf and append this line:
lxc.apparmor.profile: lxc-container-default-with-nfs

Then stop the container and start it again, e.g. like this:

pct stop ${container_id} && pct start ${container_id}

Now mounting NFS shares should work.
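
For example, inside the container (server address and export path are placeholders):

mount -t nfs 192.168.1.50:/export/data /mnt/data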

disabling cluster mode

https://www.reddit.com/r/Proxmox/comments/avk2gx/help_cluster_not_ready_no_quorum_500/

# stop the cluster stack
systemctl stop pve-cluster
systemctl stop corosync
# restart the cluster filesystem in local mode so /etc/pve becomes writable
pmxcfs -l
# remove the corosync configuration
rm /etc/pve/corosync.conf
rm /etc/corosync/*
# stop the local-mode pmxcfs and start the service normally again
killall pmxcfs
systemctl start pve-cluster
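
To verify, pvecm status should now complain about a missing corosync configuration instead of reporting a cluster:

pvecm status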

move container to another node

# create a backup of container 130 on the source node
vzdump 130 --mode stop
# copy the dump to the target node (new_node is a placeholder hostname)
scp /var/lib/vz/dump/vzdump-lxc-130-2016_03_02-02_31_03.tar.gz root@new_node:/var/lib/vz/dump/
# restore it on the target node, here under the new container ID 1234
pct restore 1234 /var/lib/vz/dump/vzdump-lxc-130-2016_03_02-02_31_03.tar.gz --ignore-unpack-errors 1 --unprivileged
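
If the target node uses a different storage, pct restore accepts it explicitly (local-lvm is a placeholder name):

pct restore 1234 /var/lib/vz/dump/vzdump-lxc-130-2016_03_02-02_31_03.tar.gz --storage local-lvm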

run unms in lxc

Add this line to /usr/lib/lxc/ID.conf:

lxc.apparmor.profile = unconfined

then stop and start the container:

lxc-stop -n ID
lxc-start -n ID

By default the LXC container is heavily locked down (unprivileged), hence the unconfined profile.
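
On Proxmox the same can also be set in /etc/pve/lxc/ID.conf, using the colon syntax shown in the NFS section above:

lxc.apparmor.profile: unconfined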

Waiting for UNMS to start
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                            NAMES
51d5a2823aef        ubnt/unms:1.1.2           "/usr/bin/dumb-init …"   9 seconds ago       Up 6 seconds                                                         unms
320bb0c8b23e        ubnt/unms-crm:3.1.2       "make server_with_mi…"   10 seconds ago      Up 8 seconds        80-81/tcp, 443/tcp, 9000/tcp, 2055/udp           ucrm
c8ced9596c84        ubnt/unms-netflow:1.1.2   "/usr/bin/dumb-init …"   11 seconds ago      Up 7 seconds        0.0.0.0:2055->2055/udp                           unms-netflow
27b9c3344742        redis:5.0.5-alpine        "docker-entrypoint.s…"   15 seconds ago      Up 11 seconds                                                        unms-redis
1f1fd4ad8b11        ubnt/unms-nginx:1.1.2     "/entrypoint.sh ngin…"   15 seconds ago      Up 10 seconds       0.0.0.0:80-81->80-81/tcp, 0.0.0.0:443->443/tcp   unms-nginx
dcbc960d019f        postgres:9.6.12-alpine    "docker-entrypoint.s…"   15 seconds ago      Up 12 seconds                                                        unms-postgres
1ac50d102245        rabbitmq:3.7.14-alpine    "docker-entrypoint.s…"   15 seconds ago      Up 13 seconds                                                        unms-rabbitmq
a34be8e22abe        ubnt/unms-fluentd:1.1.2   "/entrypoint.sh /bin…"   17 seconds ago      Up 15 seconds       5140/tcp, 127.0.0.1:24224->24224/tcp             unms-fluentd
UNMS is running

Proxmox cluster

Extending the timeout to 10 seconds

Following https://www.thegeekdiary.com/how-to-change-pacemaker-cluster-heartbeat-timeout-in-centos-rhel-7/ I put "token: 9500" into /etc/pve/corosync.conf, which should extend the quorum timeout to roughly 10 seconds.
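
The token setting belongs in the totem section of /etc/pve/corosync.conf; a sketch of the relevant part (cluster_name and the version numbers are placeholders, and config_version must be incremented on every edit so the change propagates):

totem {
  cluster_name: mycluster
  config_version: 4
  token: 9500
  version: 2
}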

proxmox 7 centos 7

With the old systemd shipped in CentOS 7 (and other similarly old systems) there is no console and no network under Proxmox 7. The solution is to install a newer version of systemd.

Workaround proposed on https://forum.proxmox.com/threads/pve-7-wont-start-centos-7-container.97834/post-425419:

  1. gain access to the CT with pct enter <CTID>
  2. enable the network with ifup eth0
  3. issue yum update
  4. exit from the CT
  5. stop the CT with pct stop <CTID>
  6. start the CT normally
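
The same steps can be run non-interactively from the host with pct exec (a sketch; CTID 101 is a placeholder):

pct exec 101 -- ifup eth0
pct exec 101 -- yum -y update
pct stop 101
pct start 101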

proxmox two node cluster

Add the following to /etc/pve/corosync.conf:

quorum {
    provider: corosync_votequorum
    two_node: 1
    wait_for_all: 0
}
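
Note that two_node: 1 implicitly turns on wait_for_all, which is why wait_for_all: 0 is set explicitly here: it lets a single node regain quorum without waiting for its peer (see votequorum(5)).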