# Notes on getting RHCOS/RHEL to boot on LPAR

Target: Booting the RHCOS 4.16 live installer (from `rhcos-416.94.202410211619-0-live.s390x.iso`) in an LPAR with Fibre Channel (FCP) storage, to install a node that will join an existing OpenShift cluster.

## s390x ISOs

### RHCOS

```bash
tree rhcos
rhcos
├── boot.catalog
├── coreos
│   ├── features.json
│   ├── igninfo.json
│   ├── kargs.json
│   └── miniso.dat
├── generic.ins
└── images
    ├── cdboot.img
    ├── cdboot.prm
    ├── genericdvd.prm
    ├── generic.prm
    ├── initrd.addrsize
    ├── pxeboot
    │   ├── initrd.img
    │   ├── kernel.img
    │   └── rootfs.img
    └── redhat.exec

4 directories, 15 files
```

### RHEL

```bash
❯ tree rhel
rhel
├── generic.ins
└── images
    ├── boot.cat
    ├── cdboot.img
    ├── cdboot.prm
    ├── genericdvd.prm
    ├── generic.prm
    ├── initrd.addrsize
    ├── initrd.img
    ├── install.img
    ├── kernel.img
    └── redhat.exec

2 directories, 11 files
```

## Sample .ins Files (LPAR Boot Instructions)

The `.ins` file for LPAR (often named `generic.ins`) specifies the kernel, initrd, parm file, and address-size file, along with the memory addresses to load them at. The RHCOS ISO provides these files under an `images/` directory. A minimal example (for RHCOS 4.x on s390x) is as follows [@rhel_boot_params] [@ibm_install_ocp]:

```bash
images/kernel.img 0x00000000
images/initrd.img 0x02000000
images/genericdvd.prm 0x00010480
images/initrd.addrsize 0x00010408
```

This assumes the ISO contents are available (e.g. via FTP or attached as virtual media) with the same directory structure. The above instructs the loader to place `kernel.img` at memory address `0`, `initrd.img` at `0x02000000`, the parameter file at `0x00010480`, and the `initrd.addrsize` file at `0x00010408`. These standard addresses are provided by Red Hat [@rhel_boot_params]. You would point the HMC to this `.ins` file when IPLing the LPAR to boot RHCOS.
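Before IPLing, it can save a trip to the HMC to sanity-check the `.ins` file: every path it references should exist relative to the `.ins`, and every load address should be a hex number. A minimal sketch (the temp directory and stub files below only recreate the example layout so the loop has something to check; in practice you would run the same loop against your real extracted ISO):

```bash
#!/bin/sh
# Sanity-check a generic.ins file: each referenced file must exist next
# to the .ins, and each load address must look hexadecimal.
set -eu

workdir=$(mktemp -d)
mkdir -p "$workdir/images"
# Stand-in payloads (replace with a real extracted ISO tree).
touch "$workdir/images/kernel.img" "$workdir/images/initrd.img" \
      "$workdir/images/genericdvd.prm" "$workdir/images/initrd.addrsize"

cat > "$workdir/generic.ins" <<'EOF'
images/kernel.img 0x00000000
images/initrd.img 0x02000000
images/genericdvd.prm 0x00010480
images/initrd.addrsize 0x00010408
EOF

while read -r path addr; do
    [ -f "$workdir/$path" ] || { echo "missing file: $path"; exit 1; }
    case "$addr" in
        0x[0-9a-fA-F]*) ;;
        *) echo "bad address: $addr"; exit 1 ;;
    esac
done < "$workdir/generic.ins"
echo "generic.ins OK"
```

A failure here (a typoed path, a stray character in an address) is much easier to diagnose than a silent IPL hang on the HMC.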
```bash
# No diff between RHEL and RHCOS generic INS file
diff rhcos/generic.ins rhel/generic.ins | wc -l
0
```

## Sample .prm File (Kernel Parameters for RHCOS)

The `genericdvd.prm` file (or a copy thereof) contains the kernel boot parameters required to start the RHCOS installer on IBM Z. For RHCOS, the parameters direct the CoreOS live installer to the target install disk and the Ignition config. A minimal example for RHCOS 4.16 in an LPAR with Fibre Channel storage is below; placeholders in angle brackets must be replaced with your environment's values (line breaks are for readability; each space-separated entry is a kernel parameter):

```bash
rd.neednet=1 console=ttysclp0
coreos.inst.install_dev=sda
coreos.live.rootfs_url=http://<http-server>/rhcos-416.94.202410211619-0-live-rootfs.s390x.img
coreos.inst.ignition_url=http://<http-server>/ignition/worker.ign
ip=dhcp nameserver=<dns-server>
cio_ignore=all,!condev
zfcp.allow_lun_scan=0
rd.zfcp=0.0.<device>,0x<wwpn>,0x<lun>
```

### Explanation

`rd.neednet=1` – Ensures networking is brought up (required when using network kernel args like multiple `ip=` entries) [@ocp_ibm_z].

`console=ttysclp0` – Use the SCLP line-mode console (the standard console for LPAR) [@ibm_install_ocp].

`coreos.inst.install_dev=sda` – The installation target block device. In this example, we use the first SCSI disk (`/dev/sda`) attached via FCP. (On IBM Z, DASD devices would be named `dasda`, etc.) [@ocp_ibm_z]

`coreos.live.rootfs_url=...` – URL to the RHCOS live root filesystem image. The RHCOS ISO's initrd will fetch this rootfs image. It should point to the `*-live-rootfs.s390x.img` file matching the RHCOS version [@ibm_install_ocp]. Only HTTP/HTTPS sources are supported for this URL [@ocp_install].

`coreos.inst.ignition_url=...` – URL to the Ignition config (in this case, a worker node Ignition) that will configure the node to join the cluster [@ibm_install_ocp]. This Ignition file (e.g. `worker.ign` or `master.ign`) should be hosted on an HTTP/HTTPS server accessible to the LPAR.

`ip=dhcp` (or a static `ip=...:none`) – Network configuration.
Here we use DHCP for simplicity. If using static networking, specify the IP, gateway, netmask, hostname, and device; for example: `ip=192.0.2.10::192.0.2.1:255.255.255.0:rhcos-node:encccw0.0.0600:none` (the format is `ip=<ip>::<gateway>:<netmask>:<hostname>:<interface>:none`) [@ibm_install_ocp].

`nameserver=<dns-server>` – DNS server to use (if not provided via DHCP) [@ibm_install_ocp].

`cio_ignore=all,!condev` – Ignore all I/O devices except the console device. This speeds up boot and device discovery on systems with many devices [@rhel_boot_params] (the installer will activate needed devices later).

`zfcp.allow_lun_scan=0` – Disable automatic LUN scanning on FCP devices. This is recommended to avoid scanning all LUNs when we will explicitly specify the target LUN(s) [@ibm_install_ocp].

`rd.zfcp=0.0.<device>,0x<wwpn>,0x<lun>` – Brings up the FCP-attached SCSI disk. Provide the channel subsystem device number of the FCP adapter, and the target WWPN and LUN of the storage device [@rhel_boot_params]. For example: `rd.zfcp=0.0.4000,0x5005076300C213E9,0x5022000000000000` for FCP adapter `0.0.4000`, WWPN `0x5005076300C213E9`, LUN `0x5022000000000000` [@rhel_boot_params]. (Repeat `rd.zfcp` for multiple paths or a multipath setup as needed.)

These parameters instruct the RHCOS live kernel to fetch its rootfs image and Ignition config over the network and then install CoreOS to the specified disk. In practice, you would replace the placeholders (`<http-server>`, `<device>`, `<wwpn>`, `<lun>`, etc.) with your environment's values (IP addresses, device IDs, etc.). Ensure an HTTP server is hosting the rootfs image and Ignition files (as prepared in your OpenShift install process) [@ibm_install_ocp].

## RHCOS ISO Mount

```bash
# Mount RHCOS ISO from subdirectory
sudo mount -o loop ./rhcos/rhcos-416.94.202410211619-0-live.s390x.iso ~/mnt/rhcos
```

## RHEL ISO Mount

```bash
# Mount RHEL ISO from subdirectory
sudo mount -o loop ./rhel/rhel-9.5-s390x-boot.iso ~/mnt/rhel
```

## Working with ISO Content in Read-Write Mode

ISO9660 filesystems are inherently read-only.
To work with the content in a read-write manner:

### Option 1: Extract and Work with Content

```bash
# For RHCOS
mkdir -p ~/rw-rhcos
cp -r ~/mnt/rhcos/* ~/rw-rhcos/
# Now modify content in ~/rw-rhcos/

# For RHEL
mkdir -p ~/rw-rhel
cp -r ~/mnt/rhel/* ~/rw-rhel/
# Now modify content in ~/rw-rhel/
```

### Option 2: Use OverlayFS (Advanced)

```bash
# For RHCOS
mkdir -p ~/overlay-rhcos/{lower,upper,work,merged}
sudo mount -o loop ./rhcos/rhcos-416.94.202410211619-0-live.s390x.iso ~/overlay-rhcos/lower
# Use $HOME rather than ~ inside the option string: the shell does not
# tilde-expand after the commas, so mount would receive a literal "~".
sudo mount -t overlay \
  -o lowerdir=$HOME/overlay-rhcos/lower,upperdir=$HOME/overlay-rhcos/upper,workdir=$HOME/overlay-rhcos/work \
  overlay ~/overlay-rhcos/merged
# Now you can modify files in ~/overlay-rhcos/merged and changes will be
# stored in ~/overlay-rhcos/upper
```

### Option 3: Create a New ISO After Modifications

```bash
# After making changes in the extracted directory, create a new ISO:
sudo genisoimage -o new-rhcos.iso -r -J -V "RHCOS_NEW" ~/rw-rhcos/
```

Note that this rebuilds the file tree only; it does not necessarily reproduce the s390x boot (IPL) records of the original media, so verify that the resulting ISO actually IPLs before relying on it.

## Unmount Commands

When you're done using the ISOs, unmount them with:

```bash
sudo umount ~/mnt/rhcos
sudo umount ~/mnt/rhel
```

## Retrieve Ignition Files

When adding a worker node to an existing OpenShift 4.x IPI cluster that's been running for more than 24 hours, you must generate a fresh worker Ignition config. The original `worker.ign` from installation contains a bootstrap certificate that expires after 24 hours, so a new node using it will fail to join [@ign]. OpenShift's Machine Config Operator maintains a secret with the latest worker Ignition pointer config (which includes the cluster's current internal certificate and URL).
You can extract this via the `oc` CLI:

```bash
# Login as a cluster-admin and switch to the machine API namespace
oc project openshift-machine-api

# Extract the worker Ignition JSON to a file (e.g., worker.ign)
oc extract secret/worker-user-data-managed --keys=userData --to=- > worker.ign
```

## Approve Node Join

As the new node boots and its kubelet starts, it issues certificate signing requests that must be approved before it joins (typically a client CSR first, then a serving CSR once the first is approved):

```bash
oc get csr -o name  # find the CSR names
oc adm certificate approve csr-abc123
oc adm certificate approve csr-xyz456
```

# References

- @ign: https://access.redhat.com/solutions/5504291
- @rhel_boot_params: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automatically_installing_rhel/preparing-a-rhel-installation-on-64-bit-ibm-z_rhel-installer#customizing-boot-parameters_preparing-a-rhel-installation-on-64-bit-ibm-z
- @ibm_install_ocp: https://community.ibm.com/community/user/ibmz-and-linuxone/blogs/gerald-hosch1/2024/07/26/installing-red-hat-openshift-with-lpar-on-ibm-z?communityKey=fd56de68-d38b-499b-a1f4-51010f4eee66
- @ocp_ibm_z: https://www.redhat.com/en/blog/installing-ocp-in-a-mainframe-z-series#:~:text=,bootstrap%20will%20be%20downloaded%20from
- @ocp_install: https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/postinstallation_configuration/configuring-multi-architecture-compute-machines-on-an-openshift-cluster#creating-multi-arch-compute-nodes-ibm-power
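## Appendix: Approving Pending CSRs in Bulk

When several nodes join at once, approving CSRs one at a time gets tedious. The filter below selects only `Pending` names from `oc get csr` output; the sample text is a stand-in so the sketch runs without a cluster, and the commented pipeline shows how it would be wired to a live one:

```bash
# Sample `oc get csr` output (stand-in for a real cluster; column
# contents are illustrative).
sample_output='NAME        AGE  SIGNERNAME                                   REQUESTOR                                                                  CONDITION
csr-abc123  2m   kubernetes.io/kube-apiserver-client-kubelet  system:serviceaccount:openshift-machine-config-operator:node-bootstrapper  Pending
csr-old999  1h   kubernetes.io/kubelet-serving                system:node:worker-0                                                       Approved,Issued'

# Skip the header row and print only names whose CONDITION is Pending.
printf '%s\n' "$sample_output" | awk 'NR > 1 && $NF == "Pending" { print $1 }'
# → csr-abc123

# Against a live cluster (GNU xargs; -r skips the approve call when
# nothing is pending):
#   oc get csr --no-headers | awk '$NF == "Pending" { print $1 }' \
#     | xargs -r oc adm certificate approve
```

Review what you are approving before automating this on a production cluster; blind bulk approval defeats the point of the CSR gate.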