The Subscription Cannot Be Run at This Time Please Wait a Few Minutes and Try Again

Content

Versions:

Limitations impacting all versions

Unable to access Cloud Pak for Integration user interfaces


Issue: When users try to access the Cloud Pak for Integration UI routes, they get the message, Application is not available.

Cause: Network traffic has not been allowed between the deployed instance of Platform Navigator and the Ingress Controller, as required. (If you would like more information about this policy, review the Red Hat documentation.)

Solution:

Using the CLI

1. Log in to the Red Hat OpenShift cluster CLI as a Cluster Administrator.

2. Confirm the endpointPublishingStrategy type of the IngressController:

    oc get --namespace openshift-ingress-operator ingresscontrollers/default \
      --output jsonpath='{.status.endpointPublishingStrategy.type}'; echo

3. If the type value from the previous step is HostNetwork, ingress traffic must be enabled through the default namespace. Add the label network.openshift.io/policy-group=ingress to the default namespace:

                oc label namespace default 'network.openshift.io/policy-group=ingress'                              

Using the Red Hat OpenShift web console

  1. Log in to the Red Hat OpenShift web console for the cluster as a Cluster Administrator.
  2. In the navigation pane, click Home > Search. Click to expand Project, select the openshift-ingress-operator namespace, and then search for the resource IngressController.
  3. Click to open the default IngressController resource and view the YAML to confirm the value of spec.endpointPublishingStrategy.
  4. If the value of spec.endpointPublishingStrategy.type is HostNetwork, ingress traffic must be enabled through the default namespace. In the left navigation pane, click Home > Search. Search for the resource Namespace, select the default namespace, and click Edit Labels.

  5. Add the label network.openshift.io/policy-group=ingress, then click Save.

Expired leaf certificates not automatically refreshed

Issue: Self-signed CA certificate refresh does not automatically refresh leaf certificates, resulting in unavailable services. The user gets an Application Unavailable message when attempting to access the Platform UI or other capabilities in Cloud Pak for Integration.

Additionally, the management-ingress pod logs show an error:

    Error: exit status 1
    2021/01/23 16:56:00 [emerg] 44#44: invalid number of arguments in "ssl_certificate" directive in /tmp/nginx-cfg123456789:123
    nginx: [emerg] invalid number of arguments in "ssl_certificate" directive in /tmp/nginx-cfg123456789:123
    nginx: configuration file /tmp/nginx-cfg123456789 test failed

Unable to push images to local Azure Container Registry

When attempting to use the cloudctl command to mirror images, the user gets this error:

    400 Bad Request Request Header Or Cookie Too Large

Cause: May be a limitation of Azure Container Registry, which does not accept larger headers.

Solution: Add the USE_SKOPEO=1 argument to the cloudctl command, which causes it to use Skopeo v1.0+ on each image, one at a time (Skopeo v1.0+ doesn't do batching).
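A hedged sketch of the workaround, assuming USE_SKOPEO is supplied as an environment variable when running the mirroring command; the case, inventory, and argument values are placeholders for whatever you are already using:

    # Force cloudctl to mirror with Skopeo, one image at a time
    export USE_SKOPEO=1
    cloudctl case launch \
      --case <case-archive> \
      --inventory <inventory-name> \
      --action mirror-images \
      --args "<your existing mirroring arguments>"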


Cloud Pak for Integration 2021.4

No support for Red Hat® OpenShift® Container Platform version 4.9

IBM Cloud Pak® for Integration is not currently supported on Red Hat® OpenShift® Container Platform version 4.9, due to limitations from dependencies.

API Test Generation can fail to add test cases for large data

The API Test Generation feature can fail to add new test cases based on insights when the request or response data is large. After the user has clicked Add new test case, they check the results (after the progress bar reaches the end), and find that no new test case has been added.


Cloud Pak for Integration 2021.3

No support for Red Hat® OpenShift® Container Platform version 4.9

IBM Cloud Pak® for Integration is not currently supported on Red Hat® OpenShift® Container Platform version 4.9 due to limitations from dependencies.

AI Test Generation shows as Processing on insight generation

When generating insights for your test suite using the traces observed by AI Test Generation, the generation can be seen as 'Processing' for some time. If this does not resolve, it is recommended to restart the insight generation process.

API Manager Events shows AsyncAPI 2.1 API documents as 2.0

Where an API is defined as AsyncAPI 2.1, the UI can show it as AsyncAPI 2.0. To validate the version, open the API in the editor and in the source view find the API version at the top of the document.

Event Streams operator install reports Failed status

When installing the Event Streams operator, shortly after the install starts, the Red Hat OpenShift console shows a Failed status for the operator install. To resolve this, use the Red Hat OpenShift console or run an 'oc get pods' command to view the status of the Event Streams operator pod. If the pod is currently running a container called 'init', this means the operator is doing initialisation steps. Wait for the init container to complete and the operator container to be created in the pod. Once the operator pod becomes ready, the status in the Red Hat OpenShift console will update to "Succeeded". If the status remains "Failed", review the "Events" for the operator pod to see what is causing the failure.
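A minimal sketch of those checks from the CLI; the namespace and pod name are placeholders:

    # Find the Event Streams operator pod and check whether its 'init' container is still running
    oc get pods -n <operator-namespace>
    # Describe the pod to watch init container progress and review the Events section
    oc describe pod <eventstreams-operator-pod-name> -n <operator-namespace>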

Platform Navigator users unable to access other namespaces

Users onboarded via LDAP are able to see instances of capabilities deployed inside the same namespace as the Platform Navigator, but not instances from other namespaces. To resolve this, access the Administration Hub from the left navigation menu and then Identity Providers. Under Teams and Service IDs, create a new IAM team which you can add users to for the required namespaces. In the Resources tab, assign permissions to the required namespaces.

IBM Automation home page displays no instance message when both API and Event Endpoints are enabled

When the Event Endpoint type is enabled in an API Connect instance, the message "No event endpoint management instance" is displayed in the "Event endpoint management" tile on the IBM Automation home page. The message is incorrect; the Event Endpoint Management capability is available as part of the API Connect instance. To access the capability, click the link to the instance in the "API lifecycle management" tile.

This issue is also applicable if you enabled API endpoints in an Event Endpoint Management instance, which would result in the message "No API management instances" being displayed in the "API lifecycle management" tile.

Cloud Pak for Integration 2021.2

Unable to reinstall API Connect or Event Endpoint Management

After uninstalling API Connect or Event Endpoint Management, when attempting to reinstall, the status of the API Connect CR APIConnectCluster is seen as 0/6.
The ManagementCluster CR will be in the 'deleting' state, which is preventing the installation from starting. To resolve this, follow the steps below:

  1. Remove the newly installed API Connect or Event Endpoint Management via the Platform Navigator.
  2. Remove the uninstall-pgo finalizer in the ManagementCluster CR under metadata.finalizers, and replace it with an empty array (see the sketch after this list).
  3. Reinstall the new instance of API Connect or Event Endpoint Management.
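A minimal sketch of step 2 from the CLI; the instance name and namespace are placeholders, and clearing metadata.finalizers in this way should only be done on the stuck ManagementCluster CR:

    oc patch managementcluster <management-cluster-name> -n <namespace> \
      --type merge -p '{"metadata":{"finalizers":[]}}'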

DataPower API Gateway not configured in catalog settings with openTracing enabled

When creating an instance of DataPower API Gateway with openTracing enabled, users can see the configurator job fail while the gateway pod is re-spinning. To resolve this, delete the configurator job in the OCP UI, or use the command:

    oc delete job <configurator-job-name> -n <namespace>

When publishing a product, the UI can display "requires a DataPower API Gateway, and no gateway service of this type is configured in the catalog settings". To resolve this, delete the APIC install instance from the Platform Navigator or OCP UI and reinstall, ensuring that the gateway is automatically configured via the Cloud Manager UI after reinstall.

Deleting a namespace can result in it staying in the terminating state

When attempting to delete a namespace, the deletion can be prevented by objects within the namespace that are stuck in the terminating state because they have finalizers attached. To resolve this, delete the finalizers from the following objects within the namespace stuck in a terminating state (a sketch of the commands follows the list):

Client

RoleBinding

OperandRequest
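A minimal sketch of clearing the finalizers from the CLI; <kind> stands for each of the object types listed above, and the names are placeholders:

    # List the stuck objects, then clear their finalizers so the namespace can finish terminating
    oc get <kind> -n <namespace>
    oc patch <kind> <name> -n <namespace> --type merge -p '{"metadata":{"finalizers":[]}}'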

Upgrading messaging instances using the Upgrade panel in Platform Navigator gives a 'Custom resource missing fields' error

When trying to change the version of messaging instances through the Platform Navigator, users may come across an error message saying "Custom resources missing fields", and not get the option to upgrade their messaging instance.

In order to modify the version, users should instead Edit the custom resource, either through the Platform Navigator UI or using the oc command line tool. Depending on the version being upgraded to, the license value in the CR may need to be updated in order to change version.
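As a rough illustration of editing such a CR from the CLI, assuming an MQ QueueManager instance; the resource kind, names, version, and license identifier are placeholders and depend on the release you are upgrading to:

    # Hypothetical example: edit the CR, then update the version and (if required) the license
    #   oc edit queuemanager <instance-name> -n <namespace>
    spec:
      version: <target-version>
      license:
        accept: true
        license: <license-id-required-by-target-version>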

Unable to delete zen-onboarding warning

There are instances where the PlatformNavigator CR status will be stuck on the warning:

"IBM Automation User Onboarding Task did not 'Complete'. To remove this warning, delete Job [zen-onboarding] and ConfigMap [zen-onboarding-script]. To complete IBM Automation User Onboarding, please follow http://ibm.biz/user-onboarding."

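A minimal sketch of the cleanup that the warning asks for; the namespace is a placeholder for wherever the Platform Navigator is installed:

    oc delete job zen-onboarding -n <platform-navigator-namespace>
    oc delete configmap zen-onboarding-script -n <platform-navigator-namespace>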
Cloud Pak for Integration 2021.1

Inability to log in to the App Connect Dashboard or App Connect Designer as user kubeadmin

If an instance of the App Connect Dashboard or App Connect Designer was created with a spec.version value that resolves to 11.0.0.11-r1 or earlier, you cannot log in to the instance as the kubeadmin user if you choose the Red Hat OpenShift authentication option.

Aspera Operator not supported on OCP 4.7

On Red Hat OpenShift Container Platform version 4.7, the Aspera operator install fails with Init:CrashLoopBackOff. For Cloud Pak for Integration 2021.1, Aspera is only supported on Red Hat OpenShift 4.6.

IBM Cloud Pak for Integration Operator does not install Aspera

The Aspera operator is not installed as part of the Cloud Pak for Integration when using the top level operator. To work around this on OCP 4.6, install the Aspera operator manually. Note the above limitation that the Aspera operator is not supported on OCP 4.7.

Navigating to IBM Automation or integration links gives HTTP 404

After installation, when attempting to navigate to the IBM Automation homepage, the integration cards display 404 not found. The same error is displayed if the user attempts to navigate to any of the integration links in the card.

To resolve this problem, a cluster administrator will need to either:

  • delete the ibm-nginx pods (in the namespace where the Platform Navigator is installed) via the UI
  • or run the following command to remove the nginx pods:
    oc delete pods -l component=ibm-nginx -n <installed_namespace>

This can also apply when a cluster has restarted and the pods need to be restarted.

511 Network Authentication Required warning in the Platform Navigator UI after creating a new instance of a product

For installations with IBM Cloud Pak foundational services version 3.7, when creating a new instance of an integration product or installing an operator in a namespace where no operands have been provisioned before, you can run into a 511 - Network Authentication Required error on the Platform Navigator UI. This is due to an authentication pod restarting with new configuration, and it resolves itself after a few minutes - no action needs to be taken.

Accessing Platform Navigator results in HTTP 504 and pod restarting

When observing an HTTP 504 accessing the Platform Navigator and the pods for the Platform Navigator are restarting, this can be the result of JavaScript heap space exceeding the pod memory. To resolve this, increase the container limits for the `services` container by setting the values in the Platform Navigator CR within the spec:

    resources:
      services:
        limits:
          cpu: '1'
          memory: 1.5Gi
        requests:
          cpu: '0.25'
          memory: 1Gi

Unable to upgrade Automation Assets instance to 2021.1.1 via the Platform Navigator Change Version

The Change version option from the integration capabilities page does not allow you to update the license that is required when updating the version of your Automation Assets operand to 2021.1.1. If you run into this error then you need to update the spec.version and spec.license.license values in the CR yaml. This can be done via the Red Hat OpenShift Console, Red Hat OpenShift CLI, or through the edit option of your Automation Assets instance on the integration capabilities page of the Platform Navigator.
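A minimal sketch of the two values to change, assuming the Automation Assets CR layout with spec.version and spec.license.license described above; the license identifier is a placeholder for the ID required by 2021.1.1:

    spec:
      version: 2021.1.1
      license:
        license: <license-id-for-2021.1.1>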

Platform Navigator instance errors with 'Validation Error - Storage class is a required field' even though it was originally created with a storage class

The error message Validation Error - Storage class is a required field is observed when there is a Platform Navigator Operator 5.4.2 installed, and then a previous version of the operator is installed in another namespace in the cluster. Any existing Platform Navigator instances with version 2021.1.1 or later will go into an Error state.

To resolve this problem, a cluster administrator will need to:

  1. Uninstall and reinstall the Platform Navigator 5.4.2 operator in the same scope and namespace as previously installed.
  2. Edit the existing navigator instances with version 2021.1.1 or later and re-add the spec.storage.class section to the Custom Resources (CR):
    spec:
      storage:
        class: <storage-class>

Certificate errors after uninstalling Cloud Pak for Integration and re-installing in a different namespace

After deleting an instance of Cloud Pak for Integration and reinstalling it in another namespace, some components of Cloud Pak for Integration might encounter the following certificate issue:

    Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames

This can be resolved by uninstalling Cloud Pak for Integration from that namespace, ensuring that the secret internal-nginx-svc-tls gets deleted (and deleting it manually if necessary), and then reinstalling Cloud Pak for Integration to the namespace.
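A minimal sketch of checking for the secret and deleting it manually if it is still present; the namespace is a placeholder:

    oc get secret internal-nginx-svc-tls -n <namespace>
    oc delete secret internal-nginx-svc-tls -n <namespace>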

Aspera monitoring dashboard does not appear

When attempting to view the monitoring dashboard for Aspera and a dashboard does not appear, it can be because the Aspera operator did not create the MonitoringDashboard object. To check if this is the case, use the OCP UI and search for MonitoringDashboards with the name of the Aspera instance. To check this using the OCP CLI, run:

                  oc get MonitoringDashboards -n <installed_namespace>                

If there is no object for the Aspera instance, then delete the ibm-aspera-hsts-operator pod. Wait for this pod to recreate and check for the MonitoringDashboard again; it should have created the object for you.
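A minimal sketch of that check-and-restart sequence; the pod name and namespaces are placeholders:

    oc get MonitoringDashboards -n <installed_namespace>
    # If no object exists for the Aspera instance, delete the operator pod so it recreates the dashboard
    oc get pods -n <operator_namespace> | grep ibm-aspera-hsts-operator
    oc delete pod <ibm-aspera-hsts-operator-pod-name> -n <operator_namespace>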

Platform Navigator failing to discover all capabilities and runtimes when installed cluster scoped

When running IBM Cloud Pak for Integration on IBM Cloud, the Platform Navigator can fail to discover all capabilities and runtimes when installed cluster scoped. This is due to a limit on the amount of requests you can make to Kubernetes.

To resolve this issue, modify the Platform Navigator CR to scope a list of namespaces which you wish the Platform Navigator to watch, so it can find all your capabilities and runtimes on the cluster. To do this, add a comma-separated list of namespaces to the spec of the CR:

    template:
      pod:
        containers:
          - env:
              - name: NAMESPACES
                value: 'namespace1,namespace2....'
            name: ibm-cp4i-prod-services

Installing DataPower in an airgap environment sees the ibm-datapower-operator-catalog in ImagePullBackOff state

With Cloud Pak for Integration 2021.1, when installing DataPower using the CASE version 1.3.2, this references the digest for 1.3.0. To resolve this, modify the ibm-datapower-operator-catalog CatalogSource to reference the 1.3.1 digest.

Run the following command to edit the DataPower Catalog source:

    oc -n openshift-marketplace edit catsrc ibm-datapower-operator-catalog

Replace this image digest:

                    sha256:4a3486a3df4ec2568a7fbd95f7615ee992f513d173bfd73de26716554f9e6556                  

with:

                      sha256:45d4d09017bc129a7c6a65050c62abb7c7777367f652f3b209395a64b2637b8b                    

Cloud Pak for Integration 2020.4

Compatibility of IBM Automation Foundation Assets (formerly Asset Repository) with Platform Navigator

IBM Automation Foundation Assets requires a minimum version of Platform Navigator 2020.4.1-1-eus in IBM Cloud Pak for Integration.

Configuring Operations Dashboard 2020.4.1-1-eus for use within Cloud Pak for Integration

The latest images for Operations Dashboard are not automatically used by API Connect in Cloud Pak for Integration. To use the Operations Dashboard 2020.4.1-1-eus, set the openTracing definition in the spec of the CR for the consuming components as:

    openTracing:
      imageAgent: cp/icp4i/od/icp4i-od-agent@sha256:08d87bed7233a0a42eae8e257965c8b120df9d155b25da2a16d468a3423f4a73
      imageCollector: cp/icp4i/od/icp4i-od-collector@sha256:43c66d8898437fa777547f2f93667523a1696402b3f5932de84f1fc2e61f08cf

Support for multi-instance MQ

For Cloud Pak for Integration 2020.4.1, the minimum version of OCP is 4.6.8 to run MQ in a multi-instance deployment.

APIC installed cluster scoped causes memory growth and OCP to become unresponsive

For Cloud Pak for Integration 2020.4.1, when installing APIC cluster-scoped across all namespaces on OCP 4.6.8 or earlier, it is observed that memory growth can cause OCP to become unresponsive. The fix is available in OCP 4.6.9.

To mitigate this on OCP 4.6.8 and earlier, there are two options:

1. increase the memory resources of the master node to at least 32GB (recommended)

2. install at a namespace scope rather than cluster scope

Aspera intermittently fails to install due to Redis failure

For Cloud Pak for Integration 2020.4.1, when installing Aspera the readiness probe for Redis can fail and cause the Aspera install to fail. Please retry the install of Aspera until it succeeds. There is a related Red Hat Bugzilla report tracking this at https://bugzilla.redhat.com/show_bug.cgi?id=1905761

There are 2 available workarounds

  1. Note: Although this is the simplest approach, there can be security considerations to opening the firewall.
    You can set an annotation in the CR to skip the creation of network policies:
    com.ibm.cp4i/skip-network-policies: "true"
    For example:
    kind: IbmAsperaHsts
    metadata:
      name: development
      labels:
        app.kubernetes.io/name: ibm-aspera-hsts-dev
        app.kubernetes.io/instance: ibm-aspera-hsts
        app.kubernetes.io/managed-by: ibm-aspera-hsts-dev
        com.ibm.cp4i/skip-network-policies: "true"
    ...
  2. Note: This is a more complex solution that involves running HSTS with network policies. Since Red Hat OpenShift by default uses SDN, you can already have workloads running on the cluster and the migration can cause disruption.
    Migrate from SDN to the OVN network provider as described here: https://docs.openshift.com/container-platform/4.6/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.html

Upgrading to OCP 4.6 requires fix included in OCP 4.6.8

OCP versions 4.6.6 and 4.6.7 contain the issue https://bugzilla.redhat.com/show_bug.cgi?id=1904582, which prevents upgrade to these versions. The fix is included in OCP 4.6.8, which will enable upgrades from OCP 4.1 or later.

Upgrading DataPower to 1.2 restricted due to topology constraints

When attempting to upgrade the DataPower operator to 1.2, the pods will not start due to a DoNotSchedule in podTopologySpreadConstraints that references the zone label, which is not a default label.

This is mitigated by adding a zone=anything label to any worker node.
Note that multi-zone clusters will upgrade if there is more than one zone, without requiring the above.

DataPower Pod Topology Spread Constraints

DataPower Operator pods fail to schedule, stating that no nodes match pod topology spread constraints (missing required label). For example:

    0/15 nodes are available: 12 node(s) didn't match pod topology spread constraints (missing required label), 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.

There was a Kubernetes bug that allowed pods to schedule when `topologyKey` was not matched. Kubernetes recently fixed this bug in 1.20, and back-ported the fix to 1.18 and 1.19. Once this fix is installed to a Kubernetes cluster, the scheduler will no longer schedule DataPower Operator pods, due to the `topologyKey` in our pod spec not using a well-known "zone" label.

If you wish to work around this issue without upgrading, manually add a `zone` label (with any value) to one of the worker nodes in the Kubernetes cluster. The DataPower Operator pods will then be scheduled to that worker node.

    kubectl label nodes <node-name> zone=<label-value>

Airgap installation requires cloudctl version 3.6.1-2002 or later

For Cloud Pak for Integration 2020.4.1, when installing in an environment without internet access, the version of cloudctl that is required is 3.6.1-2002 or later.

Persistent pod restarts within Platform Navigator when installed cluster scoped

For Cloud Pak for Integration 2020.4.1, the Platform Navigator can see problems when running CP4I cluster scoped with multiple concurrent users. Error messages in the Platform Navigator UI include "Unable to retrieve information about installed operators" or "Unable to get a response from required backend services", and certain areas of the Platform Navigator UI can fail to load. This is caused by the Platform Navigator pods restarting due to failed liveness and readiness checks, which can be confirmed in the Red Hat OpenShift console.

To work around the issue, increase the number of Platform Navigator replicas using the spec.replicas field in the PlatformNavigator CR. If the problem persists, increase the memory limits and requests for each Platform Navigator replica using the spec.resources field in the PlatformNavigator CR. For more information about editing the Platform Navigator CR, see https://www.ibm.com/support/knowledgecenter/en/SSGT7J_20.4/platform_navigator.html
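A minimal sketch of those two fields in the PlatformNavigator CR, mirroring the spec.resources layout shown earlier on this page; the replica count and limits are illustrative values only:

    spec:
      replicas: 3
      resources:
        services:
          limits:
            cpu: '2'
            memory: 2Gi
          requests:
            cpu: '0.5'
            memory: 1Gi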

Links to logging in Platform Navigator do not resolve when OCP console is open but not logged in

For Cloud Pak for Integration 2020.4.1, when a browser has tabs open for both the OCP console and the Platform Navigator and the user is not logged in to OCP, clicking the Logging links will redirect the user to the OCP login page. Once logged in to OCP, the links will resolve.

Uneditable OpenShift UI during upgrade to 2020.4

For Cloud Pak for Integration 2020.4.1, when updating the subscription channel to the EUS version for the operator, the Red Hat OpenShift UI can appear uneditable. When this occurs, refresh the page and the other available channels can be picked up and selected.

Optional field in the deployment of APIC cannot be cleared

For Cloud Pak for Integration 2020.4.1, when deploying APIC, the optional field in the instance creation page cannot be cleared once a value is selected. To resolve this and select a different value, navigate away from the page and return to it, or reopen the page.

Option for Operations Dashboard shows for MQ on zLinux whereas this feature is not available on zLinux

The option for Operations Dashboard shows for MQ on zLinux, whereas this feature is not available on zLinux. If a deployment is attempted with the option selected, the Queue Manager container logs will show the following, which confirms the feature is not enabled:
`Open Tracing is not supported on s390x arch.`
`Open Tracing is disabled`

Monitoring links in the Operations Dashboard contain the wrong base URL

The workaround for this is to substitute the correct monitoring base URL, changing `https://kibana-openshift-logging.apps.mylocation.icp4i.com` to `https://cp-console.apps.mylocation.icp4i.com/grafana`.

API Connect Gateway pod fails to start due to pulling wrong OD images

The issue is that the API Connect Gateway pod fails to pull the images and does not start.

To resolve this, specify the following image hashes in the API Connect CR:

    spec:
      gateway:
        openTracing:
          imageAgent: 'cp.icr.io/cp/icp4i/od/icp4i-od-agent@sha256:f1abc56564c2e49a3608a334b2238c47bff855736e443c0694efa1613dc173d8'
          imageCollector: 'cp.icr.io/cp/icp4i/od/icp4i-od-collector@sha256:25ce2acd5b7fec8b4adf39aee8c281c97c9a4dad40ed96298263a50896e93b90'

Admission webhooks fail to complete on OVNKubernetes

When users attempt to instantiate components of the Cloud Pak, they see Error "admission plugin "ValidatingAdmissionWebhook" failed to complete validation in 13s" for field "undefined".

This is a bug seen on Red Hat OpenShift clusters with the networkType OVNKubernetes where the SDN controller address set is not configured to allow same-pod traffic. This is causing Network Policies to not work as expected. Please see https://bugzilla.redhat.com/show_bug.cgi?id=1903651 for further details on this Red Hat OpenShift issue.

To work around this, you can choose to create a Network Policy that allows all ingress. However, we recommend customers do not use the networkType OVNKubernetes until the above issue is resolved.

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: web-allow-all
      namespace: <namespace>
    spec:
      podSelector: {}
      ingress:
      - {}

OpenShift Container Platform (OCP) version 4.6.56 is not compatible with Operations Dashboard 2020.4.1-8-eus

Due to changes in behaviour of Red Hat OpenShift Container Platform (OCP) in version 4.6.56, Operations Dashboard must be upgraded to 2020.4.1-9-eus or later.


Cloud Pak for Integration 2020.3 and earlier

ACE Designer stuck in pending - OCP 4.5, Cloud Pak for Integration 2020.3

When creating an instance of ACE Designer, it can get stuck in the pending phase.

This is caused by the ACE Designer UI pod, which has failed with the status CrashLoopBackoff.
Inspecting the events of the pod should show this error from either the Liveness probe or Readiness probe:

    Liveness probe failed: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    curl: (7) Failed to connect to localhost port 3000: Connection refused

To resolve this, add the following lines to the spec of the YAML for the instance of ACE Designer, then restart the pod:

    pod:
      containers:
        ui:
          resources:
            requests:
              cpu: 400m
              memory: 400M
            limits:
              cpu: 400m
              memory: 400M

Deploying IBM Cloud Pak for Integration on IBM Cloud using the IBM software catalog and gid storage

For existing clusters on IBM Cloud where IBM Cloud Pak for Integration will be deployed, there is a known limitation on the IBM Cloud provided storage. This has been resolved for newly created clusters from 8th Jan 2021. For clusters created prior to that date, follow the steps below.

When deploying IBM Cloud Pak for Integration on IBM Cloud using the IBM software catalog and ibmc-file-*-gid storage classes (where * is bronze, silver, or gold), the associated PVC created using this storage class currently stays in Pending state. The observed PVC state would be similar to:


From the command line this is observed as:

    NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS         AGE
    test   Pending                                      ibmc-file-gold-gid   39s

With an error similar to the following in the Events section of the PVC:

    failed: Unable to update GID: storageId: 12345, Error: Mounting of NFS path failed: Mount failed: exit status 32 Output: mount.nfs: access denied by server while mounting fsf-dal1003a-fz.adn.networklayer.com:/IBM02SEV1920919_256/data01

This is due to a known limitation on the IBM Cloud provided storage. The recommended mitigation for this is to ensure the cluster is running Red Hat OpenShift version 4.5.24_1525_openshift or above.

Installation of the IBM Cloud Pak for Integration Operator on OCP 4.5.1 to 4.5.15

When installing Cloud Pak for Integration using the top level operator and the 'unknown' state is shown during install, the cause can be the dependency resolution. To resolve this, install the Platform Navigator operator.

This is caused by a bug with OCP that has been resolved in OCP 4.5.16. To resolve this issue, the options are:

- Upgrade to OCP 4.5.16

- Install the individual operators of the Cloud Pak one at a time.

Installing the Cloud Pak one operator at a time:
To work around the OLM dependency bugs for airgap (offline) installations of the Cloud Pak, install all operators manually, one operator at a time, in the order listed below.

For each item in the list of operators below:

  1. Search for the specified operator on Operator Hub on the OCP UI.
  2. Open the desired operator and click Install.
  3. Select the scope (all namespaces or a specific namespace) you would like to install the Cloud Pak in. The scope must be the same for every single operator.
  4. Click Subscribe.

The operators must be installed in this order if installing the whole Cloud Pak:

  1. IBM Common Service Operator
  2. Operator for Apache CouchDB
  3. IBM Operator for Redis
  4. IBM Aspera HSTS
  5. IBM DataPower Gateway
  6. IBM App Connect
  7. IBM Event Streams
  8. IBM MQ
  9. IBM Cloud Pak for Integration Operations Dashboard
  10. IBM Cloud Pak for Integration Asset Repository
  11. IBM Cloud Pak for Integration Platform Navigator
  12. IBM API Connect
  13. IBM Cloud Pak for Integration

To install via the CLI, use the subscriptions.yaml for each operator in the order above.

Installation of the IBM Cloud Pak for Integration Operator on 4.4.10 or lower

There is a bug with Red Hat OpenShift Container Platform 4.4 preventing users from installing the above operator. Upgrade to OCP 4.4.11 to be able to install the operator successfully.

If you cannot upgrade to OCP 4.4.11, you can instead first install the IBM API Connect and IBM Event Streams operators from Operator Hub. After these 2 operators are installed, users can install the IBM Cloud Pak for Integration operator in the same operator group, and it will install the rest of the components of the Cloud Pak.

If the workaround is not performed, installing the operator will result in the operator staying on the Upgrade Pending stage. The catalog-operator pod in the openshift-operator-lifecycle-manager namespace will print an error message similar to:

    failed: failed to update installplan bundle lookups: rpc error: code = ResourceExhausted desc = trying to send message larger than max

Installation of the IBM Cloud Pak for Integration Operator on OCP 4.4.8 to 4.4.12

Do not install the IBM Cloud Pak for Integration operator on OCP versions between 4.4.8 and 4.4.12.

OCP introduced a regression where the default channels are not selected when a dependency is installed by the Operator Lifecycle Manager. This results in the wrong version of some operators being installed by default. Operators will install, but with the incorrect channels.

Symptoms:

  • IBM Common Service operator not on the stable-v1 channel and pods in the ibm-common-services namespace not working for hours after installation.
  • Component operators not able to reconcile newer versions of Custom Resources.

Workaround:

Users must install the Cloud Pak operators one at a time on the correct channels. If installing operators in single-namespace mode, this process must be repeated in each namespace.

See the section on this page titled Installing the Cloud Pak one operator at a time. For each operator, select the default channel on the Red Hat OpenShift console. For example, for the IBM Common Service operator it is stable-v1 and for the Platform Navigator it is v4.0.

If users accidentally install operators in the wrong namespace, uninstall all Cloud Pak operators and re-start the installation process. Make sure the IBM Common Service and ODLM operators are fully uninstalled. See: https://www.ibm.com/support/knowledgecenter/en/SSHKN6/installer/3.x.x/uninstallation.html

Persistent pod restarts within Platform Navigator when installed cluster scoped

Under serious load (i.e. multiple operators installed, each with 1 operand provisioned), the cluster might not have sufficient memory to handle the load of multiple users (i.e. more than 4 concurrent users) using the same globally/cluster scope installed Platform Navigator. Therefore, a user might experience error messages within the PN UI which refer to failing to retrieve some resources from the backend server.

PN pods might experience several restarts within a short timeframe, because of insufficient memory for the amount of load experienced (we've observed that 1 CPU and 512MB of memory is not enough in such a scenario). Moreover, this can result in several liveness and readiness failed events for the pods experiencing this high number of restarts. To check that, go to Pods > Events and see if there are any failed liveness/readiness checks that correspond to the timestamps when the pods have restarted.

The solution is therefore to increase the resource limits on this cluster to 1 CPU and 1Gi memory, and the PN pods should not repetitively restart.

Operations Dashboard readiness

Operations Dashboard will appear as ready in the navigator UI before it has finished creating, and the Operations Dashboard UI might not respond at this time. If you encounter this, please wait a few minutes and try accessing the UI again.

Upgrading Operations Dashboard

If you have the Operations Dashboard installed prior to doing the Common Services 3.4.1 upgrade, you will need to patch the Operations Dashboard secret in order to view the dashboard after the upgrade is complete.
You will need to open a terminal window, log in to the cluster as a cluster administrator, and run the following commands:

    // Assign new cert to env var
    export CERT=`oc get secret cs-ca-certificate-secret -n ibm-common-services -o jsonpath='{.data.ca\.crt}'`
    // Patch the OD secret
    oc patch secret icp4i-od-oidc-secret -p '{"data": {"icp_ca_chain.crt": "'${CERT}'"}}' -n <OD_NAMESPACE>
    // Restart the pod
    oc delete pod <OD_RELEASE_NAME>-ibm-icp4i-tracing-prod-0 -n <OD_NAMESPACE>

Once the pod and all nine containers are up and running, you will be able to view the Operations Dashboard again.

Operations Dashboard for MQ in airgapped environments

When using the Operations Dashboard and MQ in an airgapped environment, the images for the Operations Dashboard agent and collector are not automatically mirrored with the other images. Users will need to manually mirror two images to their airgapped registry. The images that need to be mirrored are:

Using Helm to instantiate App Connect Integration Servers

There is a known limitation that prevents App Connect integration servers from being installed via the App Connect Dashboard using Helm. This only applies to instances of the ACE Dashboard that have been installed using Helm, not the latest version of App Connect that uses an operator. It will occur if the "Helm address" field hasn't been changed during the install of the App Connect Dashboard. In this case, the user will see a 504 error when trying to create an integration server. In order for the install to work, you will need to upgrade your Helm release of the App Connect Dashboard to include the following line: helmAddress: 'tiller-deploy.ibm-common-services:44134'
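A hedged sketch of such an upgrade, assuming a Helm 2 release of the ACE Dashboard chart and that helmAddress is a top-level chart value; the release and chart names are placeholders, and --tls is only needed if your Tiller requires it:

    helm upgrade <ace-dashboard-release> <ace-dashboard-chart> \
      --reuse-values \
      --set helmAddress='tiller-deploy.ibm-common-services:44134' \
      --tls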

APIC fails to install on a cluster when the release name and project name combined are too long

On APIC, long instance names and project names are not supported. If the instance name and project name lengths combined are greater than 14, the following error is reported: CR name + namespace must be less than 15 characters in length.

APIC fails to install on a cluster when the hostname is likewise long

APIC fails to install on a cluster when the hostname is too long. To work around this, run: oc edit cm <release-name>-a7s-mtls-gw

Then modify: server_names_hash_bucket_size 128
to: server_names_hash_bucket_size 256
Find the pod that is in a CrashLoopBackoff state by running: oc get pods
Then find the one with your release name at the start that has the state CrashLoopBackoff. Delete that pod by running: oc delete pod <pod-name>
Then wait for the pod to restart.
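The same steps collected as a sketch; the release and pod names are placeholders:

    # Edit the ConfigMap and change server_names_hash_bucket_size from 128 to 256
    oc edit cm <release-name>-a7s-mtls-gw
    # Find the pod from this release that is stuck in CrashLoopBackoff
    oc get pods
    # Delete it so it restarts with the updated configuration
    oc delete pod <pod-name>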

APIC fails to install on a cluster due to invalid memory address


APIC fails to deploy on a cluster and throws the following exception due to a missing storage class annotation:

                invalid memory address or nil pointer dereference              

The default storage class annotation of ROKS is:
storageclass.beta.kubernetes.io/is-default-class: 'true'

This problem is solved by adding the annotation to the default storageclass using the command below:

    oc patch storageclass rook-ceph-block -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

Common Services uninstallation not successful


After you uninstall IBM Cloud Platform Common Services from your cluster, some common services components remain in the cluster. To resolve this, see https://www.ibm.com/support/knowledgecenter/SSHKN6/installer/3.x.x/troubleshoot/uninstall_ts1.html

Common Services OperandRequests shows Pending status

If you see the common services are correctly installed, but the OperandRequests show a pending status, see the resolution at https://www.ibm.com/support/knowledgecenter/SSHKN6/installer/3.x.x/troubleshoot/or_pending.html

Common Services Operator continuously shows Pending status during installation

On Red Hat OpenShift Container Platform version 4.4.10, when you are attempting to install or reinstall services, the Identity and Access Management (IAM) service operator (operands) is not installed. Instead, the service operator continuously shows a Pending status. To resolve this, see https://www.ibm.com/support/knowledgecenter/SSHKN6/installer/3.x.x/troubleshoot/op_pending.html

Operator shows UpgradePending status - OLM known issue

In Red Hat OpenShift Container Platform version 4.4, when you reinstall common services, the service operators (operands) are not installed. The service operators continuously show UpgradePending status. To resolve this, see https://www.ibm.com/support/knowledgecenter/SSHKN6/installer/3.x.x/troubleshoot/op_hang.html

Common Services Operator fails to initialize resources


The IBM Common Service Operator does not create any resources, including the Operand Deployment Lifecycle Manager subscription. The IBM Common Service Operator log shows the following information:

    E0618 13:13:29.959214       1 main.go:136] InitResources failed: unable to retrieve the complete list of server APIs: webhook.certmanager.k8s.io/v1beta1: the server is currently unable to handle the request

To resolve this, see https://www.ibm.com/support/knowledgecenter/SSHKN6/installer/3.x.x/troubleshoot/fail_resource.html

Compatibility between ACE Designer 2020.2.1.1 and Asset repository 2020.2.1


The issue is seen when interacting with ACE Designer 2020.2.1.1 and the Asset repository 2020.2.1. When interacting with ACE Designer to import a flow from the Asset repository or export a flow from ACE Designer into the Asset repository, the user will be presented with a loading spinner, which will be dismissed and no Asset repository user interface will appear.

To resolve this, upgrade the Asset repository to 2020.2.1.1.

Custom resource names causing errors


Bugs have been observed in navigator versions 2020.2.1-0 and 2020.2.1.1-0 when creating multiple integration instances with the same custom resource name. These include errors when importing App Connect artefacts to the asset repository, and errors in the instance status reported by the navigator. To avoid this, when using these versions of the navigator, ensure that all of your integration instances have unique custom resource names.

Namespace scoped operators not finding their cluster roles on OCP 4.4


There is a bug on Red Hat OpenShift Container Platform 4.4 where operators installed namespace-scoped will get their cluster-role overwritten. This issue has been seen to manifest when:

  • Cluster administrators install a second instance of the same operator in a different namespace.
  • The cluster is rebooted.

Note that this is resolved on OCP 4.5.

For the Platform Navigator Operator on Red Hat OpenShift 4.4, the pod sometimes fails with Init:CrashLoopBackoff.
Inspecting the pod for the init container logs returns:

    error: error executing jsonpath "{.items[0].metadata.name}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template:
        template was:
            {.items[0].metadata.name}
        object given to jsonpath engine was:
            map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":"", "selfLink":""}}

To debug:

  1. Log in to the Red Hat OpenShift cluster CLI as a Cluster Administrator.
  2. Run oc get pods -n [OPERATOR_NAMESPACE].
  3. Note down the pod name for the Platform Navigator Operator.
  4. Run oc logs [OPERATOR_POD_NAME] -n [OPERATOR_NAMESPACE] -c ibm-integration-platform-navigator-operator-init.
  5. Expect the logs to contain the error highlighted above.

To fix:

  1. Uninstall the Operator.
  2. Reinstall the Operator.
  3. This results in the Operator ClusterRole being recreated, which fixes the above error of being unable to find the ClusterRole.

Pushing an API to APIC from ACE does not add Security Definition

When pushing to APIC using the 'Share REST APIs' functionality of the ACE Dashboard, sometimes a security definition is not provided.

To fix this, users should create their own Security Definition using the UI, with these details (a sketch of the equivalent OpenAPI source follows the list):

  • Type is set to API Key
  • Located In is set to Header
  • Key Type is set to Client ID
  • Parameter Name is set to X-IBM-Client-Id
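For reference, roughly the same details expressed in the OpenAPI (Swagger 2.0) source of the API; the definition name clientIdHeader is a placeholder, and the Key Type (Client ID) setting is selected in the API Connect UI rather than shown here:

    securityDefinitions:
      clientIdHeader:
        type: apiKey
        in: header
        name: X-IBM-Client-Id
    security:
      - clientIdHeader: []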

Pods stuck in ContainerCreating state across 1 or multiple namespaces

For higher versions of OCP, perform the following steps:

1. List all of the failing pods across all namespaces: "oc get pods --all-namespaces -owide --no-headers | grep ContainerCreating"

2. Cordon the node on which they are running: "oc adm cordon <node_from_previous_command>"

3. Drain the node on which they are running: "oc adm drain <same_node_as_above> --delete-local-data --ignore-daemonsets --force"

4. Log in to the node: "oc debug <node>"

5. Run "chroot /host && rm -rf /var/lib/cni/networks/openshift-sdn/* && exit && exit"

6. Uncordon the node on which they are running: "oc adm uncordon <node_from_previous_command>"

Upgrading the Operations Dashboard (OD) from version 2020.2.1 to 2020.3.1

There is no upgrade path between versions 2020.2.1 and 2020.3.1 of OD. To migrate versions, a full uninstall and reinstall is required. Furthermore, there are CRD compatibility issues that mean the CRD needs to be recreated when uninstalling and reinstalling OD as part of this migration.

OD version 2020.2.1 uses the v1 operator, whereas 2020.3.1 uses the v2 operator. All the resources of the v1 operator need to be cleaned up before users migrate from the 2020.2.1 version to the 2020.3.1 version.

In order to delete OD completely, you must complete the following steps:

  1. Delete all instances of OD with:
                        oc delete OperationsDashboard --all                  
  2. (Optional) Delete all PVCs related to OD. List the PVCs in a given namespace and delete the OD ones with:
        oc get pvc
        oc delete pvc <pvc_name_1> <pvc_name_2>
  3. Delete the operator subscription (i.e. uninstall the operator) with:
                        oc delete subscription ibm-integration-operations-dashboard-subscription                  
  4. Delete the ClusterServiceVersion:
                        oc delete clusterserviceversion ibm-integration-operations-dashboard.v1.0.0                  
  5. Delete the CRD:
                        oc delete crd operationsdashboards.integration.ibm.com                  

You can now reinstall OD using the operator on the v2 channel.

[{"Business Unit":{"code":"BU059","label":"IBM Software westward\/o TPS"},"Product":{"code":"SS8QTD","label":"IBM Cloud Pak for Integration"},"ARM Category":[],"Platform":[{"code":"PF025","label":"Platform Contained"}],"Version":"All Version(s)","Line of Business":{"code":"LOB45","characterization":"Automation"}}]


Source: https://www.ibm.com/support/pages/ibm-cloud-pak-integration-known-limitations
