"Driver seems unresponsive" warning when adding container to NetIQ Cloud Access

  • 7014222
  • 05-Dec-2013
  • 05-Dec-2013


NetIQ Cloud Access 1.5
Active Directory User store
Containers in user store include more than 10,000 users


NetIQ Cloud Access (NCA) was set up and working correctly against a test Active Directory LDAP server. Users could authenticate to the NCA box, were provisioned through the defined GoogleApps connector, and could single sign-on to the back-end GoogleApps SAML service provider without issues.

As part of the move to a production environment, the user store was changed from the test user store to the production Active Directory LDAP server. Adding the first container where users were located worked fine. Adding the second container caused the health check to report green for about a minute and then a yellow warning; this alternation between green and yellow continued for hours. The warning shows:

 "Driver seems unresponsive" | "Provisioning" | "bis_AD_a4uLn" | "Driver seems unresponsive"


This is normal when adding containers that include large volumes of users. In the above case, there were about 20,000 users in the container, and provisioning that many users can take many hours. During this time, the above warning may be displayed regularly.

An enhancement has been added to warn customers that this may happen when adding containers with large numbers of users.

Additional Information

If this error is seen before the users are fully provisioned, one possible cause is that the LDAP user being used does not have the correct rights/access to provision the users. That LDAP user (CN=Cloud Access) may not be situated correctly above the different OUs from which users are being provisioned, so moving this user and retrying may yield different results.
Generally, if this error is seen after the users have been fully provisioned, it is caused by the query the connector makes for changes to AD (the driver's polling) taking longer than the health check interval. This is most likely due to the number of users (~20,000).
Because 'bis_AD' is an LDAP driver set to polling mode, it queries for all the users and then builds a hashmap of them. At one-minute intervals it polls and re-checks the users for any changes. With that many users, the health job runs during the polling cycle and the health gets set to red, even though the driver is functioning.
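
The polling behavior described above can be illustrated with a minimal sketch. This is not the actual driver code; the function names (snapshot, diff_users) and data shapes are hypothetical, and it only shows the general technique: build a map of users to attribute hashes on each poll, then diff it against the previous snapshot to detect changes. With 20,000 users, each such cycle can take long enough to overlap the health check.

```python
# Illustrative sketch (assumed names, not the bis_AD driver's real code):
# a polling-mode change detector that rebuilds a user hashmap each cycle.

def snapshot(users):
    """Build a map of DN -> hash of attributes (the driver's 'hashmap')."""
    return {dn: hash(frozenset(attrs.items())) for dn, attrs in users.items()}

def diff_users(previous, current):
    """Compare two snapshots; return DNs added, removed, or changed."""
    added = [dn for dn in current if dn not in previous]
    removed = [dn for dn in previous if dn not in current]
    changed = [dn for dn in current
               if dn in previous and current[dn] != previous[dn]]
    return added, removed, changed

# Example: the second poll sees one new user and one modified user.
poll1 = snapshot({"cn=alice": {"mail": "a@example.com"},
                  "cn=bob": {"mail": "b@example.com"}})
poll2 = snapshot({"cn=alice": {"mail": "a2@example.com"},
                  "cn=bob": {"mail": "b@example.com"},
                  "cn=carol": {"mail": "c@example.com"}})
added, removed, changed = diff_users(poll1, poll2)
print(added, removed, changed)
```

The cost that matters here is that every poll touches every user: both the snapshot and the diff are proportional to the container size, which is why a one-minute polling interval over ~20,000 users can collide with the health check.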