
FAQs


Specific

How does Group Expansion Work?

  1. Download all ACLs.
  2. Fetch $groups, the set of groups $user is a member of.
  3. For each ACL:
    1. Check whether $user is the file owner and dm_owner has at least the BROWSE permit.

      If so, return: user has read access.

    2. Check whether $user belongs to all groups listed in "required groups".

      If not, return: user has no read access.

    3. Check whether at least one group from $groups is in the "required group set":

      If not, return: user has no read access.

    4. For each deny entry:

      If the entry matches $user or any group in $groups, return: user has no read access.

    5. For each allow entry:

      If the entry matches $user or any group in $groups, return: user has read access.

    6. If the user still has read access at this point and dm_world has at least the BROWSE permit, return: user has read access.

Note:

  • If the user has read access, the ACL ID is added as a group name.
  • If the user is the owner and dm_owner has the BROWSE permit or higher, the user is added as a group name.
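The steps above can be sketched in Python as follows. This is an illustrative sketch only, not the connector's actual implementation: the ACL field names, permit encoding (BROWSE assumed to be the lowest non-zero level), and the treatment of an empty "required group set" are all assumptions.

```python
def has_read_access(user, groups, acl):
    """Evaluate one ACL per the group-expansion steps above (sketch).

    `acl` is assumed to be a dict with keys: owner, owner_permit,
    required_groups, required_group_set, deny, allow, world_permit.
    Permit levels are assumed to be ordered integers, with BROWSE = 1.
    """
    BROWSE = 1

    # Step 1: the owner with at least BROWSE gets read access.
    if user == acl["owner"] and acl["owner_permit"] >= BROWSE:
        return True

    # Step 2: the user must belong to ALL "required groups".
    if not set(acl["required_groups"]).issubset(groups):
        return False

    # Step 3: at least one of the user's groups must be in the
    # "required group set" (assumed to apply only when non-empty).
    if acl["required_group_set"] and not set(acl["required_group_set"]) & set(groups):
        return False

    principals = {user} | set(groups)

    # Step 4: any matching deny entry removes access.
    if principals & set(acl["deny"]):
        return False

    # Step 5: any matching allow entry grants access.
    if principals & set(acl["allow"]):
        return True

    # Step 6: otherwise fall back to dm_world's permit.
    return acl["world_permit"] >= BROWSE
```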

Specific Documentum document-level security examples can be found here

General 

Why does an incremental crawl last as long as a full crawl?

Some connectors perform incremental crawls based on snapshot entries, which record exactly which documents the connector has indexed into the search engine. During an incremental crawl, the connector traverses the repository the same way as a full crawl, but it only indexes the documents that are new, modified, or deleted since the previous crawl.
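A snapshot-based incremental crawl can be sketched like this. The snapshot shape (document id mapped to a last-modified timestamp) and the action names are assumptions for illustration, not Aspire's actual snapshot format:

```python
def incremental_crawl(repository_items, snapshot):
    """Traverse every item (like a full crawl) but emit only changes.

    `repository_items` maps document id -> current last-modified value;
    `snapshot` maps document id -> value recorded on the previous crawl.
    Returns (actions, new_snapshot) where each action is a
    ("add" | "update" | "delete", doc_id) tuple.
    """
    actions = []
    new_snapshot = {}
    # The full traversal is why an incremental crawl can take as long
    # as a full crawl, even though far fewer documents get indexed.
    for doc_id, modified in repository_items.items():
        new_snapshot[doc_id] = modified
        if doc_id not in snapshot:
            actions.append(("add", doc_id))
        elif snapshot[doc_id] != modified:
            actions.append(("update", doc_id))
        # unchanged documents are visited but not re-indexed
    for doc_id in snapshot:
        if doc_id not in repository_items:
            actions.append(("delete", doc_id))
    return actions, new_snapshot
```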

For a discussion on crawling, see Full & Incremental Crawls.


Save your content source before creating or editing another one

Failing to save a content source before creating or editing another content source can result in an error.

ERROR [aspire]: Exception received attempting to get execute component command com.searchtechnologies.aspire.services.AspireException: Unable to find content source

Save the initial content source before creating or working on another.


My connector keeps the same status "Running" and is not doing anything

After a crawl has finished, the connector status may not be updated correctly.  

To confirm this, do the following:

1. In RoboMongo, go to your connector database (e.g., aspire-nameOfYourConnector).

2. Open the "Status" collection and perform the following query:

db.getCollection('status').find({}).limit(1).sort({$natural:-1})


3. Edit the returned entry and set its status to "S" (Completed).


Note:  To see the full options of "Status" values, see MongoDB Collection Status.
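The fix above can also be read as "take the most recently inserted status document and set its status field to S". The sketch below simulates that on plain dictionaries (the collection shape mirrors the query above, but this is not the connector's code):

```python
def reset_stuck_status(status_collection):
    """Set the most recent status entry to "S" (Completed).

    `status_collection` is a list of status documents in insertion
    order, mimicking db.getCollection('status') ordered by $natural.
    """
    if not status_collection:
        return None
    # Equivalent of .sort({$natural: -1}).limit(1): last inserted doc.
    latest = status_collection[-1]
    latest["status"] = "S"
    return latest
```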


My connector is not providing group expansion results

Make sure your connector has a manual scheduler configured for Group Expansion.


1. Go to the Aspire debug console, and look for the respective scheduler (in the fourth table: Aspire Application Scheduler).

2. If you are unsure which scheduler is for Group Expansion, you can check the Schedule Detail.

    • You can identify it with the value: cacheGroups

3. To run the Group Expansion process, click Run.


Troubleshooting


The connector fails with a webtop docbroker error

Example exception is: [DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker "DocumentumServer:docBrokerPort" failed; ERRORCODE: ff; NEXT: null

This error occurs when Aspire cannot connect to the Documentum repository, typically because one or more of the required services are not running. To fix this, make sure the following four services are running on the server that hosts Documentum:

  • Documentum Docbase Service
  • Documentum Docbroker Service
  • Documentum Java Method Server
  • Documentum Master Service


If any are stopped, restart them from the Windows Services panel (open Control Panel, search for "Services", then click View local services). Select each service and click Start the Service, then try starting the connector again.

ERROR com.documentum.fc.common.impl.preferences.PreferencesManager - [DFC_PREFERENCE_LOAD_FAILED] Failed to load persistent preferences from dfc.properties java.io.FileNotFoundException: dfc.properties (The system cannot find the path specified)

This error occurs when the dfc.properties file is not at the path specified in the DFC Properties File field.

ERROR com.documentum.fc.common.DfPreferences - [DFC_PREFERENCE_BAD_VALUE] Bad value for preference "dfc.security.keystore.file", value="dfc.keystore" DfAttributeValueException:: MSG: [DFC_OBJECT_BADATTRVALUE] Directory doesn't exist; ERRORCODE: ff; NEXT: null

This error occurs when the dfc.keystore file is not at the path specified in the dfc.properties file. To specify paths in dfc.properties on a Windows system, use this format: C\:/folder/folder/dfc.keystore.
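For reference, a minimal dfc.properties fragment using the Windows path format described above might look like this (the hostname, port, and paths are illustrative values, not defaults you should copy verbatim):

```properties
# Illustrative dfc.properties fragment -- adjust values for your environment
dfc.docbroker.host[0]=DocumentumServer
dfc.docbroker.port[0]=1489
# Note the escaped colon and forward slashes in Windows paths
dfc.security.keystore.file=C\:/Documentum/config/dfc.keystore
```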

ERROR com.documentum.fc.client.security.impl.IdentityManager - [DFC_SECURITY_IDENTITY_INIT] no identity initialization or incomplete identity initialization java.lang.SecurityException: Crypto-J is disabled, a FIPS 140 required self-integrity check failed.

This error is related to the dfc.keystore and occurs when Aspire cannot locate or load the dfc.properties file. Make sure:

  • The path entered in the configuration is correct, e.g., C:/Documentum/config/dfc.properties.
  • The dfc.keystore file is in the same path.
  • The path specified for the dfc.keystore inside dfc.properties is correct (on a Windows system, use this format: C\:/folder/folder/dfc.keystore).

[DM_SESSION_E_CLIENT_AUTHENTICATION_FAILURE] error: "Could not authenticate the client installation for reason: Client hostname in authentication string doesn't match value in registry"

This error occurs when the dfc.keystore does not match the user being used to authenticate against the Documentum server. Follow the usageInstructions.txt in File:Bug147071 pre60SP1 engr fix.zip to recreate the dfc.keystore.

"Unable to fetch object from Documentum repository followed" by "java.lang.NullPointerException: docbaseSpecString"

This means the Documentum URL you entered is malformed. Make sure it follows the correct format: dctm://server:port/docbase/cabinet/folder.
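A quick way to sanity-check the URL before configuring the connector is a small validator like the one below. This is a hypothetical helper, not part of the connector; it only checks the dctm://server:port/docbase/... shape:

```python
from urllib.parse import urlparse

def check_dctm_url(url):
    """Return True if `url` looks like dctm://server:port/docbase/..."""
    parsed = urlparse(url)
    # Scheme must be dctm, and server:port must both be present.
    if parsed.scheme != "dctm":
        return False
    if not parsed.hostname or not parsed.port:
        return False
    # The path must contain at least the docbase segment.
    segments = [s for s in parsed.path.split("/") if s]
    return len(segments) >= 1
```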

The crawl fails with the "Stream handler unavailable due to: null" error

The following exception is thrown in the Aspire Console but not in the UI:

2016-10-19T21:36:48Z ERROR [/Documentum/ScanPipelineManager/Scan]: Error scanning java.lang.IllegalStateException: Stream handler unavailable due to: null
at org.apache.felix.framework.URLHandlersStreamHandlerProxy.openConnection(URLHandlersStreamHandlerProxy.java:311)
at java.net.URL.openConnection(Unknown Source)
...

If this happens, upgrade felix.jar to the latest version. Click here to download the latest version.