Documentum Server 16.4 Fundamentals Guide
Preface
Chapter 1 Overview
  Managed content
  Elements of the content management system
    Check out / check in
    Versioning
    Virtual documents
    Full text indexing
    Security
      Repository security
      Accountability
  Process management features
    Workflows
    Lifecycles
  Distributed services
  Additional options
    Trusted Content Services
    Content Services for EMC Centera
    Content Storage Services
    XML Store and XQuery
  Documentum products requiring activation on Documentum Server
    Retention Policy Services
    Documentum Collaborative Services
  Internationalization
  Communicating with Documentum Server
    Applications
    Interactive utilities
    Metadata
    Client communications with Documentum Server
  Constraints
  Configuration requirements for internationalization
    Values set during installation
      The server config object
    Values set during sessions
      The client config object
      The session config object
      How values are set
  Where ASCII must be used
  Other Requirements
    User names, email addresses, and group names
    Lifecycles
    Docbasic
    Federations
    Object replication
    Other cross-repository operations
Preface
This guide describes the fundamental features and behaviors of Documentum Server. It provides an
overview of the server and then discusses the basic features of the server in detail.
IMPORTANT
• Documentum Content Server is now OpenText Documentum Server. OpenText Documentum
Server will be called Documentum Server throughout this guide.
Intended Audience
This guide is written for system and repository administrators, application programmers, and any
other user who wishes to obtain a basic understanding of the services and behavior of Documentum
Server. The guide assumes the reader has an understanding of relational databases, object-oriented
programming, and SQL (Structured Query Language).
Revision History

June 2019: Updated the sections "Virtual documents and retention policies" (page 146) and "Constraints" (page 222).
May 2018: Removed information related to the Asset Management and Planning (AMP) tool.
April 2018: Updated for supported platforms.
February 2018: Initial publication.
Chapter 1
Overview
This chapter provides an introduction to content management and the features of Documentum
Server.
Managed content
Content, in a broad sense, is information stored as computer data files. It can include
word-processing, spreadsheet, graphics, video, and audio files.
Most content is stored locally on personal computers, organized arbitrarily, and only available to a
single user. This means that valuable data is subject to loss, and projects are subject to delay when
people cannot get the information they need.
The best way to protect these important assets is to move them to a centralized content management
system.
Everything in a repository is stored as an object. The content file associated with an object is typically
stored in a file system. An object has associated metadata (for example, a file name, storage location,
creation date, and much more). The metadata for each object is stored as a record in a relational
database.
Chapter 4, The Data Model, provides a detailed description of the repository data model.
A data dictionary describes each of the object types in the Documentum system. You can create
custom applications that query this information to automate processes and enforce business rules.
The data dictionary, page 60, gives more detail on what information is available and how it might
be used in your Documentum implementation.
Documentum Server provides the connection to the outside world. When content is added to the
repository, Documentum Server parses the object metadata, automatically generates additional
information about the object, and puts a copy of the content file into the file store. Once stored as an
object in the repository, there are many ways that users can access and interact with the content.
Content in the repository can be checked out, making it available for edit by one user while
preventing other users from making changes. When the edits are complete, the user checks the
content back in to the repository. The changes are then visible to other users, who can check out
and update the content as needed.
Concurrent access control, page 117, provides more detail on access control features of Documentum
Server.
Versioning
Documentum Server maintains information about each version of a content object as it is checked out
and checked in to the repository. At any time, users can access earlier versions of the content object to
retrieve sections that have been removed or branch to create a new content object.
Versioning, page 109, describes how versions are handled by Documentum Server.
Virtual documents
Virtual documents are a way to link individual content objects into one larger document.
A content object can belong to multiple virtual documents. When you change the individual content
object, the change appears in every virtual document that contains that object.
You can assemble and publish all or part of a virtual document. You can integrate the assembly and
publishing services with popular commercial applications such as Arbortext Editor. Assembly can be
controlled dynamically with business rules and data stored in the repository.
Chapter 8, Virtual Documents, provides a detailed description of virtual documents.
Full text indexing

Documentum Server supports the Documentum xPlore index server, which provides comprehensive
indexing and search capabilities. By default, all property values and indexable content are indexed,
allowing users to search for documents or other objects. The Documentum xPlore documentation set
describes installation, administration, and customization of the xPlore indexing server.
Security
Documentum Server provides security features to control access and automate accountability.
Repository security
Documentum Server uses a security model based on Access Control Lists (ACLs) to protect
repository objects.
In the ACL model, every content object has an associated ACL. The entries in the ACL define
object-level permissions that apply to the object. Object-level permissions are granted to
individual users and to groups. The permissions control which users and groups can access
the object, and what operations they can perform. There are seven levels of base object-level
permissions and five extended object-level permissions.
Chapter 6, Security Services, provides information on all security options. Documentum Server
Administration and Configuration Guide contains information on user administration and working
with ACLs.
Accountability
Documentum Server provides auditing and tracing facilities. Auditing keeps track of specified
operations and stores a record for each in the repository. Tracing provides a record that you can
use to troubleshoot problems when they occur.
Documentum Server also supports electronic signatures. In custom applications, you can require
users to sign off on a document before passing the document to the next activity in a workflow, or
before moving the document forward in its lifecycle. Sign-off information is stored in the repository.
Documentum Server Administration and Configuration Guide contains information on auditing and
tracing facilities.
Process management features

Workflows
The Documentum Server workflow model lets you develop process and event-oriented applications
for content management. The model supports both automatic and ad hoc workflows.
You can define workflows for individual documents, folders containing a group of documents, and
virtual documents. A workflow definition can include simple or complex task sequences, including
sequences with dependencies. Workflow and event notifications are automatically issued through
standard electronic mail systems, while content remains under secure server control. Workflow
definitions are stored in the repository, allowing you to start multiple workflows based on one
workflow definition.
Workflows are created and managed using Documentum Workflow Manager or Process Builder.
Workflow Manager is the standard interface for creating and managing workflows. Process Builder is
a separately licensed product that provides additional, sophisticated workflow features.
Chapter 9, Workflows, describes basic workflow functionality and introduces the additional features
provided by Process Builder. Documentum Process Builder User Guide describes Documentum Process
Builder features in detail and describes how to use Process Builder. The Documentum Server System
Object Reference Guide describes the object types that support workflows.
Lifecycles
Many documents within an enterprise have a recognizable lifecycle. A document is created, often
through a defined process of authoring and review, and then is used and ultimately superseded or
discarded.
Documentum Server lifecycle management services let you automate the stages of a document's life.
The stages in a lifecycle are defined in a policy object stored in the repository. For each stage, you
can define prerequisites to be met and actions to be performed before an object can move into that
particular stage.
Chapter 10, Lifecycles, describes how lifecycles are implemented. Documentum Server System Object
Reference Guide describes the object types that support lifecycles.
Distributed services
A Documentum system installation can have multiple repositories. Documentum Server provides
built-in, automatic support for a variety of configurations. Documentum Platform and Platform
Extensions Installation Guide provides a complete description of the features supporting distributed
services.
Additional options
The features described in this section provide extended and enhanced functionality, and can be
licensed for an additional fee.
Internationalization
Internationalization refers to the ability of Documentum Server to handle communications and data
transfer between itself and various client applications independent of the character encoding they use.
Documentum Server runs internally with the UTF-8 encoding of Unicode. The Unicode standard
provides a unique number to identify every letter, number, symbol, and character in every language.
Documentum Server uses Unicode to:
• Store metadata using non-English characters
• Store metadata in multiple languages
• Manage multilingual web and enterprise content
The Unicode Consortium web site at http://www.unicode.org/ has more information about Unicode,
UTF-8, and national character sets. Chapter 12, Internationalization Summary, contains a summary of
Documentum Server internationalization requirements.
Communicating with Documentum Server

Applications
The Documentum system provides web-based and desktop client applications.
You can also write your own custom applications. Documentum Server supports all the Documentum
Application Programming Interfaces (APIs). The primary API is the Documentum Foundation Classes
(DFC), a set of Java classes and interfaces that provides full access to Documentum Server
features. Applications written in Java, Visual Basic (through OLE COM), C++ (through OLE COM),
and Docbasic can use the DFC. Docbasic is the proprietary programming language used by
Documentum Server.
For ease of development, the Documentum system provides a web-based and a desktop development
environment. You can develop custom applications and deploy them on the web or desktop. You can
also customize components of the Documentum client applications.
Interactive utilities
Documentum Administrator is a web-based tool that lets you perform administrative tasks for
a single installation or distributed enterprise from one location.
The IDQL interactive utility in Documentum Administrator lets you execute DQL statements directly.
The utility is primarily useful as a testing arena for statements that you want to add to an application.
It is also useful when you want to execute a quick ad hoc query against the repository.
Documentum Server Administration and Configuration Guide contains more information about
Documentum Administrator and IDQL.
Chapter 2
Session and Transaction Management
Session Overview
A session is a client connection to a repository. Repository sessions are opened when users or
applications establish a connection to a Documentum Server.
Users or applications can have multiple sessions open at the same time with one or more repositories.
The number of sessions that can be established for a given user or application is controlled by the
dfc.session.max_count entry in the dfc.properties file. The value of this entry is set to 1000 by default
and can be reset.
For a web application, all sessions started by the application are counted towards the maximum.
Note: If the client application is running on a Linux platform, the maximum number of sessions
possible is also limited by the number of file descriptors configured in the Linux kernel.
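The session limit discussed above is adjusted in the dfc.properties file. The key name dfc.session.max_count comes from the text; the value shown here is only an illustration:

```properties
# Maximum number of sessions a single DFC instance may hold open
# (the default is 1000, per the discussion above)
dfc.session.max_count = 1500
```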
Obtaining a session
Typically, sessions are obtained from a session manager. A session manager is an object that
implements the IDfSessionManager interface. Session manager objects are obtained by calling
the newSessionManager method of the IDfClient interface. Obtaining sessions from the session
manager is the recommended way to obtain a session. This is especially true in web applications
because the enhanced resource management features provided by a session manager are most useful
in web applications.
By default, if an attempt to obtain a session fails, DFC automatically tries again. If the second attempt
fails, DFC tries to connect to another server if another is available. If no other server is available, the
client application receives an error message. You can configure the time interval between connection
attempts and the number of retries.
Each session has a session identifier in the format Sn where n is an integer equal to or greater than
zero. This identifier is used in trace file entries, to identify the session to which a particular entry
applies. Session identifiers are not used or accepted in DFC method calls.
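The recommended pattern above can be sketched with the DFC interfaces named in this section (IDfClient, IDfSessionManager). The repository name and credentials are placeholders, and the snippet assumes the DFC libraries are on the classpath; treat it as a sketch rather than a complete application.

```java
import com.documentum.com.DfClientX;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.common.DfLoginInfo;
import com.documentum.fc.common.IDfLoginInfo;

public class SessionExample {
    public static void main(String[] args) throws Exception {
        IDfClient client = new DfClientX().getLocalClient();
        IDfSessionManager sessionManager = client.newSessionManager();

        // Register credentials for the target repository (placeholder values)
        IDfLoginInfo login = new DfLoginInfo();
        login.setUser("jdoe");
        login.setPassword("secret");
        sessionManager.setIdentity("my_repository", login);

        // Obtain a session from the session manager
        IDfSession session = sessionManager.getSession("my_repository");
        try {
            // ... work with the repository ...
        } finally {
            // Release rather than disconnect, so the session can be recycled
            sessionManager.release(session);
        }
    }
}
```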
Session configuration
A session configuration defines some basic features and functionality for the session. For example,
the configuration defines which connection brokers the client can communicate with, the maximum
number of connections the client can establish, and the size of the client cache.
Configuration parameters for client sessions are recorded in the dfc.properties file. This file is
installed with default values for some configuration parameters. Other parameters are optional
and must be explicitly set. Additionally, some parameters are dynamic and may be changed at
runtime if the deployment environment allows. Every client application must be able to access
the dfc.properties file.
The file is polled regularly to check for changes. The default polling interval is 30 seconds. The
interval is configurable by setting a key in the dfc.properties file.
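The polling interval mentioned above is set through a key in dfc.properties. The key name dfc.config.check_interval is an assumption based on common DFC configurations and should be verified against the documentation for your DFC version:

```properties
# How often (in seconds) DFC re-reads dfc.properties for changes
# (assumed key name; the default interval is 30 seconds per the text)
dfc.config.check_interval = 60
```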
When DFC is initialized and a session is started, the information in this file is propagated to the
runtime configuration objects.
Documentum Server Administration and Configuration Guide contains instructions for setting the
connection attempt interval and the number of retries.
There are three runtime (nonpersistent) configuration objects that govern a session:
• client config object
• session config object
• connection config object
The client config object is created when DFC is initialized. The configuration values in this object are
derived primarily from the values recorded in the dfc.properties file. Some of the properties in the
client config object are also reflected in the server config object.
The configuration values are applicable to all sessions started through that DFC instance. The session
config and the connection config objects represent individual sessions with a repository. Each session
has one session config object and one connection config object. These objects are destroyed when the
session is terminated.
Documentum Server System Object Reference Guide lists the properties in the configuration objects.
Concurrent sessions
Concurrent sessions are repository sessions that are open at the same time through one Documentum
Server. The sessions can be for one user or multiple users. By default, a Documentum Server can
have 100 connections open concurrently. The limit is configurable by setting the concurrent_sessions
key in the server.ini file. You can edit this file using Documentum Administrator. Each connection
to a Documentum Server, whether an explicit or implicit connection, counts as one connection.
Documentum Server returns an error if the maximum number of sessions defined in the
concurrent_sessions key is exceeded.
Documentum Server Administration and Configuration Guide provides instructions for setting server.ini
file keys.
Restricted sessions
A restricted session is a repository session opened for a user who connects with an expired operating
system password. The only operation allowed in a restricted session is changing the user password.
Applications can determine whether a session they begin is a restricted session by examining the
value of the computed property _is_restricted_session. This property is T (TRUE) if the session
is a restricted session.
Connection brokers
A connection broker is a name server for the Documentum Server. It provides connection information
for Documentum Servers and application servers, and information about the proximity of network
locations.
When a user or application requests a repository connection, the request goes to a connection broker
identified in the client dfc.properties file. The connection broker returns the connection information
for the repository or particular server identified in the request.
Connection brokers do not request information from Documentum Servers, but rely on the servers to
regularly broadcast their connection information to them. Which connection brokers are sent server
information is configured in the server config object of the server.
Which connection brokers a client can communicate with is configured in the dfc.properties file used
by the client. You can define primary and backup connection brokers in the file. Doing so ensures
that users will rarely encounter a situation in which they cannot obtain a connection to a repository.
An application can also set the connection broker programmatically. This allows the application
to use a connection broker that may not be included in the connection brokers specified in the
dfc.properties file. The application must set the connection broker information before requesting a
connection to a repository.
Documentum Server Administration and Configuration Guide contains information about how servers,
clients, and connection brokers interact. Documentum Foundation Classes Development Guide contains
information about setting a connection broker programmatically.
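Setting a connection broker programmatically might look like the following sketch. It assumes the DFC client config object accepts the dfc.docbroker.host and dfc.docbroker.port keys at runtime; the host name is a placeholder, and 1489 is the conventional connection broker port.

```java
import com.documentum.com.DfClientX;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.common.IDfTypedObject;

public class BrokerExample {
    public static void main(String[] args) throws Exception {
        IDfClient client = new DfClientX().getLocalClient();

        // Append an additional connection broker before requesting a session
        IDfTypedObject clientConfig = client.getClientConfig();
        clientConfig.appendString("dfc.docbroker.host", "broker2.example.com");
        clientConfig.appendInt("dfc.docbroker.port", 1489);

        // Sessions requested after this point may use the added broker
    }
}
```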
To request a secure connection, the client application must have the appropriate value set in the
dfc.properties file or must explicitly request a secure connection when a session is requested. The
security mode requested for the session is defined in the IDfLoginInfo object used by the session
manager to obtain the session.
The security mode requested by the client interacts with the connection type configured for the
server and connection broker to determine whether the session request succeeds and what type
of connection is established.
Documentum Server Administration and Configuration Guide contains information on resetting the
connection default for Documentum Server and how clients request a secure connection. The
interaction between the Documentum Server setting and the client request is described in the
associated Javadocs, in the description of the IDfLoginInfo.setSecurityMode method.
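A client request for a secure session might be sketched as follows, using the IDfLoginInfo.setSecurityMode method referenced above. The mode string "secure" and the credentials are illustrative; consult the Javadocs for the valid mode values in your DFC version.

```java
import com.documentum.com.DfClientX;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.common.DfLoginInfo;
import com.documentum.fc.common.IDfLoginInfo;

public class SecureSessionExample {
    public static void main(String[] args) throws Exception {
        IDfClient client = new DfClientX().getLocalClient();
        IDfSessionManager sessionManager = client.newSessionManager();

        IDfLoginInfo login = new DfLoginInfo();
        login.setUser("jdoe");          // placeholder credentials
        login.setPassword("secret");
        // Request a secure connection; the server and connection broker
        // must also be configured to allow secure connections
        login.setSecurityMode("secure");

        sessionManager.setIdentity("my_repository", login);
        IDfSession session = sessionManager.getSession("my_repository");
        sessionManager.release(session);
    }
}
```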
Connection pooling
Connection pooling is an optional feature that allows an explicit repository session to be recycled
and used by more than one user. Connection pooling is an automatic behavior implemented in DFC
through session managers. It provides performance benefits for applications, especially those that
execute frequent connections and disconnections for multiple users.
Whenever a session is released or disconnected, DFC puts the session into the connection pool. This
pool is divided into two levels. The first level is a homogeneous pool. When a session is in the
homogeneous pool, it can be reused only by the same user. If, after a specified interval, the user has
not reclaimed the session, the session is moved to the heterogeneous pool (level-2 pool). From that
pool, the session can be claimed by any user.
When a session is claimed from the heterogeneous pool by a new user, DFC automatically resets
any security and cache-related information as needed for the new user. DFC also resets the error
message stack and rolls back any open transactions.
To obtain the best performance and resource management from connection pooling, connection
pooling must be enabled through the dfc.properties file. If connection pooling is not enabled through
the dfc.properties file, DFC only uses the homogeneous pool. The session is held in that pool for a
longer period of time, and does not use the heterogeneous pool. If the user does not reclaim the
session from the homogeneous pool, the session is terminated.
Simulating connection pooling at the application level is accomplished using an IDfSession.assume
method. The method lets one user assume ownership of an existing primary repository session.
When connection pooling is simulated using an assume method, the session is not placed into the
connection pool. Instead, ownership of the repository session passes from one user to another by
executing the assume method within the application.
When an assume method is issued, the system authenticates the requested new user. If the user
passes authentication, the system resets the security and cache information for the session as needed.
It also resets the error message stack.
Documentum Server Administration and Configuration Guide contains instructions about enabling and
configuring connection pooling. The associated Javadocs contain details about using an assume
method.
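Simulating connection pooling with an assume method, as described above, might look like this sketch; the credentials are placeholders and error handling is omitted.

```java
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfLoginInfo;
import com.documentum.fc.common.IDfLoginInfo;

public class AssumeExample {
    // Pass ownership of an existing session to another user.
    // The new user is authenticated, and the session's security and
    // cache information are reset, as described above.
    static void handOff(IDfSession session) throws Exception {
        IDfLoginInfo newUser = new DfLoginInfo();
        newUser.setUser("jsmith");      // placeholder for the new owner
        newUser.setPassword("secret2");
        session.assume(newUser);
    }
}
```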
Login tickets
A login ticket is an ASCII-encoded string that an application can use in place of a user password
when connecting to a repository. Login tickets can be used to establish a connection with the current
or a different repository.
Each login ticket has a scope that defines who can use the ticket and how many times the ticket can
be used. By default, login tickets may be used multiple times. However, you can create a ticket
configured for only one use. If a ticket is configured for just one use, the ticket must be used by
the issuing server or another designated server.
Login tickets are generated in a repository session, at runtime, using one of the getLoginTicket
methods from the IDfSession interface.
The ASCII-encoded string consists of two parts: a set of values describing the ticket and a
signature generated from those values. The values describing the ticket include information such as
when the ticket was created, the repository in which it was created, and who created the ticket. The
signature is generated using the login ticket key installed in the repository.
For troubleshooting purposes, DFC supports the IDfClient.getLoginTicketDiagnostics method, which
returns the encoded values in readable text format.
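Generating a ticket at runtime uses one of the getLoginTicket methods mentioned above. The available overloads vary by DFC version; the extended form's parameter list shown below is an assumption and should be checked against the Javadocs.

```java
import com.documentum.fc.client.IDfSession;

public class TicketExample {
    static String makeTickets(IDfSession session) throws Exception {
        // Ticket for the current user, with default scope and validity
        String ticket = session.getLoginTicket();

        // Extended form: user, scope, validity period, single-use flag,
        // and designated server (parameter list is an assumption)
        String globalTicket = session.getLoginTicketEx(
                null,      // current user
                "global",  // accepted by servers of trusting repositories
                300,       // validity period
                false,     // reusable, not single-use
                null);     // no designated server
        return ticket + "\n" + globalTicket;
    }
}
```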
The scope of a login ticket defines which Documentum Servers accept the login ticket. When you
generate a login ticket, you can define its scope as:
• The server that issues the ticket
• A single server other than the issuing server. In this case, the ticket is automatically a single-use
ticket.
• The issuing repository. Any server in the repository accepts the ticket.
• All servers of trusting repositories. Any server of a repository that considers the issuing repository
a trusted repository may accept the ticket.
A login ticket that can be accepted by any server of a trusted repository is called a global login
ticket. An application can use a global login ticket to connect to a repository that differs from the
ticket issuing repository if:
• The login ticket key (LTK) in the receiving repository is identical to the LTK in the repository in
which the global ticket was generated
• The receiving repository trusts the repository in which the ticket was generated
Trusting and trusted repositories, page 36, describes how trusted repositories are defined and
identified.
If a ticket is presented to a server within three minutes
of its expiration time, the ticket is considered valid. This three-minute difference allows for minor
differences in machine clock time across host machines. However, it is the responsibility of the system
administrators to ensure that the machine clocks on host machines with applications and repositories
be set as closely as possible to the correct time.
Documentum Server Administration and Configuration Guide contains information about configuring the
default and maximum validity periods in a repository.
When a Documentum Server is configured to require application access control (AAC) tokens, a
connection request to that server from a non-superuser must be accompanied by a valid token and
the connection request must comply with the constraints in the token.
If you configure a Documentum Server to use AAC tokens, you can control:
• Which applications can access the repository through that server
• Who can access the repository through that server
You can allow any user to access the repository through that server or you can limit access to a
particular user or to members of a particular group.
• Which client host machines can be used to access the repository through that server
These constraints can be combined. For example, you can configure a token that only allows members
of a particular group using a particular application from a specified host to connect to a server.
Application access control tokens are ignored if the user requesting a connection is a superuser. A
superuser can connect without a token to a server that requires a token. If a token is provided, it
is ignored.
Using tokens
Tokens are enabled on a server-by-server basis. You can configure a repository with multiple servers
so that some of its servers require a token and some do not. This provides flexibility in system design.
For example, you can designate one Documentum Server assigned to a repository as the server to
be used for connections coming from outside a firewall. By requiring that server to use tokens,
you can further restrict what machines and applications are used to connect to the repository from
outside the firewall.
When you create a token, you use arguments on the command line to define the constraints that you
want to apply to the token. The constraints define who can use the token and in what circumstances.
For example, if you identify a particular group in the arguments, only members of that group can
use the token. Or, you can set an argument to constrain the token use to the host machine on which
the token was generated. If you want to restrict the token to use by a particular application, you
supply an application ID string when you generate the token, and any application using the token
must provide a matching string in its connection request. All of the constraint parameters you specify
when you create the token are encoded into the token.
When an application issues a connection request to a server that requires a token, the application may
generate a token at runtime or it may rely on the client library to append an appropriate token to the
request. The client library also appends a host machine identifier to the request.
Note: Only DFC 5.3 or later is capable of appending a token or machine identifier to a connection
request. Configuring the DFC to append a token is optional.
If you want to constrain the use to a particular host machine, you must also set the dfc.machine.id key
in the dfc.properties file used by the client on that host machine.
If the receiving server does not require a token or the user is a superuser, the server ignores any token,
application ID, and host machine ID accompanying the request and processes the request as usual.
If the receiving server requires a token, the server decodes the token and determines whether the
constraints are satisfied. If the constraints are satisfied, the server allows the connection. If not, the
server rejects the connection request.
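The constraint checks described above can be sketched as a small Python simulation. This is purely illustrative: the `Token`, `constraints_satisfied`, and `accept_connection` names are hypothetical stand-ins, not Documentum APIs, and the real token is an encoded, signed string rather than a plain object.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Token:
    # Constraint fields encoded into the token; None means "not constrained".
    group: Optional[str] = None        # only members of this group may use the token
    app_id: Optional[str] = None       # connection request must supply a matching string
    machine_id: Optional[str] = None   # token is bound to this client host

def constraints_satisfied(token, user_groups, app_id, machine_id):
    """Return True if every constraint encoded in the token is met."""
    if token.group is not None and token.group not in user_groups:
        return False
    if token.app_id is not None and token.app_id != app_id:
        return False
    if token.machine_id is not None and token.machine_id != machine_id:
        return False
    return True

def accept_connection(server_requires_token, user_is_superuser, token, **request):
    # Superusers bypass token checks entirely, as do servers that
    # do not require a token.
    if user_is_superuser or not server_requires_token:
        return True
    return token is not None and constraints_satisfied(token, **request)
```

For example, a token constrained to a group and an application ID is accepted only when the requesting user is in that group and the application supplies the matching ID string.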
Session and Transaction Management
Documentum Server Administration and Configuration Guide contains information about enabling token
use in a server, configuring DFC to append a token or machine identifier to a connection request, and
implementing token use and enabling token retrieval.
An application access control token is an ASCII-encoded string consisting of two parts: a set of
values describing the token and a signature generated from those values. The values describing
the token include such information
as when the token was created, the repository in which it was created, and who created the token.
(For troubleshooting purposes, DFC has the IDfClient.getApplicationTokenDiagnostics method,
which returns the encoded values in readable text format.) The signature is generated using the
repository login ticket key.
The scope of an application access control token identifies which Documentum Servers can accept the
token. The scope of an AAC token can be either a single repository or global. The scope is defined
when the token is generated.
If the scope of a token is a single repository, then the token is only accepted by Documentum Servers
of that repository. The application using the token can send its connection request to any of the
repository servers.
A global token can be used across repositories. An application can use a global token to connect to a
repository other than the repository in which the token was generated, if:
• The target repository is using the same login ticket key (LTK) as the repository in which the
global token was generated
• The target repository trusts the repository in which the token was generated
Repositories that accept tokens generated in other repositories must trust these other repositories.
The login ticket key, page 32, describes the login ticket key. Trusting and trusted repositories, page 36,
describes how trust is determined between repositories.
dfc.tokenstorage.enable key. If use is enabled, a token can be retrieved and appended to a connection
request by the DFC when needed.
Application access control tokens are valid for a given period of time. The validity period, expressed
in minutes, may be defined when the token is generated. If it is not defined at that time, the period
defaults to one year. (Unlike login tickets, you cannot configure a default or maximum validity
period for an application access control token.)
Documentum Server Administration and Configuration Guide contains information on using dmtkgen.
property: RepositoryM and RepositoryN. RepositoryK rejects login tickets or tokens from
RepositoryA because RepositoryA is not in the list of trusted repositories. It accepts tickets or tokens
from RepositoryM and RepositoryN because they are listed in the trusted_docbases property.
Transaction management
This section describes transactions and how they are managed.
A transaction is one or more repository operations handled as an atomic unit: either all operations in
the transaction succeed or none do. A repository session can have only one open transaction at any
particular time. A transaction is either internal or explicit.
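The all-or-nothing behavior of a transaction can be sketched as follows. This is a minimal Python simulation of atomicity, not Documentum code: operations run against a working copy of the state, and the copy is committed only if every operation succeeds.

```python
def run_transaction(state, operations):
    """Apply operations atomically: either all succeed, or state is unchanged."""
    working = dict(state)          # work on a copy so failures cannot leak through
    try:
        for op in operations:
            op(working)            # each op mutates the working copy or raises
    except Exception:
        return state, False        # roll back: discard the working copy
    return working, True           # commit: the copy becomes the new state
```

If any operation raises, the caller gets the original, untouched state back, which is exactly the guarantee the paragraph above describes.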
• You cannot use DFC methods in the transaction if you opened the transaction with the DQL
BEGINTRAN statement.
If you want to use DFC methods in an explicit transaction, open the transaction with a DFC
method.
• You cannot execute dump and load operations inside an explicit transaction.
• You cannot issue a CREATE TYPE statement in an explicit transaction.
• You cannot issue an ALTER TYPE statement in an explicit transaction, unless the ALTER TYPE
statement lengthens a string property.
Managing deadlocks
Deadlock occurs when two database connections block each other while trying to access the same
information in the underlying database. When deadlock occurs, the RDBMS typically chooses one
of the connections as a victim, drops any locks held by that connection, and rolls back any changes
made in that connection's transaction.
Documentum Server manages internal transactions and database operations in a manner that reduces
the chance of deadlock as much as possible. However, some situations may still cause deadlocks.
For example, deadlocks can occur if:
• A query that turns off full-text search tries to read data from a table through an index while
another connection holds a lock on the data in order to update the index. (When full-text search is
enabled, properties are indexed and the table is not queried.)
• Two connections are each waiting for locks held by the other.
When deadlock occurs, Documentum Server executes internal deadlock retry logic. The deadlock
retry logic tries to execute the operations in the victim's transaction up to 10 times. If an error such as a
version mismatch occurs during the retries, the retries are stopped and all errors are reported. If the
retry succeeds, an informational message is reported.
Documentum Server deadlock retry logic is not available in explicit transactions. If an application
runs under an explicit transaction or contains an explicit transaction, the application should contain
deadlock retry logic.
Documentum Server provides a computed property that you can use in applications to test for
deadlock. The property is _isdeadlocked. This is a Boolean property that returns TRUE if the
repository session is deadlocked.
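Custom deadlock retry logic of the kind recommended above can be sketched as follows. The names here are hypothetical, and a real application would consult the _isdeadlocked computed property rather than catch a Python exception; this only illustrates the retry structure.

```python
class DeadlockError(Exception):
    """Stands in for detecting a deadlock (e.g. via _isdeadlocked)."""

MAX_RETRIES = 10  # mirrors the server's internal retry count described above

def with_deadlock_retry(transaction, max_retries=MAX_RETRIES):
    """Run a transaction callable, re-executing it when it reports a deadlock."""
    for attempt in range(1, max_retries + 1):
        try:
            return transaction()
        except DeadlockError:
            if attempt == max_retries:
                raise  # retries exhausted: report the error
            # otherwise loop and re-execute the whole transaction
```

Because the RDBMS rolls back the victim's changes, each retry must re-execute the entire transaction from the beginning, which is why the callable wraps the whole unit of work.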
To test custom deadlock retry logic, Documentum Server provides an administration method called
SET_APIDEADLOCK. This method plants a trigger on a particular operation. When the operation
executes, the server simulates a deadlock, setting the _isdeadlocked computed property and rolling
back any changes made prior to the method execution. Using SET_APIDEADLOCK allows you to
test an application's deadlock retry logic in a development environment. Documentum Server DQL
Reference Guide describes the SET_APIDEADLOCK method in detail.
Chapter 3
Caching
Additionally, when requested in a fetch method, DFC checks the consistency of its cached version
against the server global cache. If the versions in the caches are found to be mismatched, the object
type definition is updated appropriately. If the server cache is more current, the DFC caches are
updated. If the DFC has a more current version, the server cache is updated.
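The reconciliation described above can be sketched in Python. This is an illustrative simulation, not DFC internals: each cache is modeled as a mapping from a type name to a (version stamp, definition) pair, and whichever side holds the higher version stamp wins.

```python
def reconcile(dfc_cache, server_cache, type_name):
    """Bring two caches into agreement on one type definition.

    The copy with the higher version stamp is treated as current and is
    copied over the stale one. Returns True once both caches agree.
    """
    dfc_ver, dfc_def = dfc_cache[type_name]
    srv_ver, srv_def = server_cache[type_name]
    if srv_ver > dfc_ver:
        dfc_cache[type_name] = (srv_ver, srv_def)      # server is newer: update DFC
    elif dfc_ver > srv_ver:
        server_cache[type_name] = (dfc_ver, dfc_def)   # DFC is newer: update server
    return dfc_cache[type_name] == server_cache[type_name]
```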
This mechanism ensures that a user who makes the change sees that change immediately and
other users in other sessions see it shortly thereafter. Stopping and restarting a session or the web
application server is not required to see changes made to these objects.
Documentum Server Administration and Configuration Guide contains instructions for setting the
database refresh interval for the server global cache.
Consistency checking
Consistency checking is the process that ensures that cached data accessed by a client is current and
consistent with the data in the repository. How often the process is performed for any particular set
of query results is determined by the consistency check rule defined in the method that references the
data.
The consistency check rule can be a keyword, an integer value, or the name of a cache config object.
Using a cache config object to group cached data has the following benefits:
• Validates cached data efficiently
It is more efficient to validate a group of data than it is to validate each object or set of query
results individually.
• Helps ensure that applications access current data
• Makes it easy to change the consistency check rule because the rule is defined in the cache config
object rather than in application method calls
• Allows you to define a job to validate cached data automatically
If the rule is specified as a keyword or an integer value, DFC interprets the rule as a directive on
when to perform a consistency check. The directive is one of the following:
• Perform a check every time the data is accessed
This option means that the data is always checked against the repository. If the cached data is an
object, the object is always checked against the object in the repository. If the cached data is a set of
query results, the results are always regenerated. The keyword check_always defines this option.
• Never perform a consistency check
This option directs the DFC to always use the cached data. The cached data is never checked
against the repository if it is present in the cache. If the data is not present in the cache, the data is
obtained from the server. The keyword check_never defines this option.
• Perform a consistency check on the first access only
This option directs the DFC to perform a consistency check the first time the cached data is
accessed in a session. If the data is accessed again during the session, a consistency check is not
conducted. The keyword check_first_access defines this option.
• Perform a consistency check after a specified time interval
This option directs the DFC to compare the specified interval to the timestamp on the cached data
and perform a consistency check only if the interval has expired. The timestamp on the cached
data is set when the data is placed in the cache. The interval is expressed in seconds and can
be any value greater than 0.
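The four directives above reduce to a small decision function. The following Python sketch is illustrative only (the function name and parameters are hypothetical, not DFC API); it shows how each rule maps to a yes/no decision about validating the cached data.

```python
import time

def needs_consistency_check(rule, cached_at, checked_this_session, now=None):
    """Decide whether cached data must be validated against the repository.

    rule is 'check_always', 'check_never', 'check_first_access', or an
    integer interval in seconds; cached_at is the timestamp placed on the
    data when it entered the cache.
    """
    now = time.time() if now is None else now
    if rule == "check_always":
        return True                      # always validate against the repository
    if rule == "check_never":
        return False                     # always use the cached data if present
    if rule == "check_first_access":
        return not checked_this_session  # validate only once per session
    # integer rule: validate only after the interval has expired
    return (now - cached_at) >= rule
```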
If a consistency check rule names a cache config object, the DFC uses information from the cache
config object to determine whether to perform a consistency check on the cached data. The cache
config information is obtained by invoking the CHECK_CACHE_CONFIG administration method
and stored in memory with a timestamp that indicates when the information was obtained. The
information includes the r_last_changed_date and the client_check_interval property values of
the cache config object.
When a method defines a consistency check rule by naming a cache config object, DFC first checks
whether it has information about the cache config object in its memory. If it does not, it issues a
CHECK_CACHE_CONFIG administration method to obtain the information. If it has information
about the cache config object, DFC must determine whether the information is current before using
that information to decide whether to perform a consistency check on the cached data.
To determine whether the cache config information is current, the DFC compares the stored
client_check_interval value to the timestamp on the information. If the interval has expired, the
information is considered out of date and DFC executes another CHECK_CACHE_CONFIG method
to ask Documentum Server to provide current information about the cache config object. If the
interval has not expired, DFC uses the information that it has in memory.
After the DFC has current information about the cache config object, it determines whether the
cached data is valid. To determine that, the DFC compares the timestamp on the cached data against
the r_last_changed_date property value in the cache config object. If the timestamp is later than
the r_last_changed_date value, the cached data is considered usable and no consistency check is
performed. If the timestamp is earlier than the r_last_changed_date value, a consistency check is
performed on the data.
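The two-step decision just described, first refreshing stale cache config information, then comparing timestamps, can be sketched as follows. This is a simplified simulation using plain numeric timestamps; the `refresh` callable stands in for the CHECK_CACHE_CONFIG administration method, and the function name is hypothetical.

```python
def check_against_cache_config(now, info_fetched_at, client_check_interval,
                               data_cached_at, r_last_changed_date, refresh):
    """Return True if a consistency check must be performed on the cached data.

    Step 1: if the stored cache config info is older than client_check_interval,
            call refresh() (standing in for CHECK_CACHE_CONFIG) for current info.
    Step 2: data cached before r_last_changed_date needs a consistency check.
    """
    if now - info_fetched_at >= client_check_interval:
        r_last_changed_date = refresh()   # fetch current cache config information
        info_fetched_at = now
    return data_cached_at < r_last_changed_date
```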
Documentum Server DQL Reference Guide contains reference information about the
CHECK_CACHE_CONFIG administration method.
object against the i_vstamp value of the object in the repository. The default consistency rule is
check_never. This means that DFC uses the cached query results.
Chapter 4
The Data Model
Standard object types are types that do not fall into one of the remaining categories.
• Aspect property object type
Aspect property object types are internal types used by Documentum Server and Documentum
Foundation Classes (DFC) to manage properties defined for aspects. These types are automatically
created and managed internally when properties are added to aspects. They are not visible to
users and user applications.
• Lightweight object type
Lightweight object types are a special type used to minimize the storage footprint for multiple
objects that share the same system information. A lightweight type is a subtype of its shareable
type.
• Shareable object type
Shareable object types are the parent types of lightweight object types. Only dm_sysobject and
its subtypes can be defined as shareable. A single instance of a shareable type object is shared
among many lightweight objects.
Lightweight and shareable object types are additional types added to Documentum Server to solve
common problems with large content stores. Specifically, these types can increase the rate of object
ingestion into a repository and can reduce the object storage requirements.
An object type's category is stored in the type_category property in the dm_type object representing
the object type.
How lightweight subtype instances are stored, page 56, describes how lightweight and shareable
types are associated within the underlying database tables.
Chapter 8, Virtual Documents, describes virtual documents and their implementation. Adding
content, page 123, describes adding content to objects. Renditions, page 104, describes renditions in
detail.
Properties
Properties are the fields that make up an object definition. The values in those fields describe
individual instances of the object type. When an object is created, its properties are set to values that
describe that particular instance of the object type. For example, two properties of the document
object type are title and subject. When you create a document, you provide values for the title and
subject properties that are specific to that document.
Property characteristics
Properties have a number of characteristics that define how they are managed and handled by
Documentum Server. These characteristics are set when the property is defined and cannot be
changed after the property is created.
The properties that make up a persistent object type definition are persistent. Their values for
individual objects of the type are saved in the repository. These persistent properties and their
values make up the object metadata.
An object type's persistent properties include not only the properties defined for the type, but also
those that the type inherits from its supertype. If the type is a lightweight object type, its persistent
properties also include those it shares with its sharing type.
Many object types also have associated computed properties. Computed properties are nonpersistent.
Their values are computed at runtime when a user requests the property value and are lost when the
user closes the session.
Persistent properties have domains. A property's domain identifies the property's datatype and several
other characteristics of the property, such as its default value or its label text. You can query the
data dictionary to retrieve the characteristics defined by a domain.
Objects and object types, page 47, explains supertypes and inheritance. Documentum Server System
Object Reference Guide contains information about persistent and computed properties.
All properties are either single-valued or repeating. A single-valued property stores one value or, if it
stores multiple values, stores them in one comma-separated list. Querying a single-valued property
returns the entire value, whether it is one value or a list of values.
A repeating property stores multiple values in an indexed list. Index positions within the list are
specified in brackets at the end of the property name when referencing a specific value in a repeating
property. An example is: keywords[2] or authors[17]. Full-text queries can also locate specific values
in a repeating property. Special DQL functions exist to allow you to query values in a repeating
property.
Datatype
All properties have a datatype that determines what kind of values can be stored in the property. For
example, a property with an integer datatype can only store whole numbers. A property's datatype is
specified when the object type for which the property is defined is created.
Documentum Server System Object Reference Guide contains complete information about valid datatypes
and the limits and defaults for each datatype.
All properties are either read only or read and write. Read-only properties are those that only
Documentum Server can write. Read and write properties can typically be operated on by users
or applications. In general, the prefix, or lack of a prefix, on a property name indicates whether
the property is read only or can also be written.
User-defined properties are read and write by default. Only superusers can add a read-only property
to an object type.
All persistent properties are either global or local. This characteristic is only significant if a repository
participates in object replication or is part of a federation. (A federation is a group of one or more
repositories.)
Object replication creates replica objects, copies of objects that have been replicated between
repositories. When users change a global property in a replica, the change actually affects the source
object property. Documentum Server automatically refreshes all the replicas of the object containing
the property. When a repository participates in a federation, changes to global properties in users
and groups are propagated to all member repositories if the change is made through the governing
repository using Documentum Administrator.
A local property value can be different in each repository participating in the replication or federation.
If a user changes a local property in a replicated object, the source object is not changed and neither
are the other replicated objects.
Note: It is possible to configure four local properties of the dm_user object to make them behave
as global properties.
Documentum Platform and Platform Extensions Installation Guide contains the instructions for creating
global users and for configuring four local user properties to behave as global properties.
Property identifiers
Every property has an identifier. These identifiers are used instead of property names to identify
a property when the property is stored in a property bag. The property identifier is unique within
an object type hierarchy. For example, all the properties of dm_sysobject and its subtypes have
identifiers that are unique within the hierarchy that has dm_sysobject as its top-level supertype.
The identifier is an integer value stored in the attr_identifier property of each type's dm_type object.
When a property is stored in a property bag, its identifier is stored as a base64-encoded string in
place of the property name.
A property identifier cannot be changed.
The property bag, page 52, describes the property bag. Documentum Server System Object Reference
Guide contains more information about property identifiers.
Implementation
The property bag is implemented in a repository as the i_property_bag property. The i_property_bag
property is part of the dm_sysobject type definition by default. Consequently, each subtype of
dm_sysobject inherits this property. That means that you can define a subtype of dm_sysobject
or one of its subtypes that includes a nonqualifiable property without specifically naming the
i_property_bag property in the subtype definition.
The i_property_bag property is not part of the definition of the dm_lightweight type. However, if
you create a lightweight subtype whose definition contains a nonqualifiable property, Documentum
Server automatically adds i_property_bag to the type definition. It is not necessary to explicitly name
the property in the type definition.
Similarly, if you include a nonqualifiable property in the definition of an object type that has no
supertype or whose supertype is not in the dm_sysobject hierarchy, the i_property_bag property
is added automatically to the type.
The i_property_bag property is also used to store aspect properties if the properties are optimized for
fetching. Consequently, the object type definitions of object instances associated with the aspect must
include the i_property_bag property. In this situation, you must explicitly add the property bag to
the object type before associating its instances with the aspect.
It is also possible to explicitly add the property bag to an object type using an ALTER TYPE statement.
The property bag cannot be removed once it is added to an object type.
The i_property_bag property is a string datatype of 2000 characters. If the names and values of
properties stored in i_property_bag exceed that size, the overflow is stored in a second property,
called r_property_bag. This is a repeating string property of 2000 characters.
Whenever the i_property_bag property is added to an object type definition, the r_property_bag
property is also added.
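The storage scheme described above, base64-encoded property identifiers in place of names, a 2000-character i_property_bag, and overflow chunked into the repeating r_property_bag, can be sketched as follows. The real on-disk encoding is internal to Documentum Server; JSON is used here purely for illustration.

```python
import base64
import json

BAG_SIZE = 2000  # each bag property holds a 2000-character string

def encode_bag(properties):
    """Illustrative property-bag encoding for {identifier: value} pairs.

    Identifiers become base64 strings in place of property names; the
    serialized text fills i_property_bag first, and any overflow spills
    into the repeating r_property_bag in 2000-character chunks.
    """
    keyed = {base64.b64encode(str(ident).encode()).decode(): value
             for ident, value in properties.items()}
    blob = json.dumps(keyed)
    i_property_bag = blob[:BAG_SIZE]
    r_property_bag = [blob[i:i + BAG_SIZE]
                      for i in range(BAG_SIZE, len(blob), BAG_SIZE)]
    return i_property_bag, r_property_bag
```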
Aspects, page 72, describes aspects and aspect properties. Documentum Server System Object Reference
Guide contains the reference description for the property bag property. Documentum Server DQL
Reference Guide contains information on how to alter a type to add a property bag.
Repositories
A repository is where persistent objects managed by Documentum Server are stored. A repository
stores the object metadata and, sometimes, content files. A Documentum system installation can have
multiple repositories. Each repository is uniquely identified by a repository ID, and each object stored
in the repository is identified by a unique object ID.
Repositories contain sets of tables in an underlying relational database installation. Two types
of tables are implemented:
• Object type tables
• Object type index tables
The tables that store the values for single-valued properties are identified by the object type name
followed by _s (for example, dm_sysobject_s and dm_group_s). In the _s tables, each column
represents one property and each row represents one instance of the object type. The column values
in the row represent the single-valued property values for that object.
The tables that store values for repeating properties are identified by the object type name followed
by _r (for example, dm_sysobject_r and dm_group_r). In these tables, each column represents one
property.
In the _r tables, there is a separate row for each value in a repeating property. For example, suppose a
subtype called recipe has one repeating property, ingredients. A recipe object that has five values
in the ingredients property will have five rows in the recipe_r table, one row for each ingredient, as
shown in the following table:
r_object_id      ingredients
...              4 eggs
...              1 lb. cream cheese
...              2 t vanilla
...              1 c sugar
...              2 T grated orange peel
The r_object_id value for each row identifies the recipe that contains these five ingredients.
If a type has two or more repeating properties, the number of rows in the _r table for each object is
equal to the number of values in the repeating property that has the most values. The columns for
repeating properties having fewer values are filled in with NULLs.
For example, suppose the recipe type has four repeating properties: authors, ingredients, testers,
and ratings. One particular recipe has one author, four ingredients, and three testers. For this
recipe, the ingredients property has the largest number of values, so this recipe object has four
rows in the recipe_r table:
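The row construction just described can be sketched in Python. This is an illustrative simulation, not the server's implementation: the position column here is a simple 0-based index, the sample values are invented, and the ratings property is assumed to have no values since the example above does not mention any.

```python
def build_r_rows(object_id, repeating):
    """Build _r table rows for one object: one row per index position up to
    the longest repeating property; shorter properties pad with None (NULL)."""
    depth = max((len(v) for v in repeating.values()), default=0)
    rows = []
    for i in range(depth):
        row = {"r_object_id": object_id, "position": i}
        for prop, values in repeating.items():
            row[prop] = values[i] if i < len(values) else None
        rows.append(row)
    return rows

# One author, four ingredients, three testers, no ratings (sample values).
recipe = {
    "authors": ["akiko"],
    "ingredients": ["4 eggs", "1 lb. cream cheese", "2 t vanilla", "1 c sugar"],
    "testers": ["lee", "pat", "sam"],
    "ratings": [],
}
rows = build_r_rows("09_recipe_01", recipe)
```

The ingredients property has the most values, so the object gets four rows, and the columns for authors, testers, and ratings are padded with NULLs where they run out of values.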
If an object type is a subtype of a standard object type, the tables representing the object type store
only the properties defined for the object type. The values for inherited properties are stored in rows
in the tables of the supertype. In the _s tables, the r_object_id value serves to join the rows from the
subtype _s table to the matching row in the supertype _s table. In the _r tables, the r_object_id
and i_position values are used to join the rows.
For example, suppose you create a subtype of dm_sysobject called proposal_doc, with three
properties: budget_est, division_name, and dept_name, all single-valued properties. The figure
below illustrates the underlying table structure for this type and its instances. The values for the
properties defined for the proposal doc type are stored in the proposal_doc_s table. Those properties
that it inherits from its supertype, dm_sysobject, are stored in rows in the dm_sysobject object type
tables. The rows are associated through the r_object_id column in each table.
A lightweight type is a subtype of a shareable type, so the tables representing the lightweight type
store only the properties defined for the lightweight type. The values for inherited properties are
stored in rows in the tables of the shareable type (the supertype of the lightweight type). In standard
objects, the r_object_id property is used to join the rows from the subtype to the matching rows in the
supertype. However, since many lightweight objects can share the properties from their shareable
parent object, the r_object_id values differ from the parent object's r_object_id value. For lightweight
objects, the i_sharing_parent property is used to join the rows. Therefore, many lightweight objects,
each with its own r_object_id, can share the property values of a single shareable object.
When a lightweight object shares a parent object with other lightweight objects, the lightweight object
is unmaterialized. All the unmaterialized lightweight objects share the properties of the shared
parent, so, in effect, the lightweight objects all have identical values for the properties in the shared
parent. This situation can change if some operation needs to change a parent property for one of (or a
subset of) the lightweight objects. Since the parent is shared, the change in a property would affect all
the children. If the change only affects one child, that child object has to have its own copy of the
parent. When a lightweight object has its own private copy of a parent, the object is materialized.
Documentum Server creates rows in the tables of the shared type for the object, copying the values of
the shared properties into those rows. The lightweight object no longer shares the property values
with the instance of the shared type, but with its own private copy of that shared object.
For example, if you check out a lightweight object, it is materialized. A copy of the original parent is
created with the same r_object_id value as the child and the lightweight object is updated to point
to the new parent. Since the private parent has the same r_object_id as the lightweight child, a
materialized lightweight object behaves like a standard object. As another example, if you delete
an unmaterialized lightweight object, the shared parent is not deleted (whether or not there are any
remaining lightweight children). If you delete a materialized lightweight object, the lightweight child
and the private parent are deleted.
When, or if, a lightweight object instance is materialized depends on the object type definition.
You can define a lightweight type such that instances are materialized automatically when certain
operations occur, only on request, or never.
The following is an example of how lightweight objects are stored and how materialization changes
the underlying database records. Note that this example only uses the _s tables to illustrate the
implementation. The implementation is similar for _r tables.
Suppose the following shareable and lightweight object types exist in a repository:
• customer_record, with a SysObject supertype and the following properties:
cust_name string(32),
cust_addr string(64),
cust_city string(32),
cust_state string(2),
cust_phone string(24),
cust_email string(100)
This type shares with customer_record and is defined for automatic materialization.
Instances of the order record type will share the values of instances of the customer record object
type. By default, the order record instances are unmaterialized. The figure below shows how the
unmaterialized lightweight instances are represented in the database tables.
The order record instances represented by objID_2 and objID_3 share the property values of
the customer record instance represented by objID_B. Similarly, the order record object instance
represented by objID_5 shares the property values of the customer record object instance represented
by objID_Z. The i_sharing_type property for the parent, or shared, rows in customer_record is set
to reflect the fact that those rows are shared.
There are no order record-specific rows created in customer_record_s for the unmaterialized order
record objects.
Because the order record object type is defined for automatic materialization, certain operations on
an instance will materialize the instance. This does not create a new order record instance, but
instead creates a new row in the customer record table that is specific to the materialized order
record instance. The figure below illustrates how a materialized instance is represented in the
database tables.
Materializing the order record instances created new rows in the customer_record_s table, one row for
each order record object, and additional rows in each supertype table in the type hierarchy. The object
ID of each customer record object representing a materialized order record object is set to the object ID
of the order record object it represents, to associate the row with the order record object. Additionally,
the i_sharing_type property of the previously shared customer record object is updated. In the order
record objects, the i_sharing_parent property is reset to the object ID of the order record object itself.
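The bookkeeping performed during materialization can be sketched as follows, using the object IDs from the example above. This is an illustrative simulation of only the _s table changes, not Documentum code: the private parent row gets the child's object ID, and the child's i_sharing_parent is repointed at itself.

```python
def materialize(child, shareable_s):
    """Give a lightweight object its own private copy of its shared parent.

    A new row is added to the shareable type's _s table whose r_object_id
    equals the child's own ID, property values are copied from the shared
    parent, and the child's i_sharing_parent is repointed at itself.
    """
    parent = next(r for r in shareable_s
                  if r["r_object_id"] == child["i_sharing_parent"])
    private = {**parent, "r_object_id": child["r_object_id"]}
    shareable_s.append(private)
    child["i_sharing_parent"] = child["r_object_id"]  # now points at its own copy
    return child

# Invented sample rows mirroring the example: objID_2 and objID_3 share objID_B.
customer_record_s = [{"r_object_id": "objID_B", "cust_name": "Acme"}]
order_2 = {"r_object_id": "objID_2", "i_sharing_parent": "objID_B"}
order_3 = {"r_object_id": "objID_3", "i_sharing_parent": "objID_B"}
materialize(order_2, customer_record_s)
```

After materializing order_2, the other lightweight object, order_3, still shares the original parent row and is unaffected.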
Documentum Server System Object Reference Guide contains information about the identifiers recognized
by Documentum Server.
By default, all object type tables are created in the same tablespace with default extent sizes.
On some databases, you can change the defaults when you create the repository. By setting
parameters in the server.ini file before it is read during repository creation, you can define:
• The tablespaces in which to create the object-type tables
• The size of the extent allotted for system-defined object types
You can define tablespaces for the object type tables based on categories of size or for specific object
types. For example, you can define separate tablespaces for the object types categorized as large and
another space for those categorized as small. (The category designations are based on the number of
objects of the type expected to be included in the repository.) Or, you can define a separate tablespace
for the SysObject type and a different space for the user object type.
Additionally, you can change the size of the extent allotted to categories of object types or to specific
object types.
Documentum Platform and Platform Extensions Installation Guide contains instructions for changing the
default location and extents of object type tables and the locations of the index tables.
Registered tables
In addition to the object type and type index tables, Documentum Server recognizes registered tables.
Registered tables are RDBMS tables that are not part of the repository but are known to Documentum
Server. They are created by the DQL REGISTER statement and automatically linked to the System
cabinet in the repository. They are represented in the repository by objects of type dm_registered.
After an RDBMS table is registered with the server, you can use DQL statements to query the
information in the table or to add information to the table.
A number of views are created in a Documentum Server repository automatically. Some of these
views are for internal use only, but some are available to provide information to users. The views that
are available for viewing by users are defined as registered tables. To obtain a list of these views, you
can run the following DQL query as a user with at least Sysadmin privileges:
SELECT "object_name", "table_name", "r_object_id" FROM "dm_registered"
Documentum Server DQL Reference Guide contains information about the REGISTER statement and
querying registered tables.
Localization support
The data dictionary is the mechanism you can use to localize Documentum Server. The data
dictionary supports multiple locales. A data dictionary locale represents a specific geographic
region or linguistic group. For example, suppose your company has sites in Germany and England.
Using the multi-locale support, you can store labels for object types and properties in German and
English. Then, applications can query for the user's current locale and display the appropriate labels
on dialog boxes.
Documentum provides a default set of data dictionary information for each of the following locales:
• English
• French
• Italian
• Spanish
• German
• Japanese
• Korean
By default, when Documentum Server is installed, the data dictionary file for one of the locales is also installed. The installation procedure determines which of the default locales is most appropriate and installs
that locale. The locale is identified in the dd_locales property of the dm_docbase_config object.
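For example, to see which locales are installed in a repository, you can query the docbase config object (dd_locales is a repeating property, so the query returns one value per installed locale):

SELECT "dd_locales" FROM "dm_docbase_config"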
The data dictionary support for multiple locales lets you store a variety of text strings in the languages
associated with the installed locales. For each locale, you can store labels for object types and
properties, some help text, and error messages.
Using DQL
To retrieve data dictionary information using DQL, use a query against the object types that contain
the published information. These types are dmi_dd_common_info, dmi_dd_type_info, and dmi_dd_attr_info. For
example, the following query returns the labels for dm_document properties in the English locale:
SELECT "label_text" FROM "dmi_dd_attr_info"
WHERE "type_name"='dm_document' AND "nls_key"='en'
If you want to retrieve information for the locale that is the best match for the current client session
locale, use the DM_SESSION_DD_LOCALE keyword in the query. For example:
SELECT "label_text" FROM "dmi_dd_attr_info"
WHERE "type_name"='dm_document' AND "nls_key"=DM_SESSION_DD_LOCALE
To ensure the query returns current data dictionary information, examine the resync_needed
property. If that property is TRUE, the information is not current and you can republish the data dictionary before executing the query.
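As an illustrative sketch, a query along the following lines could list the dm_document properties whose published information is out of date (the attr_name property name is an assumption; verify the queryable properties of dmi_dd_attr_info in the System Object Reference Guide):

SELECT "attr_name" FROM "dmi_dd_attr_info" WHERE "type_name"='dm_document' AND "resync_needed"=TRUE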
Documentum Server DQL Reference Guide provides a full description of the DM_SESSION_DD_LOCALE
keyword.
Constraints
A constraint is a restriction applied to one or more property values for an instance of an object type.
Documentum Server does not enforce constraints. The client application must enforce the constraint,
using the constraint data dictionary definition. You can provide an error message as part of the
constraint definition for the client application to display or log when the constraint is violated.
You can define a Check constraint in the data dictionary. Check constraints are most often used to
provide data validation. You provide an expression or routine in the constraint definition that the
client application can run to validate a given property value.
You can define a check constraint at either the object type or property level. If the constraint
expression or routine references multiple properties, you must define the constraint at the type level.
If it references a single property, you can define the constraint at either the property or type level.
You can define check constraints that apply only when objects of the type are in a particular lifecycle
state.
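For illustration only, a check constraint on a hypothetical custom type might be declared along these lines (the type name, property, and exact constraint clause syntax here are assumptions; Documentum Server DQL Reference Guide documents the real CREATE TYPE and ALTER TYPE constraint clauses):

CREATE TYPE "my_order" ("priority" int (CHECK ("priority" >= 1 AND "priority" <= 5)))
WITH SUPERTYPE "dm_document"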
Component specifications
Components are user-written routines. Component specifications designate a component as a valid
routine to execute against instances of an object type. Components are represented in the repository
by dm_qual_comp objects. They are identified in the data dictionary by their classifiers and the
object ID of their associated qual comp objects.
A classifier is constructed of the qual comp object's class_name property and an acronym that represents the
component build technology. For example, given a component whose class_name is checkin and
whose build technology is Active X, its classifier is checkin.ACX.
You can specify only one component of each class for an object type.
Value assistance
Value assistance provides a list of valid values for a property. A value assistance specification defines
a literal list, a query, or a routine to list possible values for a property. Value assistance is typically
used to provide a list of values for a property associated with a field in a dialog box.
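As an illustrative sketch (the type and property names are hypothetical, and the exact clause syntax is documented in Documentum Server DQL Reference Guide), a literal value assistance list might be declared like this:

CREATE TYPE "my_report" ("status" string(16) (VALUE ASSISTANCE IS LIST ('Draft','In Review','Approved')))
WITH SUPERTYPE "dm_document"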
Mapping information
Mapping information consists of a list of values that are mapped to another list of values. Mapping is
generally used for repeating integer properties, to define understandable text for each integer value.
Client applications can then display the text to users instead of the integer values.
For example, suppose an application includes a field that allows users to choose between four
resort sites: Malibu, French Riviera, Cancun, and Florida Keys. In the repository, these sites may
be identified by integers: Malibu=1, French Riviera=2, Cancun=3, and Florida Keys=4. Rather than
display 1, 2, 3, and 4 to users, you can define mapping information in the data dictionary so that
users see the text names of the resort areas, and their choices are mapped to the integer values for
use by the application.
Chapter 5
Object Type and Instance Manipulations and Customizations
Object creation
The ability to create objects is controlled by user privilege levels. Anyone can create documents and
folders. To create a cabinet, a user must have the Create Cabinet privilege. To create users, the user
must have the Sysadmin (System Administrator) privilege or the Superuser privilege. To create a
group, a user must have Create Group, Sysadmin, or Superuser privileges. User privilege levels and
object-level permissions are described in User privileges, page 84. Documentum Server Administration
and Configuration Guide contains a description of how to assign privileges.
In the DFC, the interface for each class of objects has a method that allows you to instantiate a new
instance of the object.
In DQL, you use the CREATE OBJECT statement to create a new instance of an object. Documentum
Server DQL Reference Guide contains information about DQL and the reference information for the
DQL statements, including CREATE OBJECT.
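For example, the following sketch creates a document and sets two of its properties (the object name and subject values are arbitrary illustrations):

CREATE "dm_document" OBJECT
SET "object_name" = 'Q3 Budget',
SET "subject" = 'quarterly budget'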
Object modification
There are a variety of changes that users and applications can make to existing objects. The most
common changes are:
• changing property values
• adding, modifying, or removing content
• changing the object access permissions
• associating the object with a workflow or lifecycle
In the DFC, the methods that change property values are part of the interface that handles the
particular object type. For example, to set the subject property of a document, you use a method in
the IDfSysObject interface.
In the DFC, methods are part of the interface for individual classes. Each interface has methods that
are defined for the class plus the methods inherited from its superclass. The methods associated with
a class can be applied to objects of the class. Documentum Foundation Classes Development Guide or the
associated Javadocs contains information about the DFC and its classes and interfaces.
The Document Query Language (DQL) is a superset of SQL. It allows you to query the repository
tables and manipulate the objects in the repository. DQL has several statements that allow you to
create objects. There are also DQL statements you can use to update objects by changing property
values or adding content.
Creating or updating an object using DQL instead of the DFC is generally faster because DQL uses
one statement to create or modify and then save the object. Using DFC methods, you must issue
several calls: one to create or fetch the object, several to set its properties, and one to save it.
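For example, a single DQL statement can fetch, modify, and save a document in one operation (a sketch; the object name and new subject value are arbitrary):

UPDATE "dm_document" OBJECT
SET "subject" = 'revised budget'
WHERE "object_name" = 'Q3 Budget'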
Object destruction
Destroying an object removes it from the repository.
You must either be the owner of an object or you must have Delete permission on the object to destroy
it. If the object is a cabinet, you must also have the Create Cabinet privilege.
Any SysObject or subtype must meet the following conditions before you can destroy it:
• The object cannot be locked.
• The object cannot be part of a frozen virtual document or snapshot.
• If the object is a cabinet, it must be empty.
• If the object is stored with a specified retention period, the retention period must have expired.
Destroying an object removes the object from the repository and also removes any relation objects
that reference the object. (Relation objects are objects that define a relationship between two objects.)
Only the explicit version is removed. Destroying an object does not remove other versions of the
object. To remove multiple versions of an object, use a prune method. Removing versions, page 112,
describes how the prune method behaves. Documentum Server System Object Reference Guide contains
information about relationships.
By default, destroying an object does not remove the object's content file or the content object that
associates the content with the destroyed object. If the content was not shared with another
document, the content file and content object are orphaned. To remove orphaned content files and
orphaned content objects, run the dmclean and dmfilescan utilities as jobs, or manually. Documentum
Server Administration and Configuration Guide contains information about the dmclean and dmfilescan
jobs and how to execute the utilities manually.
However, if the content file is stored in a storage area with digital shredding enabled and the content
is not shared with another object, destroying the object also removes the content object from the
repository and shreds the content file.
When the object you destroy is the original version, Documentum Server does not actually remove
the object from the repository. Instead, it sets the object i_is_deleted property to TRUE and removes
all associated objects, such as relation objects, from the repository. The server also removes the object
from all cabinets or folders and places it in the Temp cabinet. If the object is carrying the symbolic
label CURRENT, it moves that label to the version in the tree that has the highest r_modify_date
property value. This is the version that has been modified most recently.
Note: If the object you want to destroy is a group, you can also use the DQL DROP GROUP statement.
A simple module is similar to an SBO, but provides functionality that is specific to a repository. For
example, a simple module would be used if you wanted to customize a behavior that is different
across repository versions.
A BOF module comprises the Java archive (JAR) files that contain the implementation classes
and the interface classes for the behavior the module implements, and any interface classes on which
the module depends. The module may also include Java libraries and documentation.
After you have created the files that comprise a module, you use Documentum Composer to package
the module into a Documentum archive (DAR) file and install the DAR file to the appropriate
repositories.
SBOs are installed in the repository that is the global registry. Simple modules, TBOs, and aspects
are installed in each repository that contains the object type or objects whose behavior you want
to modify.
Installing a BOF module creates a number of repository objects. The top-level object is a dmc_module
object. Module objects are subtypes of dm_folder. They serve as a container for the BOF module. The
properties of a module object provide information about the BOF module it represents. For example,
they identify the module type (SBO, TBO, aspect, or simple), its implementation class, the interfaces it
implements, and any modules on which the module depends.
The module folder object is placed in the repository in /System/Modules, under the appropriate
subfolder. For example, if the module represents a TBO and its name is MyTBO, it is found in
/System/Modules/TBO/MyTBO.
Each JAR file in the module is represented by a dmc_jar object. A jar object has properties that
identify the Java version level required by the classes in the module and whether the JAR file contains
implementation or interface classes, or both.
The jar objects representing the module implementation and interface classes are linked directly to
the dmc_module folder. The jar objects representing the JAR files for supporting software are linked
to folders represented by dmc_java_library objects. The java library objects are then linked to the
top-level module folder. The following figure illustrates these relationships.
The properties of a Java library object allow you to specify whether you want to sandbox the libraries
linked to that folder. Sandboxing refers to loading the library into memory in a manner that makes it
inaccessible to any application other than the application that loaded it. DFC achieves sandboxing by
using a standard BOF class loader and separate class loaders for each module. The class loaders try to
load classes first, before delegating to the usual hierarchy of Java class loaders.
In addition to installing the modules in a repository, you must also install the JAR file containing a
module's interface classes on each client machine running DFC, and the file must be specified in the client
CLASSPATH environment variable.
BOF modules are delivered dynamically to client applications when the module is needed. The
delivery mechanism relies on local caching of modules, on client machines. DFC does not load TBOs,
aspects, or simple modules into the cache until an application tries to use them. After a module is
loaded, DFC checks for updates to the modules in the local cache whenever an application tries to
use a module or after the interval specified by the dfc.bof.cache.currency_check_interval property in
the dfc.properties file. The default interval value is 30 seconds. If a module has changed, only the
changed parts are updated in the cache.
The location of the local cache is specified in the dfc.properties file, in the dfc.data.cache_dir property.
The default value is the cache subdirectory of the directory specified in the dfc.data.dir property. All
applications that use a particular DFC installation share the cache.
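For example, a dfc.properties fragment that sets the data directory and shortens the cache currency check might look like the following sketch (the directory path is an arbitrary illustration; the property names are those described above):

dfc.data.dir = C:\Documentum\dfc_data
dfc.bof.cache.currency_check_interval = 60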
Documentum Foundation Classes Development Guide contains instructions for how to create BOF
modules of the various types and how to set up and enable the BOF development mode. Documentum
Composer documentation contains instructions for packaging and deploying modules and information
about deploying the interface classes to a client machine.
Service-based objects
A service-based object (SBO) is a module that implements a service for multiple object types. For
example, if you want to implement a service that automatically handles property validation for a
variety of document subtypes, you would use an SBO. You can also use SBOs to implement utility
functions to be called by TBOs or to retrieve items from external sources, for example, email messages.
An SBO associates an interface with an implementation class. SBOs are stored in the global registry,
in a folder under /System/Modules/SBO. The name of the folder is the name of the SBO. The name of
the SBO is typically the name of the interface.
Type-based objects
A type-based object (TBO) associates a custom object type that extends a Documentum system object
type with an implementation class that extends the appropriate DFC class. For example, suppose you
want to add some validation behavior to a specific document subtype. You would create a TBO for
that subtype with an implementation class that extends IDfDocument, adding the validation behavior.
Because TBOs are specific to an object type, they are stored in each repository that contains the
specified object type. They are stored in a folder under the /System/Modules/TBO. The folder name is
the name of the TBO, which is typically the name of the object type for which it was created.
Aspects
An aspect is a BOF module that customizes behavior or records metadata or both for an instance of
an object type.
You can attach an aspect to any object of type dm_sysobject or its subtypes. You can also attach an
aspect to custom-type objects if the type has no supertype and you have issued an ALTER TYPE
statement to modify the type to allow aspects.
An object can have multiple aspects attached, but cannot have multiple instances of one aspect
attached. That is, given object X and aspects a1, a2, and a3, you can attach a1, a2, and a3 to object X,
but you cannot attach any of the aspects to object X more than once.
To attach instance-specific metadata to an object, you can define properties for an aspect.
Documentum Server DQL Reference Guide describes the syntax and use of the ALTER TYPE statement.
Note: Replication of objects with aspects is not supported.
Aspect properties
After you create an aspect, you can define properties for the aspect using Documentum Composer or
the ALTER ASPECT statement. Properties of an aspect can be dropped or modified after they are
added. Changes to an aspect that add, drop, or modify a property affect objects to which the aspect
is currently attached.
Note: You cannot define properties for aspects whose names contain a dot (.). For example, if the
aspect name is "com.mycompany.policy", you cannot define properties for that aspect.
Aspect properties are not full-text indexed by default. If you want to include the values in the index,
you must explicitly identify which properties you want indexed. You can use Documentum
Composer or ALTER ASPECT to do this. Documentum Server DQL Reference Guide describes the
syntax and use of the ALTER ASPECT statement.
Aspect properties are stored in internal object types. When you define properties for an aspect,
Documentum Server creates an internal object type that records the names and definitions of
those properties. The name of the internal type is derived from the object ID of the aspect's type object,
in the form dmi_<type object ID>. Documentum Server creates and manages these internal object types. The
implementation of these types ensures that the properties they represent appear to client applications
as standard properties of the object type to which the aspect is attached.
At the time you add properties to an aspect, you can choose to optimize performance for fetching
or querying those properties by including the OPTIMIZEFETCH keyword in the ALTER ASPECT
statement. That keyword directs Documentum Server to store all the aspect properties and their
values in the property bag of any object to which the aspect is attached, if the object has a property bag.
Default aspects
Default aspects are aspects that are defined for a particular object type using the ALTER TYPE
statement. If an object type has a default aspect, each time a user creates an instance of the object type
or a subtype of the type, the aspects are attached to that instance.
An object type may have multiple default aspects. An object type inherits all the default aspects
defined for its supertypes, and may also have one or more default aspects defined directly for itself.
All of a type's default aspects are applied to any instances of the type.
When you add a default aspect to a type, the newly added aspect is only associated with new
instances of the type or subtype created after the addition. Existing instances of the type or its
subtypes are not affected.
If you remove a default aspect from an object type, existing instances of the type or its subtypes are
not affected. The aspect remains attached to the existing instances.
The default_aspects property in an object type's dmi_type_info object records those default aspects
defined directly for the object type. At runtime, when a type is referenced by a client application,
DFC stores the type's inherited and directly defined default aspects in memory. The in-memory cache
is refreshed whenever the type definition in memory is refreshed.
Simple modules
A simple module customizes or adds a behavior that is specific to a repository version. For example,
you may want to customize a workflow or lifecycle behavior that is different for different repository
versions. A simple module is similar to an SBO, but does not implement the IDfService interface.
Instead, it implements the IDfModule interface.
Simple modules associate an interface with an implementation class. They are stored in each
repository to which they apply, and are stored in /System/Modules. The folder name is the name
of the module.
Chapter 6
Security Services
Overview
The security features supported by Documentum Server maintain system security and the integrity of
the repository. They also provide accountability for user actions. Documentum Server supports:
• Standard security features
• Trusted Content Services security features
• Secure Socket Layer (SSL) communications between Documentum Server and the client library
  (DMCL) on client hosts:
  When you install Documentum Server, the installation procedure creates two service names for
  Documentum Server. One represents a native, nonsecure port and the other a secure port. You can
  then configure the server and clients, through the server config object and dmcl.ini files, to use
  the secure port.
• Privileged Documentum Foundation Classes (DFC):
  This feature allows DFC to run under a privileged role, which gives escalated permissions or
  privileges for a specific operation. Privileged DFC, page 97, describes privileged DFC in detail.
Users and groups, page 78, contains information about users and groups, including dynamic groups.
Documentum Server Administration and Configuration Guide contains more information about setting
the connection mode for servers and configuring clients to request a native or secure connection.
• Encrypted file store storage areas:
  Using encrypted file stores provides a way to ensure that content stored in a file store is not
  readable by users accessing it from the operating system. Encryption can be used on content in
  any format except rich media stored in a file store storage area. The storage area can be a
  standalone storage area or it can be a component of a distributed store. Encrypted file store
  storage areas, page 100, describes encrypted storage areas in detail.
• Digital shredding of content files:
  Digital shredding provides a final, complete way of removing content from a storage area by
  ensuring that deleted content files cannot be recovered by any means. Digital shredding,
  page 100, provides a description of this feature.
• Electronic signature support using the IDfSysObject.addESignature method:
  The addESignature method is used to implement an electronic signature requirement through
  Documentum Server. The method creates a formal signature page and adds that page as
  primary content (or a rendition) to the signed document. The signature operation is audited,
  and each time a new signature is added, the previous signature is verified first. Signature
  requirement support, page 91, describes how electronic signatures implemented with
  addESignature work.
• Ability to add, modify, and delete additional types of entries in an ACL:
  The types of entries that you can manipulate in an ACL when you have a Trusted Content
  Services license are AccessRestriction and ExtendedRestriction.
Repository security
The repository security setting controls whether object-level permissions, table permits, and folder
security are enforced. The setting is recorded in the repository in the security_mode property in the
docbase config object. The property is set to ACL, which turns on enforcement when a repository is
created. Unless you have explicitly turned security off by setting security_mode to none, object-level
permissions and table permits are always enforced.
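For example, to check the current setting for a repository, you can query the docbase config object:

SELECT "security_mode" FROM "dm_docbase_config"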
Users
This section introduces repository users.
A repository user is an actual person or a virtual user who is defined as a user in the repository. A
virtual user is a repository user who does not exist as an actual person.
Repository users have two states, active and inactive. An active user can connect to the repository
and work. An inactive user is not allowed to connect to the repository.
Users are represented in the repository as dm_user objects. The user object can represent an actual
individual or a virtual person. The ability to define a virtual user as a repository user is a useful
capability. For example, suppose you want an application to process certain user requests and
want to dedicate an inbox to those requests. You can create a virtual user and register that user to
receive events arising from the requests. The application can then read that user's inbox to obtain and
process the requests.
The properties of a user object record information that allows Documentum Server to manage the
user's access to the repository and to communicate with the user when necessary. For example, the
properties define how the user is authenticated when the user requests repository access. They also
record the user's state (active or inactive), the user's email address (allowing Documentum Server to
send automated emails when needed), and the user's home repository (if any).
Documentum Server System Object Reference Guide describes the properties defined for the dm_user
object type.
In a federated distributed environment, a user is either a local user or a global user. A local user is
managed from the context of the repository in which the user is defined. A global user is a user
defined in all repositories participating in the federation and managed from the federation governing
repository.
Documentum Server Administration and Configuration Guide contains more information about users
in general and instructions about creating local users. Documentum Platform and Platform Extensions
Installation Guide contains information for creating and managing global users.
Groups
Groups are sets of users or groups or a mixture of both. They are used to assign permissions or client
application roles to multiple users. There are several classes of groups in a repository. A group's
class is recorded in its group_class property. For example, if group_class is “group,” the group is a
standard group, used to assign permissions to users and other groups.
A group, like an individual user, can own objects, including other groups. A member of a group
that owns an object or group can manipulate the object just as an individual owner. The group
member can modify or delete the object.
Additionally, a group can be a dynamic group. Membership in a dynamic group is determined at
runtime. Dynamic groups provide a layer of security by allowing you to control dynamically whom
Documentum Server treats as a member of a group.
There are several types of groups, as listed below.
• Standard groups:
A standard group consists of a set of users. The users can be individual users or other groups or
both. A standard group is used to assign object-level permissions to all members of the group.
For example, you might set up a group called engr and assign Version permission to the engr
group in an ACL applied to all engineering documents. All members of the engr group then have
Version permission on the engineering documents.
Standard groups can be public or private. When a group is created by a user with Sysadmin
or Superuser privileges, the group is public by default. If a user with Create Group privileges
creates the group, it is private by default. You can override these defaults after a group is created
using the ALTER GROUP statement. Documentum Server DQL Reference Guide describes how to
use ALTER GROUP.
• Role groups:
A role group contains a set of users or other groups or both that are assigned a particular role
within a client application domain. A role group is created by setting the group_class property to
role and the group_name property to the role name.
• Module role groups:
A module role group is a role group that is used by an installed BOF module. It represents a role
assigned to a module of code, rather than a particular user or group. Module role groups are used
internally. The group_class value for these groups is module role.
• Privileged groups:
A privileged group is a group whose members are allowed to perform privileged operations
even though the members do not have the privileges as individuals. A privileged group has a
group_class value of privilege group.
• Domain groups:
A domain group represents a particular client application domain. A domain group contains a set
of role groups corresponding to the roles recognized by the client application.
• Dynamic groups:
A dynamic group is a group, of any group class, with a list of potential members. A setting in the
group definition defines whether the potential members are treated as members of the group or
not when a repository session is started. Depending on that setting, an application can issue a
session call to add or remove a user from the group when the session starts.
A nondynamic group cannot have a dynamic group as a member. A dynamic group can include
other dynamic groups as members or nondynamic groups as members. However, if a nondynamic
group is a member, the members of the nondynamic group are treated as potential members of
the dynamic group.
• Local and global groups:
In a federated distributed environment, a group is either a local group or a global group. A local
group is managed from the context of the repository in which the group is defined. A global
group is a group defined in all repositories participating in the federation and managed from
the federation governing repository. Documentum Server Administration and Configuration Guide
contains instructions about creating local groups. Documentum Platform and Platform Extensions
Installation Guide contains complete information for creating and managing global groups.
Role and domain groups are used by client applications to implement roles within an application.
The two kinds of groups are used together to achieve role-based functionality. Documentum Server
does not enforce client application roles.
For example, suppose you write a client application called report_generator that recognizes three
roles: readers (users who read reports), writers (users who write and generate reports), and
administrators (users who administer the application). To support the roles, you create three role
groups, one for each role. The group_class is set to role for these groups and the group names are the
names of the roles: readers, writers, and administrators. Then, create a domain group by creating a
group whose group_class is domain and whose group name is the name of the domain. In this case,
the domain name is report_generator. The three role groups are the members of the report_generator
domain group.
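The role and domain groups in this example could be created with DQL CREATE OBJECT statements. This is a sketch: the group and role names come from the hypothetical report_generator application, and the exact syntax for setting repeating properties such as groups_names should be confirmed against the Documentum Server DQL Reference Guide.

```sql
-- Role groups: one group per application role, with group_class = 'role'
CREATE dm_group OBJECT
    SET group_name = 'readers',
    SET group_class = 'role'
-- (repeat for 'writers' and 'administrators')

-- Domain group named for the application; the role groups are its members
CREATE dm_group OBJECT
    SET group_name = 'report_generator',
    SET group_class = 'domain',
    APPEND groups_names = 'readers',
    APPEND groups_names = 'writers',
    APPEND groups_names = 'administrators'
```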
When a user starts the report_generator application, the application examines its associated domain
group and determines the role group to which the user belongs. The application then performs only
the actions allowed for members of that role group. For example, the application customizes the
menus presented to the user depending on the role to which the user is assigned.
Note: Documentum Server does not enforce client application roles. It is the responsibility of the
client application to determine if there are role groups defined for the application and apply and
enforce any customizations based on those roles.
Documentum Server Administration and Configuration Guide contains more information about groups
in general.
User authentication
User authentication is the procedure by which Documentum Server ensures that a particular user is
an active and valid user in a repository.
Documentum Server authenticates the user whenever a user or application attempts to open a
repository connection or reestablish a timed-out connection. The server checks that the user is a valid,
active repository user. If not, the connection is not allowed. If the user is a valid, active repository
user, Documentum Server authenticates the user name and password.
Users are also authenticated when they:
• Assume an existing connection
• Change their password
Password encryption
Password encryption is the automatic process used by Documentum Server to protect certain
passwords.
The passwords used by Documentum Server to connect to third-party products, such as an LDAP
directory server or the RDBMS, as well as those used by many internal jobs to connect to a repository,
are stored in files in the installation. To protect these passwords, Documentum Server automatically
encrypts them. Decryption is also automatic: when an encrypted password is passed as an argument
to a method, the DFC decrypts the password before passing the arguments to Documentum Server.
Client applications can use password encryption for their own passwords by using the DFC method
IDfClient.encryptPassword. The method allows you to use encryption in your applications and
scripts. Use encryptPassword to encrypt passwords used to connect to a repository. All the methods
that accept a repository password accept a password encrypted using the encryptPassword method.
The DFC will automatically perform the decryption.
Passwords are encrypted using the Administration Encryption Key (AEK). The AEK is installed
during Documentum Server installation. After encrypting a password, Documentum Server also
encodes the encrypted string using Base64 before storing the result in the appropriate password file.
The final string is longer than the clear text source password.
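The encrypt-then-encode flow can be illustrated with a self-contained Java sketch. The AES key generated here is a throwaway stand-in for the AEK, and the cipher settings are illustrative only; the actual algorithm and key handling are internal to Documentum Server.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EncodeDemo {
    // Encrypts with a throwaway AES key (a stand-in for the AEK),
    // then Base64-encodes the ciphertext, mirroring the
    // encrypt-then-encode flow described above.
    static String encryptAndEncode(String clearText) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        SecretKey key = kg.generateKey();
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] encrypted = cipher.doFinal(clearText.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(encrypted);
    }

    public static void main(String[] args) throws Exception {
        String clear = "my_password";
        String stored = encryptAndEncode(clear);
        // The encoded ciphertext is longer than the clear text source.
        System.out.println(stored.length() > clear.length()); // prints true
    }
}
```

Because block ciphers pad to a block boundary and Base64 expands its input by roughly a third, the stored string is always longer than the clear text source password.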
Documentum Server Administration and Configuration Guide contains complete information about
administering password encryption. The associated Javadocs contain more information about
encryptPassword.
User privileges
Documentum Server supports a set of user privileges that determine what special operations a user
can perform in the repository. There are two types of user privileges: basic and extended. The basic
privileges define the operations that a user can perform on SysObjects in the repository. The extended
privileges define the security-related operations the user can perform.
User privileges are always enforced whether repository security is turned on or not.
Object-level permissions
Object-level permissions are access permissions assigned to every SysObject (and SysObject subtype)
in the repository. They are defined as entries in ACL objects. The entries in the ACL identify users
and groups and define their object-level permissions to the object with which the ACL is associated.
Each SysObject (or SysObject subtype) object has an associated ACL. For most SysObject subtypes,
the permissions control access to the object. For dm_folder objects, however, the permissions are not
used to control access unless folder security is enabled; when it is, they control specific sorts of
access, such as the ability to link a document to the folder.
ACLs, page 88, describes ACLs in more detail. Folder security, page 88, provides more information
about folder security. The associated Javadocs for the IDfSysObject.link and IDfSysObject.unlink
methods contain a description of privileges necessary to link or unlink an object.
There are two kinds of object-level permissions: base permissions and extended permissions.
• Change Location: In conjunction with the appropriate base object-level permissions, allows the
user to move an object from one folder to another.
• Execute Procedure: The user can run the external procedure associated with the object.
Default permissions
Object owners, because they have Delete permission on the objects they own by default, also have
Change Location and Execute Procedure permissions on those objects. By default, superusers have
Read permission and all extended permissions except Delete Object on any object.
Table permits
The table permits control access to the RDBMS tables represented by registered tables in the
repository. Table permits are only enforced when repository security is on. To access an RDBMS table
using DQL, you must have:
• At least Browse access for the dm_registered object representing the RDBMS table
• The appropriate table permit for the operation that you want to perform
Note: Superusers can access all RDBMS tables in the database using a SELECT statement regardless
of whether the table is registered or not.
There are five levels of table permits, described in the following table.
The permits are identified in the dm_registered object that represents the table, in the
owner_table_permit, group_table_permit, and world_table_permit properties.
The permits are not hierarchical. For example, assigning the permit to insert does not confer the
permit to update. To assign more than one permit, you add the integers representing the permits
you want to assign, and set the appropriate property to the total. For example, if you want to assign
both insert and update privileges as the group table permit, set the group_table_permit property to 6,
the sum of the integer values for the update and insert privileges.
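The arithmetic can be sketched in Java. The bit values below are assumptions consistent with the update + insert = 6 example above; confirm them against the table permit levels before relying on them.

```java
public class TablePermits {
    // Assumed integer values for the table permit levels; verify
    // against the permit table before using them.
    static final int NONE = 0;
    static final int SELECT = 1;
    static final int UPDATE = 2;
    static final int INSERT = 4;
    static final int DELETE = 8;

    // Combine permits. Because each value is a distinct power of two,
    // summing them is equivalent to a bitwise OR.
    static int combine(int... permits) {
        int total = 0;
        for (int p : permits) total |= p;
        return total;
    }

    public static void main(String[] args) {
        // Grant both insert and update as the group table permit:
        int groupTablePermit = combine(INSERT, UPDATE);
        System.out.println(groupTablePermit); // prints 6
    }
}
```

A permit value can later be tested with a bitwise AND, for example `(permit & UPDATE) != 0`.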
Folder security
Folder security is a supplemental level of repository security. When folder security is turned on, for
some operations the server checks and applies permissions defined in the ACL associated with the
folder in which an object is stored or on the primary folder of the object. These checks are in addition
to the standard object-level permission checks associated with the object ACL. In new repositories,
folder security is turned on by default.
Folder security does not prevent users from working with objects in a folder. It provides an extra
layer of security for operations that involve linking or unlinking, such as creating a new object,
moving an object, deleting an object, and copying an object.
Folder security is turned on and off at the repository level, using the folder_security property in the
docbase config object.
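You can check the current setting for a repository with a simple DQL query against the docbase config object:

```sql
SELECT folder_security FROM dm_docbase_config
```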
Documentum Server Administration and Configuration Guide contains complete information about folder
security, including a list of the extra checks it imposes.
ACLs
An Access Control List (ACL) is the mechanism that Documentum Server uses to impose object-level
permissions on SysObjects. An ACL has one or more entries that identify a user or group and the
object-level permissions accorded that user or group by the ACL. An ACL is also called a permission
set: a set of permissions that apply to an object.
Each SysObject has an ACL. The ACL assigned to a SysObject is used to control access to that object.
For folders, the assigned ACL serves additional functions. If folder security is enabled, the ACL
assigned to the folder sets the folder security permissions. If the default ACL for the Documentum
Server is configured as Folder, then newly created objects in the folder are assigned the folder ACL.
An ACL is represented in the repository as an object of type dm_acl. ACL entries are recorded in
repeating properties in the object. Each ACL is uniquely identified within the repository by its name
and domain. (The domain represents the owner of the ACL.) When an ACL is assigned to an object,
the object acl_name and acl_domain properties are set to the name and domain of the ACL.
After an ACL is assigned to an object, the ACL can be changed. You can modify the ACL itself or you
can remove it and assign a different ACL to the object.
ACLs are typically created and managed using Documentum Administrator. However, you can also
create and manage them through DFC or Document Query Language (DQL).
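For example, DQL can show which ACL is assigned to a document and then list that ACL's entries. The document name is hypothetical, and the dm_acl property names used here (object_name for the ACL name, owner_name for its domain, and the r_accessor_* repeating properties for the entries) are assumptions drawn from the standard object model; verify them against the object reference for your release.

```sql
-- Which ACL is assigned to this document?
SELECT acl_name, acl_domain FROM dm_document WHERE object_name = 'MyReport'

-- What entries does that ACL contain?
SELECT r_accessor_name, r_accessor_permit, r_accessor_xpermit
FROM dm_acl
WHERE object_name = '<acl_name>' AND owner_name = '<acl_domain>'
```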
ACL entries
The entries in the ACL determine which users and groups can access the object and the level of access
for each. There are several types of ACL entries:
• AccessPermit and ExtendedPermit
• AccessRestriction and ExtendedRestriction
• RequiredGroup and RequiredGroupSet
• ApplicationPermit and ApplicationRestriction
AccessPermit and ExtendedPermit entries grant the base and extended permissions. Creating,
modifying, or deleting AccessPermit and ExtendedPermit entries is supported by all Documentum
Servers.
The remaining entry types provide extended capabilities for defining access. For example, an
AccessRestriction entry restricts a user or group access to a specified level even if that user or group
is granted a higher level by another entry. You must have installed Documentum Server with a
Trusted Content Services license to create, modify, or delete any entry other than an AccessPermit or
ExtendedPermit entry.
Note: A Documentum Server enforces all ACL entries regardless of whether the server was installed
with a Trusted Content Services license or not. The TCS license only affects the ability to create,
modify, or delete entries.
Documentum Server Administration and Configuration Guide contains detailed descriptions of the type
of entries you can place in an ACL and instructions for creating ACLs. Assigning ACLs, page 132,
describes the options for assigning ACLs to objects.
Categories of ACLs
ACLs are either external or internal:
• External ACLs are created explicitly by users. The name of an external ACL is determined by the
user. External ACLs are managed by users, either the user who creates them or superusers.
• Internal ACLs are created by Documentum Server. Internal ACLs are created in a variety of
situations. For example, if a user creates a document and grants access to the document to HenryJ,
Documentum Server assigns an internal ACL to the document. (The internal ACL is derived from
the default ACL with the addition of the permission granted to HenryJ.) The names of internal
ACLs begin with dm_. Internal ACLs are managed by Documentum Server.
The external and internal ACLs are further characterized as public or private ACLs:
• Public ACLs are available for use by any user in the repository. Public ACLs created by the
repository owner are called system ACLs. System ACLs can only be managed by the repository
owner. Other public ACLs can be managed by their owners or a user with Sysadmin or Superuser
privileges.
• Private ACLs are created and owned by a user other than the repository owner. However, unlike
public ACLs, private ACLs are available for use only by their owners, and only their owners
or a superuser can manage them.
Template ACLs
A template ACL is an ACL that can be used in many contexts. Template ACLs use aliases in place
of user or group names in the entries. The aliases are resolved when the ACL is assigned to an
object. A template ACL allows you to create one ACL that you can use in a variety of contexts and
applications and ensure that the permissions are given to the appropriate users and groups. Chapter
11, Aliases, provides information about aliases.
Auditing
Auditing is the process of recording the occurrence of system and application events in the
repository. Events are operations performed on objects in a repository or something that happens
in an application. System events are events that Documentum Server recognizes and can audit.
Application events are user-defined events. They are not recognized by Documentum Server and
must be audited by an application.
Documentum Server audits a large set of events by default. For example, all successful addESignature
events and failed attempts to execute addESignature events are audited. Similarly, all executions of
methods that register or unregister events for auditing are themselves audited.
You can also audit many other operations. For example, you can audit:
• All occurrences of an event on a particular object or object type
• All occurrences of a particular event, regardless of the object on which it occurs
• All workflow-related events
• All occurrences of a particular workflow event for all workflows started from a given process
definition
• All executions of a particular job
There are several methods in the IDfAuditTrailManager interface that can be used to request auditing.
For example, the registerEventForType method starts auditing a particular event for all objects of a
specified type. Typically, you must identify the event you want to audit and the target of the audit.
The event can be either a system event or an application (user-defined) event. The target can be a
particular object, all objects of a particular object type, or objects that satisfy a particular query.
The audit request is stored in the repository in registry objects. Each registry object represents one
audit request.
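For example, an application can inspect the existing audit registrations with DQL; the dmi_registry property names used here are assumptions drawn from the standard object model, so verify them against the object reference for your release:

```sql
SELECT r_object_id, registered_id, event
FROM dmi_registry
WHERE is_audittrail = TRUE
```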
Issuing an audit request for a system event initiates auditing for the event. If the event is an
application event, the application is responsible for checking the registry objects to determine
whether auditing is requested for the event and, if so, create the audit trail entry.
Users must have Config Audit privileges to issue an audit request.
The records of audited events are stored in the repository as entries in an audit trail. The entries
are objects of dm_audittrail, dm_audittrail_acl, or dm_audittrail_group. Each entry records the
information about one occurrence of an event. The information is specific to the event and can
include information about property values in the audited object.
Documentum Server Administration and Configuration Guide contains information about auditing,
including a list of those events that are audited by default, how to initiate auditing, and what
information is stored in an audit trail record.
Tracing
Tracing is a feature that logs information about operations that occur in Documentum Server and
DFC. The information that is logged depends on which tracing functionality is turned on.
Documentum Server and DFC support multiple tracing facilities. On Documentum Server, you can
turn on tracing for a variety of server features, such as LDAP operations, content-addressed storage
area operations, and operations on SysObjects. The jobs in the administration tool suite also generate
trace files for their operations.
DFC has a robust tracing facility that allows you to trace method operations and RPC calls. The
facility allows you to configure many options for the generated trace files. For example, you can trace
by user or thread, specify stack depth to be traced, and define the format of the trace file.
Documentum Server DQL Reference Guide contains reference information for the SET_OPTIONS and
MODIFY_TRACE administration methods. Documentum Server Administration and Configuration Guide
contains information about all the jobs in the administration tool suite.
An audit trail entry is created when the addDigitalSignature method is executed. Use this option if
you want to implement strict signature support in a client application.
Simple sign-offs are the least rigorous way to supply an electronic signature. Simple sign-offs are
implemented using the IDfPersistentObject.signoff method. This method authenticates a user signing
off a document and creates an audit trail entry for the dm_signoff event.
Electronic signatures
An electronic signature is a signature recorded in a formal signature page generated by Documentum
Server and stored as part of the content of the object. Electronic signatures are generated when an
application issues an IDfSysObject.addESignature method.
Electronic signatures are the most rigorous signature requirement that Documentum Server supports.
The electronic signature feature requires a Trusted Content Services license.
Overview of implementation
Electronic signatures are generated by Documentum Server when an application or user issues an
addESignature method. Signatures generated by addESignature are recorded in a formal signature
page and added to the content of the signed object. The method is audited automatically, and
the resulting audit trail entry is signed by Documentum Server. The auditing feature cannot be
turned off. If an object requires multiple signatures, before allowing the addition of a signature,
Documentum Server verifies the preceding signature. Documentum Server also authenticates the
user signing the object.
All the work of generating the signature page and handling the content is performed by Documentum
Server. The client application is only responsible for recognizing the signature event and issuing the
addESignature method. A typical sequence of operations in an application using the feature is:
1. A signature event occurs and is recognized by the application as a signature event.
A signature event is an event that requires an electronic signature on the object that participated
in the event. For example, a document check-in or lifecycle promotion might be a signature event.
2. In response, the application asks the user to enter a password and, optionally, choose or enter a
justification for the signature.
3. After the user enters a justification, the application can call the createAudit method to create an
audit trail entry for the event.
This step is optional, but auditing the event that triggered the signature is common.
4. The application calls addESignature to generate the electronic signature.
After addESignature is called, Documentum Server performs all the operations required to generate
the signature page, create the audit trail entries, and store the signature page in the repository with
the object. You can add multiple signatures to any particular version of a document. The maximum
number of allowed signatures on a document version is configurable.
Electronic signatures require a template signature page and a method (stored in a dm_method
object) to generate signature pages using the template. The Documentum system provides a default
signature page template and signature generation method that can be used on documents in PDF
format or documents that have a PDF rendition. You can customize the electronic signature support
in a variety of ways. For example, you can customize the default template signature page, create
your own template signature page, or provide a custom signature creation method for use with a
custom template.
The default electronic signature functionality requires no additional
configuration. The only requirement for using the default functionality is that documents to be signed
must be in PDF format or have a PDF rendition associated with their first primary content page.
The default signature page template is a PDF document generated from a Microsoft Word document.
Both the PDF template and the source Microsoft Word document are installed when Documentum
Server is installed. They are installed in %DM_HOME%\bin ($DM_HOME/bin). The PDF file is
named sigpage.pdf and the Microsoft Word file is named sigpage.doc.
In the repository, the Microsoft Word document that is the source of the PDF template is an object of
type dm_esign_template. It is named Default Signature Page Template and is stored in
Integration/Esignature/Templates
The PDF template document is stored as a rendition of the Microsoft Word document. The page
modifier for the PDF rendition is dm_sig_template.
The default template allows up to six signatures on each version of a document signed using that
template.
The default signature creation method is a Docbasic method named esign_pdf.ebs, stored in
%DM_HOME%\bin ($DM_HOME/bin). The method uses the PDF Fusion library to generate
signature pages. The PDF Fusion library and license are installed during Documentum Server
installation. The Fusion libraries are installed in %DM_HOME%\fusion ($DM_HOME/fusion).
The license is installed in the Microsoft Windows directory on Microsoft Windows hosts and in
$DOCUMENTUM/share/temp on Linux platforms.
The signature creation method uses the location object named SigManifest to locate the Fusion
library. The location object is created during repository configuration.
The signature creation method checks the number of signatures supported by the template page. If
the maximum number is not exceeded, the method generates a signature page and adds that page to
the content file stored in the temporary location by Documentum Server. The method does not read
the content from the repository or store the signed content in the repository.
If you are using the default signature creation method, the content to be signed must be in PDF
format. The content can be the first primary content page of the document or it can be a rendition of
the first content page.
When the method creates the signature page, it appends or prepends the signature page to the
PDF content. (Whether the signature page is added at the front or back of the content to be signed
is configurable.) After the method completes successfully, Documentum Server adds the content
to the document:
• If the signature is the first signature on that document version, the server replaces the original
PDF content with the signed content and appends the original PDF content to the document as a
rendition with the page modifier dm_sig_source.
• If the signature is a subsequent addition, the server simply replaces the previously signed PDF
content with the newly signed content.
Documentum Server automatically creates an audit trail entry each time an addESignature method is
successfully executed. The entry records information about the object being signed, including its
name, object ID, version label, and object type. The ID of the session in which it was signed is also
recorded. (This can be used in connection with the information in the dm_connect event for the
session to determine what machine was used when the object was signed.)
Documentum Server uses the generic string properties in the audit trail entry to record information
about the signature. The following table lists the use of those properties for a dm_addesignature
event.
Customizing signatures
If you are using the default electronic signature functionality, signing content in PDF format, you
can customize the signature page template. You can add information to the signature page, remove
information, or just change its look by changing the arrangement, size, and font of the elements on
the page. You can also change whether the signature creation method adds the signature page at
the front or back of the content to be signed.
If you want to embed a signature in content that is not in PDF format, you must use a custom
signature creation method. You can also create a custom signature page template for use by the
custom signature creation method, although using a template is not required.
Documentum Server Administration and Configuration Guide contains complete information about
customizing electronic signatures and tracing the use of electronic signatures.
Signature verification
Electronic signatures added by addESignature are verified by the verifyESignature method. The
method finds the audit trail entry that records the latest dm_addesignature event for the document
and performs the following checks:
• Calls the IDfAuditTrailManager.verifyAudit method to verify the Documentum Server signature
on the audit trail entry.
• Checks that the hash values of the source content and signed content stored in the audit trail entry
match those of the source and signed content in the repository.
• Checks that the signatures on the document are consecutively numbered.
Only the most recent signature is verified. If the most recent signature is valid, previous signatures
are guaranteed to be valid.
Digital signatures
Digital signatures are electronic signatures, in formats such as PKCS #7, XML Signature, or PDF
Signature, that are generated and managed by client applications. The client application is responsible
for ensuring that users provide the signature and for storing the signature in the repository.
The signature can be stored as primary content or renditions. For example, if the application is
implementing digital signatures based on Microsoft Office XP, the signatures are typically embedded
in the content files and the files are stored in the repository as primary content files for the documents.
If Adobe PDF signatures are used, the signature is also embedded in the content file, but the file is
typically stored as a rendition of the document, rather than primary content.
Documentum Server supports digital signatures with a property on SysObjects and the
addDigitalSignature method. The property, a Boolean named a_is_signed, indicates whether the
object is signed. The addDigitalSignature method generates an audit trail entry recording
the signing. The event name for the audit trail entry is dm_adddigsignature. The information in the
entry records who signed the document, when it was signed, and a reason for signing, if one was
provided.
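Because a_is_signed is an ordinary SysObject property, signed documents can be located with a simple DQL query:

```sql
SELECT r_object_id, object_name FROM dm_document WHERE a_is_signed = TRUE
```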
It is possible to require Documentum Server to sign the generated audit trail entries. Because the
addDigitalSignature method is audited by default, there is no explicit registry object for the event.
However, if you want Documentum Server to sign audit trail entries for dm_adddigsignature events,
you can issue an explicit method requesting auditing for the event.
Documentum Server Administration and Configuration Guide contains information about Documentum
Server signatures on audit trail entries. The associated Javadocs provide information about methods
to request auditing for the dm_adddigsignature event, in the IDfAuditTrailManager interface.
Simple sign-offs
Simple sign-offs authenticate the user signing off the object and record information about the sign-off
in an audit trail entry. A simple sign-off is useful in situations in which the sign-off requirement is
not rigorous. For example, you may want to use a simple sign-off when team members are required
to sign a proposal to indicate approval before the proposal is sent to upper management. Simple
sign-offs are the least rigorous way to satisfy a signature requirement.
Simple sign-offs are implemented using the IDfPersistentObject.signoff method. The method accepts a
user authentication name and password as arguments. When the method is executed, Documentum
Server calls a signature validation program to authenticate the user. If authentication succeeds,
Documentum Server generates an audit trail entry recording the sign-off. The entry records what was
signed, who signed it, and some information about the context of the signing. Using sign-off does not
generate an actual electronic signature. The audit trail entry is the only record of the sign-off.
You can use a simple sign-off on any SysObject or SysObject subtype. A user must have at least Read
permission on an object to perform a simple sign-off on the object.
You can customize a simple sign-off by creating a custom signature validation program.
Documentum Server Administration and Configuration Guide contains instructions for creating
a custom signature validation program. The associated Javadocs provide usage notes for
IDfPersistentObject.signoff.
Privileged DFC
Privileged DFC is the term used to refer to DFC instances that are recognized by Documentum
Servers as privileged to invoke escalated privileges or permissions for a particular operation. In some
circumstances, an application may need to perform an operation that requires higher permissions or
privileges than are accorded to the user running the application. In that case, a privileged
DFC can request the use of a privileged role to perform the operation. The operation is encapsulated
in a privileged module invoked by the DFC instance.
Supporting privileged DFC is a set of privileged groups, privileged roles, and the ability to define
type-based objects and simple modules as privileged modules, as follows:
• Privileged groups are groups whose members are automatically granted a particular permission
or privilege. You can add or remove users from these groups.
• Privileged roles are groups defined as role groups that can be used by DFC to give the DFC an
escalated permission or privilege required to execute a privileged module. Only DFC can add or
remove members in those groups.
• Privileged modules are modules that use one or more escalated permissions or privileges to
execute.
By default, each DFC is installed with the ability to request escalated privileges enabled. However, to
use the feature, the DFC must have a registration in the global registry. That registration information
must be defined in each repository in which the DFC will exercise those privileges.
Note: In some workstation environments, it may also be necessary to manually modify the Java
security policy files to use privileged DFC. Documentum Server Administration and Configuration Guide
contains more information.
You can disable the use of escalated privileges by a DFC instance. This is controlled by the
dfc.privilege.enable key in the dfc.properties file.
The dfc.name property in the dfc.properties file controls the name of the DFC instance.
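A dfc.properties fragment covering these two keys might look like the following; the instance name is illustrative:

```properties
# Name identifying this DFC instance
dfc.name=my_app_dfc
# Set to false to disable the use of escalated privileges by this instance
dfc.privilege.enable=false
```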
Documentum Server Administration and Configuration Guide contains procedures and instructions for
configuring privileged DFC.
is enabled in the dfc.properties file. Republishing the credentials causes the creation of another
client registration object and public key certificate object for the DFC instance. Deleting dfc.keystore
causes the DFC instance to register again, and the first registration becomes invalid. Re-creating the
DFC credentials also invalidates the existing client rights, and client rights objects must be created
again for each repository. Documentum Administrator User Guide contains information on creating
client rights objects.
If DFC finds its credentials, the DFC may or may not check to determine if its identity is established in
the global registry. Whether that check occurs is controlled by the dfc.verify_registration key in the
dfc.properties file. That key is false by default, which means that on subsequent initializations, DFC
does not check its identity in the global registry if the DFC finds its credentials.
A client rights object records the privileged roles that a DFC instance can invoke. It also records the
directory in which a copy of the instance public key certificate is located. Client rights objects are
created manually, using Documentum Administrator, after installing the DFC instance. A client rights
object must be created in each repository in which the DFC instance exercises those roles. Creating
the client rights object automatically creates the public key certificate object in the repository.
Client registration objects, client rights objects, and public key certificate objects in the global registry
and other repositories are persistent. Stopping the DFC instance does not remove those objects.
The objects must be removed manually if the DFC instance associated with them is removed or if
its identity changes.
If the client registration object for a DFC instance is removed from the global registry, you cannot
register that DFC as a privileged DFC in another repository. Existing registrations in repositories
continue to be valid, but you cannot register the DFC in a new repository.
If the client rights objects are deleted from a repository but the DFC instance is not removed, errors are
generated when the DFC attempts to exercise an escalated privilege or invoke a privileged module.
Digital shredding
Digital shredding is an optional feature available for file store storage areas if you have installed
Documentum Server with a Trusted Content Services license. Using the feature ensures that
content in shredding-enabled storage areas is removed from the storage area in a way that makes
recovery virtually impossible. When a user removes a document whose content is stored in a
shredding-enabled file store storage area, the orphan content object is immediately removed from the
repository and the content file is immediately shredded.
Digital shredding uses the capabilities of the underlying operating system to perform the shredding.
The shredding algorithm is in compliance with DOD 5220.22-M (NISPOM, National Security
Industrial Security Program Operating Manual), option d. This algorithm overwrites all addressable
locations with a character, then its complement, and then a random character.
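The three-pass overwrite can be sketched in a few lines. The following standalone Java class is an illustration of the algorithm only; Documentum Server performs the actual shredding through the capabilities of the underlying operating system, and the class and method names here are not Documentum APIs.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.SecureRandom;

// DOD 5220.22-M style shred: overwrite all addressable locations with a
// character, then its complement, then random data, and delete the file.
public class Shredder {
    public static void shred(Path file) {
        try {
            int size = (int) Files.size(file);
            overwrite(file, size, (byte) 0x55);      // pass 1: a character
            overwrite(file, size, (byte) ~0x55);     // pass 2: its complement
            byte[] random = new byte[size];
            new SecureRandom().nextBytes(random);    // pass 3: random data
            Files.write(file, random);
            Files.delete(file);                      // remove the shredded file
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    private static void overwrite(Path file, int size, byte value) throws IOException {
        byte[] pass = new byte[size];
        java.util.Arrays.fill(pass, value);
        Files.write(file, pass);
    }

    // Convenience demo: shred a temporary file and report whether it is gone.
    public static boolean demo() {
        try {
            Path p = Files.createTempFile("shred", ".dat");
            Files.write(p, "sensitive".getBytes());
            shred(p);
            return !Files.exists(p);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```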
Digital shredding is supported for standalone file store storage areas. You can also enable
shredding for file store storage areas that are the targets of linked store storage areas. Shredding
is not supported for distributed storage areas or their component storage areas, nor for blob, turbo,
and external storage areas.
Chapter 7
Content Management Services
Document objects
Documents have an important role in most enterprises. They are a repository for knowledge. Almost
every operation or procedure uses documents in some way. In the Documentum system, documents
are represented by dm_document objects, a subtype of dm_sysobject.
SysObjects are the supertype, directly or indirectly, of all object types in the hierarchy that can have
content. SysObject properties store information about the object version, the content file associated
with the object, security permissions on the object, and other important information.
The SysObject subtype most commonly associated with content is dm_document.
You can use a document object to represent an entire document or only a portion of a document. For
example, a document can contain text, graphics, or tables.
A document object can be either a simple document or a virtual document.
• simple document
A simple document is a document with one or more primary content files. Each primary content
file associated with a document is represented by a content object in the repository. All content
objects in a simple document have the same file format.
• virtual document
A virtual document is a container for other document objects, structured in an ordered hierarchy.
The documents contained in a virtual document hierarchy can be simple documents or other
virtual documents. A virtual document can have any number of component documents, nested to
any level.
Using virtual documents allows you to combine documents with a variety of formats into one
document. You can also use the same document in more than one parent document. For example,
you can place a graphic in a simple document and then add that document as a component to
multiple virtual documents.
Chapter 8, Virtual Documents, describes virtual documents.
Document content
Document content is the text, graphics, video clips, and so forth that make up the content of a
document. All content in a repository is represented by content objects. All content associated with a
document is either primary content or a rendition.
Content objects
A content object is the connection between a document object and the file that actually stores the
document content. A content object is an object of type dmr_content. Every content file in the
repository, whether in a repository storage area or external storage, has an associated content object.
The properties of a content object record important information about the file, such as the documents
to which the content file belongs, the format of the file, and the storage location of the file.
Documentum Server creates and manages content objects. The server automatically creates a content
object when you add a file to a document if that file is not already represented by a content object
in the repository. If the file already has a content object in the repository, the server updates the
parent_id property in the content object. The parent_id property records the object IDs of all
documents to which the content belongs.
Typically, there is only one content object for each content file in the repository. However, if you have
a Content Storage Services license, you can configure the use of content duplication checking and
prevention. This feature is used primarily to ensure that numerous copies of duplicate content, such
as an email attachment, are not saved into the storage area. Instead, one copy is saved and multiple
content objects are created, one for each recipient.
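The duplication-checking behavior described above can be modeled simply: identical files are stored once, while a separate content-object entry is created for each document that references the content. The following sketch keys stored files by a SHA-256 hash; the class and method names are illustrative, not Documentum APIs.

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One stored copy per distinct file; one content-object entry per reference.
public class ContentStore {
    private final Map<String, byte[]> files = new HashMap<>();       // hash -> stored file
    private final List<String[]> contentObjects = new ArrayList<>(); // {parentId, hash}

    public void addContent(String parentDocId, byte[] data) {
        String hash = sha256(data);
        files.putIfAbsent(hash, data);               // store the bytes only once
        contentObjects.add(new String[] { parentDocId, hash });
    }

    public int storedFileCount()    { return files.size(); }
    public int contentObjectCount() { return contentObjects.size(); }

    private static String sha256(byte[] data) {
        try {
            StringBuilder sb = new StringBuilder();
            for (byte b : MessageDigest.getInstance("SHA-256").digest(data))
                sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) { throw new IllegalStateException(e); }
    }
}
```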
Primary content
Primary content refers to the first content file added to a document; its format defines the
document's primary format. Any other content added in that same format is also called
primary content.
Each primary content file in a document has a page number. The page number is recorded in the
page attribute of the file's content object. This is a repeating attribute. If the content file is part of
multiple documents, the attribute has a value for each document. The file can be a different page in
each document.
Renditions
A rendition is a representation of a document that differs from the original document only in its
format or some aspect of the format. The first time you add a content file to a document, you
specify the content file format. This format represents the primary format of the document. You can
create renditions of that content using converters supported by Documentum Server or through
Documentum CTS Media, an optional product that handles rich media formats such as jpeg and
audio and video formats.
Page numbers are used to identify the primary content that is the source of a rendition.
Converters allow you to:
• Transform one file format to another file format.
• Transform one graphic image format to another graphic image format.
Some of the converters are supplied with Documentum Server, while others must be purchased
separately. You can use a converter that you have written, or one that is not on the current list of
supported converters.
When you ask for a rendition that uses one of the converters, Documentum Server saves and manages
the rendition automatically.
Documentum provides a suite of additional products that perform specific transformations. For
example, CTS Media creates two renditions each time a user creates and saves a document with
a rich media format:
• A thumbnail rendition
• A default rendition that is specific to the primary content format
Additionally, CTS Media supports the use of the TRANSCODE_CONTENT administration method to
request additional renditions.
A rendition format indicates what type of application can read or write the rendition. For example, if
the specified format is maker, the file can be read or written by Adobe FrameMaker, a desktop
publishing application.
A rendition format can be the same format as the primary content page with which the rendition is
associated. However, in such cases, you must assign a page modifier to the rendition, to distinguish it
from the primary content page file. You can also create multiple renditions in the same format for a
particular primary content page. Page modifiers are also used in that situation to distinguish among
the renditions. Page modifiers are user-defined strings, assigned when the rendition is added to
the primary content.
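The uniqueness rule above amounts to identifying each rendition by the combination of its primary-content page, its format, and an optional page modifier. A minimal sketch of that constraint, with illustrative names that are not Documentum APIs:

```java
import java.util.HashMap;
import java.util.Map;

// A rendition is identified by page number, format, and page modifier; two
// renditions with the same page and format must carry different modifiers.
public class RenditionSet {
    private final Map<String, String> renditions = new HashMap<>(); // key -> file

    public boolean add(int page, String format, String pageModifier, String file) {
        String key = page + "/" + format + "/" + pageModifier;
        if (renditions.containsKey(key)) return false; // would be ambiguous
        renditions.put(key, file);
        return true;
    }
}
```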
Documentum Server is installed with a wide range of formats. Installing Documentum CTS Media
provides an additional set of rich media formats. You can modify or delete the installed formats or
add new formats. Documentum Server Administration and Configuration Guide contains instructions on
obtaining a list of formats and how to modify or add a format.
Each time you add a content file to an object, Documentum Server records the content's format in a
set of properties in the content object for the file. This internal information includes:
• Resolution characteristics
• Encapsulation characteristics
• Transformation loss characteristics
This information, put together, gives a full format specification for the rendition. It describes the
format's screen resolution, any encoding the data has undergone, and the transformation path taken
to achieve that format.
• Supported conversions on Microsoft Windows platforms, page 106, describes the supported
format conversions on Windows platforms.
• Supported conversions on Linux platforms, page 107, describes the supported format conversions
on Linux platforms.
• Documentum Server DQL Reference Guide contains reference information for
TRANSCODE_CONTENT.
Generated renditions
• Automatic renditions
When an application requests a rendition, the application specifies the format of the rendition. If the
requested rendition exists in the repository, Documentum Server will deliver it to the application.
If there is no rendition in that format, but Documentum Server can create one, it will do so and
deliver the automatically generated rendition to the user.
For example, suppose you want to view a document whose content is in plain ASCII text.
However, you want to see the document with line breaks, for easier viewing. To do so, the
application issues a getFile and specifies that it wants the content file in crtext format. This format
uses carriage returns to end lines. Documentum Server will automatically generate the crtext
rendition of the content file and deliver that to the application.
The Documentum Server transformation engine always uses the best transformation path
available. When you specify a new format for a file, the server reads the descriptions of available
conversion programs from the convert.tbl file. The information in this table describes each
converter, the formats that it accepts, the formats that it can output, the transformation loss
expected, and the rendition characteristics that it affects. The server uses these descriptions to
decide the best transformation path between the current file format and the requested format.
However, note that the rendition that you create may differ in resolution or quality from the
original. For example, suppose you want to display a GIF file with a resolution of 300 pixels
per inch and 24-bits of color on a low-resolution (72 pixels per inch) black and white monitor.
Transforming the GIF file to display on the monitor results in a loss of resolution.
• User-generated renditions
At times you may want to use a rendition that cannot be generated by Documentum Server. In
such cases, you can create the file outside of Documentum and add it to the document using an
addRendition method in the IDfSysObject interface.
To remove a rendition, use a removeRendition method. You must have at least Write permission
on the document to remove a rendition of a document.
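The "best transformation path" selection described above can be sketched as a graph search: each converter description (as read from convert.tbl) is an edge from an input format to an output format, and the server picks the shortest chain of converters. The real server also weighs transformation loss and rendition characteristics; this simplified sketch, with illustrative names, ignores those refinements.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Breadth-first search over formats, where converters are the edges.
public class ConversionPlanner {
    private final Map<String, List<String>> edges = new HashMap<>();

    public void addConverter(String inFormat, String outFormat) {
        edges.computeIfAbsent(inFormat, k -> new ArrayList<>()).add(outFormat);
    }

    // Returns the number of conversion steps, or -1 if no path exists.
    public int shortestPath(String from, String to) {
        if (from.equals(to)) return 0;
        Deque<String> queue = new ArrayDeque<>();
        Map<String, Integer> dist = new HashMap<>();
        queue.add(from);
        dist.put(from, 0);
        while (!queue.isEmpty()) {
            String fmt = queue.remove();
            for (String next : edges.getOrDefault(fmt, List.of())) {
                if (dist.containsKey(next)) continue;
                dist.put(next, dist.get(fmt) + 1);
                if (next.equals(to)) return dist.get(next);
                queue.add(next);
            }
        }
        return -1;
    }
}
```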
Documentum Server supports conversions between the three types of ASCII text files. The following
table lists the acceptable ASCII text input formats and the obtainable output formats.
On Linux, Documentum Server supports format conversion by using the converters in the
$DM_HOME/convert directory. This directory contains the following subdirectories:
• filtrix
• pbmplus
• pdf2text
• psify
• scripts
• soundkit
• troff
Additionally, Documentum Server uses Linux utilities to perform conversions.
You can also purchase and install document converters. Documentum provides demonstration
versions of Filtrix converters, which transform structured documents from one word processing
format to another. The Filtrix converters are located in the $DM_HOME/convert/filtrix directory. To
make these converters fully operational, you must contact Blueberry Software, Inc., and purchase a
separate license.
You can also purchase and install Frame converters from Adobe Systems Inc. If you install the Frame
converters in the Documentum Server bin path, the converters are incorporated automatically when
you start the Documentum system. The server assumes that the conversion package is found in the
Linux bin path of the server account and that this account has the FMHOME environment variable set
to the FrameMaker home.
To transform images, the server uses the PBMPLUS package available in the public domain.
PBMPLUS is a toolkit that converts images from one format to another. This package has four parts:
• PBM - For bitmaps (1 bit per pixel)
• PGM - For gray-scale images
• PPM - For full-color images
• PNM - For content-independent manipulations on any of the other three formats and external
formats that have multiple types.
The parts are upwardly compatible. PGM reads both PBM and PGM and writes PGM. PPM reads
PBM, PGM, and PPM, and writes PPM. PNM reads all three and, in most cases, writes the same type
as it read. That is, if it reads PPM, it writes PPM. If PNM does convert a format to a higher format, it
issues a message to inform you of the conversion.
The PBMPLUS package is located in the $DM_HOME/convert/pbmplus directory. The source code
for these converters is found in the $DM_HOME/unsupported/pbmplus directory.
The following table lists the acceptable input formats for PBMPLUS.
Miscellaneous converters
Documentum Server also uses Linux utilities to provide some miscellaneous conversion capabilities.
These utilities include tools for converting to and from Windows DOS format, for converting text into
PostScript, and for converting troff and man pages into text. They also include tools for compressing
and encoding files.
The following table lists the acceptable input formats for Linux conversion utilities.
The following table lists the acceptable output formats for Linux conversion utilities.
A rendition can be connected to its source document through a content object or a relation object.
Renditions created by Documentum Server or AutoRenderPro™ are always connected through a
content object. For these renditions, the rendition property in the content object is set to indicate that
the content file represented by the content object is a rendition. The page property in the content
object identifies the primary content page with which the rendition is associated.
Renditions created by the media server can be connected to their source either through a content
object or using a relation object. The object used depends on how the source content file is
transformed. If the rendition is connected using a relation object, the rendition is stored in the
repository as a document whose content is the rendition content file. The document is connected to
its source through the relation object.
Translations
Documentum Server contains support for managing translations of original documents using
relationships.
Managing translations, page 137, has more information about setting up translation relationships.
Versioning
Documentum Server provides comprehensive versioning services for all SysObjects except folders,
cabinets, and their subtypes, which cannot be versioned.
Versioning is an automated process that creates a historical record of a document. Each time you
check in or branch a document or other SysObject, Documentum Server creates a new version of the
object without overwriting the previous version. All the versions of a particular document are stored
in a virtual hierarchy called a version tree. Each version on the tree has a numeric version label and,
optionally, one or more symbolic version labels.
Version labels
Version labels are recorded in the r_version_label property defined for the dm_sysobject object type.
This is a repeating property. The first index position (r_version_label[0]) is reserved for an object
numeric version label. The remaining positions are used for storing symbolic labels.
Version labels are used to uniquely identify a version within a version tree. There are several
kinds of labels.
• Numeric version labels
The numeric version label is a number that uniquely identifies the version within the version
tree. The numeric version label is generally assigned by the server and is always stored in the
first position of the r_version_label attribute (r_version_label[0]). By default, the first time you
save an object, the server sets the numeric version label to 1.0. Each time you check out the object
and check it back in, the server creates a new version of the object and increments the numeric
version label (1.1, 1.2, 1.3, and so forth). The older versions of the object are not overwritten. If you
want to jump the version level up to 2.0 (or 3.0 or 4.0), you must do so explicitly while checking
in or saving the document.
Note: If you set the numeric version label manually the first time you check in an object, you can
set it to any number you wish, in the format n.n, where n is zero or any integer value.
• Symbolic version labels
A symbolic version label is either system- or user-defined. Using symbolic version labels lets you
provide labels that are meaningful to applications and the work environment.
Symbolic labels are stored starting in the second position (r_version_label[1]) in the r_version_label
property. To define a symbolic label, define it in the argument list when you check in or save
the document.
An alternative way to define a symbolic label is to use an IDfSysObject.mark method. A mark
method assigns one or more symbolic labels to any version of a document. For example, you can
use a mark method, in conjunction with an unmark method, to move a symbolic label from
one document version to another.
A document can have any number of symbolic version labels. Symbolic labels are case sensitive
and must be unique within a version tree.
• The CURRENT label
The symbolic label CURRENT is the only symbolic label that the server can assign to a document
automatically. When you check in a document, the server assigns CURRENT to the new version,
unless you specify a label. If you specify a label (either symbolic or numeric), then you must also
explicitly assign the label CURRENT to the document if you want the new version to carry the
CURRENT label. For example, the following checkin call assigns the labels inprint and CURRENT
to the new version of the document being checked in:
IDfId newSysObjId = sysObj.checkin(false, "CURRENT,inprint");
If you remove a version that carries the CURRENT label, the server automatically reassigns
the label to the parent of the removed version.
Because both numeric and symbolic version labels are used to access a version of a document,
Documentum Server ensures that the labels are unique across all versions of the document. The
server enforces unique numeric version labels by always generating an incremental and unique
sequence number for the labels.
Documentum Server also enforces unique symbolic labels. If a symbolic version label specified with a
checkin, save, or mark method matches a symbolic label already assigned to another version of the
same object, then the existing label is removed and the label is applied to the version indicated by
the checkin, save, or mark method.
Note: Symbolic labels are case sensitive. Two symbolic labels are not considered the same if their cases
differ, even if the word is the same. For example, the labels working and Working are not the same.
Version trees
A version tree refers to an original document and all of its versions. The tree begins with the original
object and contains all versions of the object derived from the original.
To identify which version tree a document belongs to, the server uses the document i_chronicle_id
property value. This property contains the object ID of the original version of the document, the root
of the version tree. Each time you create a new version, the server copies the i_chronicle_id value
to the new document object. If a document is the original object, the values of r_object_id and
i_chronicle_id are the same.
To identify the place of a document on a version tree, the server uses the document numeric version
label.
Branching
A version tree is often a linear sequence of versions arising from one document. However, you can
also create branches. The figure below shows a version tree that contains branches.
Figure 6. Branching
The numeric version labels on versions in branches always have two more digits than the version at
the origin of the branch. For example, looking at the preceding figure, version 1.3 is the origin of two
branches. These branches begin with the numeric version labels 1.3.1.0 and 1.3.2.0. If a branch off
version 1.3.1.2 were created, the number of its first version would be 1.3.1.2.1.0.
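The numbering rules above are mechanical: a checkin increments the last component of the numeric label, and the nth branch off a version appends ".n.0" (two more digits) to the label at the origin. A sketch of that arithmetic, with illustrative helper names:

```java
// Numeric version-label arithmetic for checkins and branches.
public class VersionNumbers {
    // 1.2 -> 1.3 ; 1.3.1.0 -> 1.3.1.1
    public static String nextCheckinLabel(String label) {
        int dot = label.lastIndexOf('.');
        int last = Integer.parseInt(label.substring(dot + 1));
        return label.substring(0, dot + 1) + (last + 1);
    }

    // nth branch off 1.3 -> 1.3.n.0 (two more digits than the origin)
    public static String branchLabel(String origin, int branchNumber) {
        return origin + "." + branchNumber + ".0";
    }
}
```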
Branching takes place automatically when you check out and then check back in an older version of a
document because the subsequent linear versions of the document already exist and the server cannot
overwrite a previously existing version. You can also create a branch by using the IDfSysObject.branch
method instead of the checkout method when you get the document from the repository.
When you use a branch method, the server copies the specified document and gives the copy a
branched version number. The method returns the IDfId object representing the new version. The
parent of the new branch is marked immutable (unchangeable).
After you branch a document version, you can make changes to it and then check it in or save it. If
you use a checkin method, you create a subsequent version of your branched document. If you use a
save method, you overwrite the version created by the branch method.
A branch method is particularly helpful if you want to check out a locked document.
Removing versions
Documentum Server provides two ways to remove a version of a document. If you want to remove
only one version, use an IDfPersistentObject.destroy method. If you want to remove more than one
version, use an IDfSysObject.prune method.
With a prune method, you can prune an entire version tree or only a portion of the tree. By default,
prune removes any version that does not belong to a virtual document and does not have a symbolic
label.
To prune an entire version tree, identify the first version of the object in the method arguments. (The
object ID of the first version of an object is found in the i_chronicle_id property of each subsequent
version.) Query this property if you need to obtain the object ID of the first version of an object.
To prune only part of the version tree, specify the object ID of the version at the beginning of the
portion you want to prune. For example, to prune the entire tree, specify the object ID for version 1.0.
To prune only version 1.3 and its branches, specify the object ID for version 1.3.
You can also use an optional argument to direct the method to remove versions that have symbolic
labels. If the operation removes the version that carries the symbolic label CURRENT, the label is
automatically reassigned to the parent of the removed version.
When you prune, the system does not renumber the versions that remain on the tree. The system
simply sets the i_antecedent_id property of any remaining version to the appropriate parent.
For example, look at the following figure. Suppose the version tree shown on the left is pruned,
beginning the pruning with version 1.2 and that versions with symbolic labels are not removed.
The result of this operation is shown on the right. Notice that the remaining versions have not
been renumbered.
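The two observable effects of pruning, that survivors keep their original labels and that each survivor's i_antecedent_id is reattached to its nearest surviving ancestor, can be shown with a toy model. Virtual-document membership and other refinements are ignored, and all names are illustrative, not the Documentum object model.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy prune: versions without symbolic labels are removed; survivors keep
// their numeric labels and are reattached to surviving ancestors.
public class VersionTree {
    static class Version {
        String label;           // numeric version label, never renumbered
        String antecedent;      // label of parent version, or null
        Set<String> symbolic = new HashSet<>();
        Version(String label, String antecedent) { this.label = label; this.antecedent = antecedent; }
    }

    final Map<String, Version> versions = new HashMap<>();

    void add(String label, String antecedent, String... symbolicLabels) {
        Version v = new Version(label, antecedent);
        for (String s : symbolicLabels) v.symbolic.add(s);
        versions.put(label, v);
    }

    void prune() {
        Map<String, String> parentOf = new HashMap<>();
        for (Version v : versions.values()) parentOf.put(v.label, v.antecedent);
        versions.values().removeIf(v -> v.symbolic.isEmpty());
        for (Version v : versions.values()) {
            String a = v.antecedent;
            while (a != null && !versions.containsKey(a)) a = parentOf.get(a); // climb to a survivor
            v.antecedent = a;
        }
    }
}
```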
Changeable versions
You can modify the most recent version on any branch of a version tree. For instance, in Figure 6,
page 112, you can modify the following versions:
• 1.3
• 1.3.1.2
• 1.3.2.1
• 1.1.1.1
The other versions are immutable. However, you can create new, branched versions of immutable
versions.
Immutability
Immutability is a characteristic that defines an object as unchangeable. An object is marked
immutable if one of the following occurs:
• The object is versioned or branched.
• An IDfSysObject.freeze method is executed against the object.
• The object is associated with a retention policy that designates controlled documents as immutable.
In previous releases, only documents could be marked immutable. Starting with Documentum 7.0, you
can also apply immutability rules to folders. To enable this feature in a repository, append a
key-value pair (retainer_strategy_immutability, 1) to the r_module_name and r_module_mode
properties, respectively, of the dm_docbase_config object. The key (retainer_strategy_immutability)
and its value (1) must be stored at the same index in the two properties. In the following example,
both properties use the index [7] to hold the key-value pair.
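The pair might be stored as follows (the index [7] is arbitrary; what matters is that the key and the value occupy the same index in both repeating properties):

```
r_module_name[7] = retainer_strategy_immutability
r_module_mode[7] = 1
```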
Database-level locking
Database-level locking places a physical lock on an object in the RDBMS tables. Access to the object is
denied to all other users or database connections.
Database locking is only available in an explicit transaction—a transaction opened with an explicit
method or statement issued by a user or application. For example, the DQL BEGINTRAN statement
starts an explicit transaction. The database lock is released when the explicit transaction is committed
or aborted.
A system administrator or superuser can lock any object with a database-level lock. Other users must
have at least Write permission on an object to place a database lock on the object. Database locks are
set using the IDfPersistentObject.lock method.
Database locks provide a way to ensure that deadlock does not occur in explicit transactions and
that save operations do not fail due to version mismatch errors.
If you use database locks, using repository locks is not required unless you want to version an object.
If you do want to version a modified object, you must place a repository-level lock on the object also.
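The transactional scope of database locks can be modeled in a few lines: a lock can be requested only inside an explicit transaction, and all locks are released when that transaction commits or aborts. The names below are illustrative, not the DFC API.

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of database-level locking semantics.
public class DbLockSession {
    private boolean inTransaction = false;
    private final Set<String> locked = new HashSet<>();

    public void beginTran() { inTransaction = true; }
    public void commit()    { inTransaction = false; locked.clear(); }
    public void abort()     { inTransaction = false; locked.clear(); }

    public void lock(String objectId) {
        if (!inTransaction)
            throw new IllegalStateException("database locks require an explicit transaction");
        locked.add(objectId);
    }

    public boolean isLocked(String objectId) { return locked.contains(objectId); }
}
```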
Repository-level locking
Repository-level locking occurs when a user or application checks out a document or object. When a
checkout occurs, Documentum Server sets the object r_lock_owner, r_lock_date, and r_lock_machine
properties. Until the lock owner releases the object, the server denies access to any user other than
the owner.
Use repository-level locking in conjunction with database-level locking in explicit transactions if you
want to version an object. If you are not using an explicit transaction, use repository-level locking
whenever you want to ensure that your changes can be saved.
To use a checkout method, you must have at least Version permission for the object or have superuser
privileges.
Repository locks are released by check-in methods (IDfSysObject.checkin or IDfSysObject.checkinEx).
A check-in method creates a new version of the object, removes the lock on the old version, and gives
you the option to place a lock on the new version.
If you use a save method to save your changes, you can choose to keep or relinquish the repository
lock on the object. Save methods, which overwrite the current version of an object with the changes
you made, have an argument that allows you to direct the server to hold the repository lock.
A cancelCheckOut method also removes repository locks. This method cancels a checkout. Any
changes you made to the document are not saved to the repository.
Optimistic locking
Optimistic locking occurs when you use a fetch method to access a document or object. It is called
optimistic because it does not actually place a lock on the object. Instead, it relies on version stamp
checking when you issue the save to ensure that data integrity is not lost. If you fetch an object and
change it, there is no guarantee your changes will be saved.
When you fetch an object, the server notes the value in the object i_vstamp attribute. This value
indicates the number of committed transactions that have modified the object. When you are finished
working and save the object, the server checks the current value of the object i_vstamp property
against the value that it noted when you fetched the object. If someone else fetched (or checked out)
and saved the object while you were working, the two values will not match and the server does not
allow you to save the object.
Additionally, you cannot save a fetched object if someone else checks out the object while you are
working on it. The checkout places a repository lock on the object.
For these reasons, optimistic locking is best used when:
• There are a small number of users on the system, creating little or no contention for desired objects.
• There are only a small number of noncontent-related changes to be made to the object.
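The i_vstamp check described above can be sketched as a small store: a fetch records the object's i_vstamp, and a save succeeds only if the stored value still matches the one noted at fetch time. The names are illustrative, not the DFC API.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of optimistic locking: no lock is taken at fetch time; the save
// is validated against the i_vstamp noted when the object was fetched.
public class OptimisticStore {
    static class Snapshot {
        final String id;
        final int vstampAtFetch;
        String data;
        Snapshot(String id, int vstamp, String data) {
            this.id = id; this.vstampAtFetch = vstamp; this.data = data;
        }
    }

    private final Map<String, Integer> vstamps = new HashMap<>();
    private final Map<String, String> contents = new HashMap<>();

    public void create(String id, String data) { vstamps.put(id, 0); contents.put(id, data); }

    public Snapshot fetch(String id) {          // no lock is taken here
        return new Snapshot(id, vstamps.get(id), contents.get(id));
    }

    public boolean save(Snapshot s) {
        if (vstamps.get(s.id) != s.vstampAtFetch) return false; // someone saved first
        contents.put(s.id, s.data);
        vstamps.put(s.id, s.vstampAtFetch + 1); // committed change bumps i_vstamp
        return true;
    }
}
```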
Object-level permissions, page 85, introduces the object-level permissions.
User privileges, page 84, introduces user privileges.
Using retention policies requires a Retention Policy Services license. If Documentum Server is
installed with that license, you can define and apply retention policies through Retention Policy
Services Administrator (an administration tool that is similar to, but separate from, Documentum
Administrator). Retention policies can be applied to documents in any storage area type.
Using retention policies is the recommended way to manage document retention.
• Content-addressed storage area retention periods
If you are using content-addressed storage areas, you can configure the storage area to enforce
a retention period on all content files stored in that storage area. The period is either explicitly
specified by the user when saving the associated document or applied as a default by the Centera
host system.
Retention policies
A retention policy defines how long an object must be kept in the repository. The retention period
can be defined as an interval or a date. For example, a policy might specify a retention interval of
five years. If so, then any object to which the policy is applied is held for five years from the date
on which the policy is applied. If a date is set as the retention period, then any object to which the
policy is applied is held until the specified date.
A retention policy is defined as either a fixed or conditional policy. If the retention policy is a fixed
policy, the defined retention period is applied to the object when the policy is attached to the object.
For example, suppose a fixed retention policy defines a retention period of five years. If you attach
that policy to an object, the object is held in the repository for five years from the date on which
the policy was applied.
If the retention policy is a conditional policy, the retention period is not applied to the object until
a specified event occurs. Until that time, the object is held under an infinite retention (that is, the object is
retained indefinitely). After the event occurs, the retention period defined in the policy is applied
to the object. For example, suppose a conditional retention policy requires employment records to
be held for 10 years after an employee leaves a company. This conditional policy is attached to all
employment records. The records of any employee are retained indefinitely until the employee leaves
the company. At that time, the conditional policy takes effect and the employee records are marked
for retention for 10 years from the date of termination.
You can apply multiple retention policies to an object. In general, the policies can be applied at
any time to the object.
The date on which an object's retention expires is recorded in the object's i_retain_until property. However, if there
are conditional retention policies attached to the object, the value in that property is null until the
condition is triggered. If there are multiple conditional retention policies attached to the object, the
property is updated as each condition is triggered, provided the triggered policy's retention period is further in
the future than the current value of i_retain_until. However, Documentum Server ignores the value,
considering the object under infinite retention, until all conditions are triggered.
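The interaction between multiple conditional policies and the i_retain_until property can be sketched as a small model. This is illustrative Python only, not server or DFC code; the function name and data shapes are invented for the example:

```python
from datetime import date

def effective_retain_until(policies):
    """Model of i_retain_until with conditional retention policies.

    Each policy is a (triggered, expiry_date) pair. The property tracks
    the furthest expiry among triggered policies, but the server treats
    the object as under infinite retention until every condition fires.
    """
    triggered_dates = [d for fired, d in policies if fired]
    i_retain_until = max(triggered_dates) if triggered_dates else None
    # Until all conditions fire, the server ignores the property value.
    under_infinite_retention = not all(fired for fired, _ in policies)
    return i_retain_until, under_infinite_retention

# Two conditional policies: one triggered, one still pending.
value, infinite = effective_retain_until(
    [(True, date(2030, 1, 1)), (False, date(2026, 6, 30))]
)
```

Note that the property can already hold a date (the furthest triggered expiry) while the object is still held indefinitely, because one condition has not yet been triggered.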
A policy can be created for a single object, a virtual document, or a container such as a folder. If the
policy is created for a container, all the objects in the container are under the control of the policy.
An object can be assigned to a retention policy by any user with Read permission on the object or any
user who is a member of either the dm_retention_managers group or the dm_retention_users group.
These groups are created when Documentum Server is installed. They have no default members.
Policies apply only to the specific version of the document or object to which they are applied. If the
document is versioned or copied, the new versions or copies are not controlled by the policy unless
the policy is explicitly applied to them. Similarly, if a document under the control of a retention
policy is replicated, the replica is not controlled by the policy. Replicas may not be associated with a
retention policy.
Adding content
When users create a SysObject using a Documentum client, adding content is a seamless operation.
Creating a new document using Web Publisher or Webtop typically invokes an editor that allows
users to create the content for the document as part of creating the object. If you are creating the object
using an application or DQL, you must create the content before creating the object and then add the
content to the object. You can add the content before or after you save the object.
Content can be a file or a block of data in memory. The method used to add the content to the object
depends on whether the content is a file or data block.
The first content file added to an object determines the primary format for the object. The format is set
and recorded in the a_content_type property of the object. Thereafter, all content added to the object
as primary content must have the same format as that first primary content file.
Note: If you discover that the a_content_type property is set incorrectly for an object, it is not
necessary to re-add the content. You can check out the object, reset the property, and save (or check
in) the object.
After you create content, you can add more content by appending a new file to the end of the object's
content list, or you can insert the file into the list.
The content can be a file or a block of data, but it must reside on the same machine as the client
application.
Renditions are typically copies of the primary content in a different format. You can add as many
renditions of primary content as needed.
You cannot use DQL to add a file created on a Macintosh machine to an object. You must use a DFC
method. Older Macintosh-created files have two parts: a data fork (the actual text of the file) and a
resource fork. The DFC, in the IDfSysObject interface, includes methods that allow you to specify
both the content file and its resource fork when adding content to a document.
Storing content
Documentum supports a variety of storage area options for storing content files. The files can be
stored in a file system, in content-addressable storage, on external storage devices, or even within
the RDBMS, as metadata. For the majority of documents, the storage location of their content files is
typically determined by site administration policies and rules. These rules are enforced by using
content assignment policies or by the default storage algorithm. End users and applications create
documents and save or import them into the repository without concern for where they are stored.
Documents that are exceptions to these rules can be assigned to a specific storage area on a case-by-case
basis as they are saved or imported into the repository.
This section provides an overview of the ways in which the storage location for a content file is
determined.
The default storage algorithm uses values in a document's associated object, format object, or type
definition to determine where to assign the content for storage.
You can override a storage policy or the default storage algorithm by explicitly setting the
a_storage_type attribute for an object before you save the object to the repository.
• content_attr_data_type
This identifies the metadata field that contains the retention period value.
When setContentAttrs executes, the metadata name and value pairs are stored first in the content
object properties. Then, the plug-in library is called to copy them from the content object to the
storage system metadata fields. Only those fields that are identified in both content_attr_name in the
content object and in either a_content_attr_name or a_retention_attr_name in the storage object are
copied to the storage area.
If a_retention_attr_required is set to T (TRUE) in the ca store object, the user or application must
specify a retention period for the content when saving the content. That is accomplished by including
the metadata field identified in the a_retention_attr_name property of the storage object in the list of
name and value pairs when setting the content properties.
If a_retention_attr_required is set to F (FALSE), then the content is saved using the default retention
period, if one is defined for the storage area. However, the user or application can overwrite the
default by including the metadata field identified in the a_retention_attr_name property of the
storage object when setting the content properties.
The value for the metadata field identified in a_retention_attr_name can be a date, a number, or a
string. For example, suppose the field name is "retain_date" and content must be retained in storage
until January 1, 2016. The setContentAttrs parameter argument would include the following name
and value pair:
'retain_date=DATE(01/01/2016)'
You can specify the date value using any valid input format that does not require a pattern
specification. Do not enclose the date value in single quotes.
To specify a number as the value, use the following format:
'retain_date=FLOAT(number_of_seconds)'
For example, the following sets the retention period to 1 day (24 hours):
'retain_date=FLOAT(86400)'
The string value must be numeric characters that Documentum Server can interpret as a number of
seconds. If you include characters that cannot be translated to a number of seconds, Documentum
Server silently sets the retention period to 0 and does not report an error.
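The silent fallback for unparseable string values described above can be modeled as follows. This is an illustrative sketch, not actual server code; the function name is invented:

```python
def retention_seconds(value: str) -> float:
    """Model of how a string retention value is interpreted.

    The string must consist of characters interpretable as a number of
    seconds; anything else silently yields a retention period of 0,
    with no error reported.
    """
    try:
        return float(value)
    except ValueError:
        return 0.0  # no error is reported

# A 24-hour retention expressed as seconds.
one_day = retention_seconds("86400")
```

Because the failure is silent, an application passing a malformed value may believe it set a retention period when it did not, so validating the value before calling setContentAttrs is a sensible precaution.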
When using administration methods to set the metadata, use a SET_CONTENT_ATTRS to set the
content object attributes and a PUSH_CONTENT_ATTRS to copy the metadata to the storage system.
The setContentAttrs method must be executed after the content is added to the SysObject and before the
object is saved to the repository. SET_CONTENT_ATTRS and PUSH_CONTENT_ATTRS must be executed
after the object is saved to the repository.
Each object of type SysObject or SysObject subtype has one ACL that controls access to that object.
The server automatically assigns a default ACL to a new SysObject if you do not explicitly assign an
ACL to the object when you create it. If a new object is stored in a room (a secure area in a repository)
and is governed by that room, the ACL assigned to the object is the default ACL for that room.
The ACL associated with an object is identified by two properties of the SysObject: acl_name and
acl_domain. The acl_name is the name of the ACL and acl_domain records the owner of the ACL.
• Renditions, page 104, contains information about creating and adding renditions.
• Assigning ACLs, page 132, contains information about setting permissions for a SysObject.
• Documentum Server Administration and Configuration Guide contains information about:
— The implementation and use of the options for determining where content is stored
— The behavior and implementation of content assignment policies and creating them
— How the default storage algorithm behaves
— Configuring a storage area to require a retention period for content stored in that area
When you add a value, you can append it to the end of the values in the repeating property or you
can replace an existing value. If you remove a value, all the values at higher index positions within
the property are adjusted to eliminate the space left by the deleted value. For example, suppose
a keywords property has 4 values:
keywords[0]=engineering
keywords[1]=productX
keywords[2]=metal
keywords[3]=piping
If you removed productX, the values for metal and piping are moved up and the keywords property
now contains the following:
keywords[0]=engineering
keywords[1]=metal
keywords[2]=piping
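The index adjustment described above behaves like removing an element from an ordered list, which can be sketched as follows (an illustrative model, not server code):

```python
def remove_repeating_value(values, value):
    """Model of removal from a repeating property: values at higher
    index positions shift down to close the gap left by the deleted
    value, so no empty slot remains."""
    values = list(values)       # work on a copy
    values.remove(value)        # removes the first occurrence
    return values

keywords = ["engineering", "productX", "metal", "piping"]
# Removing "productX" shifts "metal" and "piping" up one position.
remaining = remove_repeating_value(keywords, "productX")
```

Because all higher-indexed values shift, any code that caches index positions into a repeating property must re-read them after a removal.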
The time it takes the server to append or insert a value for a repeating property increases in direct
proportion to the number of values in the property. Consequently, if you want to define a repeating
property for a type and you expect that property to hold hundreds or thousands of values, it is
recommended that you create an RDBMS table to hold the values instead and then register the table.
When you query the type, you can issue a SELECT statement that joins the type and the table.
Adding content
There are several methods in the IDfSysObject interface for adding content.
A document can have multiple primary content files, but all the files must have the same format.
When you add an additional primary content file, you specify the file's page number, rather than
its format.
The page number must be the next number in the object's sequence of page numbers. Page numbers
begin with zero and increment by one. For example, if a document has three primary content files,
they are numbered 0, 1, and 2. If you add another primary content file, you must assign it page
number 3.
If you fail to include a page number, the server assumes the default page number, which is 0. Instead
of adding the file to the existing content list, it replaces the content file previously in the 0 position.
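The page-numbering rule, including the replace-page-0 pitfall when the page number is omitted, can be sketched as a small model (illustrative Python only; function names are invented):

```python
def next_page_number(page_count: int) -> int:
    """Page numbers begin at zero and increment by one, so the next
    valid page number is simply the current count of content files."""
    return page_count

def add_primary_content(pages, content, page=0):
    """Model of adding primary content. Omitting the page number
    defaults to page 0, which REPLACES the existing page 0 rather
    than appending to the content list."""
    pages = list(pages)
    if page == len(pages):
        pages.append(content)      # next number in sequence: append
    elif 0 <= page < len(pages):
        pages[page] = content      # existing number: replace that page
    else:
        raise ValueError("page number must be the next in sequence")
    return pages
```

A document with three files (pages 0, 1, 2) accepts a new file at page 3; calling the model without a page number overwrites page 0, mirroring the default-page behavior described above.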
To replace a primary content file, use an insertFile or insertContent method; alternatively, you can
use setFileEx or setContentEx. The new file must have the same format as the other primary
content files in the object.
Whichever method you use, you must identify the page number of the file you want to replace in
the method call. For example, suppose you want to replace the current table of contents file in a
document referenced as mySysObject and the current table of contents file is page number 2. The
following call replaces that file in the object "mySysObject":
mySysObject.insertFile("toc_new",2)
Use checkin or checkinEx to create a new version of an object. You must have at least Version
permission for the object. The methods work only on checked-out documents.
The checkinEx method is specifically for use in applications. It has four arguments that an application
can use for its specific needs. Refer to the Javadocs for details.
Both methods return the object ID of the new version.
Use a save or saveLock method when you want to overwrite the version that you checked out or
fetched. To use either, you must have at least Write permission on the object. A save method works
on either checked-out or fetched objects. A saveLock method works only on checked-out objects.
If the document has been signed using addESignature, using save to overwrite the signed version
invalidates the signatures and prohibits the addition of signatures on future versions.
• Attributes that remain changeable, page 116, contains a list of the changeable properties in
immutable objects.
• Application-level control of SysObjects, page 83, describes application-level control of SysObjects.
• Concurrent access control, page 117, describes the types of locks and locking strategies in detail.
• Documentum Server DQL Reference Guide contains information about the CREATE OBJECT and
UPDATE OBJECT statements.
Managing permissions
Access permissions for an object of type SysObject or its subtypes are controlled by the ACL
associated with the object. Each object has one associated ACL. An ACL is assigned to each SysObject
when the SysObject is created. That ACL can be modified or replaced as needed, as the object moves
through its lifecycle.
An object's primary folder is the folder in which the object is first stored when it is created. If
the object was placed directly in a cabinet, the server uses the ACL associated with the cabinet
as the folder default.
• The ACL associated with the object creator
Every user object has an ACL. It is not used to provide security for the user but only as a potential
default ACL for any object created by the user.
• The ACL associated with the object type
Every object type has an ACL associated with its type definition. You can use that ACL as a
default ACL for any object of the type.
In a newly configured repository, the default_acl property is set to the value identifying the user ACL
as the default ACL. You can change the setting through Documentum Administrator.
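The choice among the candidate default ACLs described above can be sketched as a lookup keyed by the default_acl setting. This is an illustrative model, not repository code; the setting is represented by a descriptive label rather than the property's actual stored value:

```python
def default_acl(default_acl_setting, folder_acl, creator_acl, type_acl):
    """Model of default-ACL selection: the default_acl property of the
    server configuration names which candidate ACL the server assigns
    to a new SysObject when none is assigned explicitly."""
    candidates = {
        "folder": folder_acl,   # ACL of the object's primary folder or cabinet
        "user": creator_acl,    # ACL carried by the creator's user object
        "type": type_acl,       # ACL on the object-type definition
    }
    return candidates[default_acl_setting]

# A newly configured repository designates the user ACL as the default.
chosen = default_acl("user", "folder_acl", "creator_acl", "type_acl")
```

Changing the setting (through Documentum Administrator) switches which candidate every subsequent new object receives by default.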
Template ACLs
A template ACL is identified by a value of 1 in the acl_class property of its dm_acl object. A template
ACL typically uses aliases in place of actual user or group names in the access control entries in the
ACL. When the template is assigned to an object, Documentum Server resolves the aliases to actual
user or group names.
Template ACLs are used to make applications, workflows, and lifecycles portable. For example, an
application that uses a template ACL could be used by a variety of departments within an enterprise
because the users or groups within the ACL entries are not defined until the ACL is assigned to
an actual document.
Assigning ACLs
When you create a document or other object, you can:
• Assign a default ACL (either explicitly or by allowing the server to choose)
Documentum Server automatically assigns the designated default ACL to new objects if
the user or application does not explicitly assign a different ACL or does not explicitly grant
permissions to the object.
• Assign an existing nondefault ACL
The document owner or a superuser can assign any private ACL that they own or any public ACL,
including any system ACL, to the document.
If the application is designed to run in multiple contexts with each having differing access
requirements, assigning a template ACL is recommended. The aliases in the template are resolved
to real user or group names appropriate for the context in the new system ACL.
To assign an ACL, set the acl_name and, optionally, the acl_domain attributes. You must set the
acl_name attribute. When only the acl_name is set, Documentum Server searches for the ACL
among the ACLs owned by the current user. If none is found, the server looks among the public
ACLs.
If acl_name and acl_domain are both set, the server searches the given domain for the ACL. You
must set both attributes to assign an ACL owned by a group to an object.
• Generate a custom ACL for the object
Custom ACLs are created by using a grantPermit or revokePermit method against an object to define
access control permissions for the object. There are four common situations that generate a custom
ACL:
• Granting permissions to a new object without assigning an ACL
The server creates a custom ACL when you create a SysObject and grant permissions to it, but
do not explicitly associate an ACL with the object.
The server bases the new ACL on the default ACL identified in the default_acl property of the
server config object. It copies that ACL, makes the indicated changes, and then assigns the
custom ACL to the object.
• Modifying the ACL assigned to a new object
The server creates a custom ACL when you create a SysObject, associate an ACL with the object,
and then modify the access control entries in the ACL before saving the object. To identify the
ACL to be used as the basis for the custom ACL, use a useACL method.
The server copies the specified ACL, applies the changes to the copy, and assigns the new ACL to
the document.
• Using grantPermit when no default ACL is assigned
The server creates a custom ACL when you create a new document, direct the server not to assign
a default ACL, and then use a grantPermit method to specify access permissions for the document.
In this situation, the object's owner is not automatically granted access to the object. If you create a
new document this way, be sure to set the owner's permission explicitly.
To direct the server not to assign a default ACL, issue a useACL method that specifies none
as an argument.
The server creates a custom ACL with the access control entries specified in the grantPermit
methods and assigns the ACL to the document. Because the useACL method is issued with none as
an argument, the custom ACL is not based on a default ACL.
• Modifying the current, saved ACL
If you fetch an existing document and change the entries in the associated ACL, the server creates
a new custom ACL for the document that includes the changes. The server copies the document's
current ACL, applies the specified changes to the copy, and then assigns the new ACL to the
document.
The name of a custom ACL is created by the server and always begins with dm_. Generally, a custom
ACL is assigned to only one object, although it is possible to assign a custom ACL to multiple objects.
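The ACL resolution described earlier in this section, searching the current user's ACLs and then the public ACLs when only acl_name is set, or the named domain when both attributes are set, can be sketched as follows. The stores here are plain dictionaries keyed by ACL name, purely for illustration:

```python
def find_acl(acl_name, acl_domain, user_acls, public_acls, domains):
    """Model of ACL resolution when assigning an ACL by name.

    With only acl_name set, search the current user's ACLs first and
    fall back to the public ACLs. With both acl_name and acl_domain
    set, search only the named domain (required for ACLs owned by a
    group). Returns None if no ACL is found.
    """
    if acl_domain:
        return domains.get(acl_domain, {}).get(acl_name)
    if acl_name in user_acls:
        return user_acls[acl_name]
    return public_acls.get(acl_name)
```

Because a user-owned ACL shadows a public ACL of the same name, setting acl_domain explicitly is the unambiguous way to pick a specific ACL.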
Documentum Server Administration and Configuration Guide and Documentum Foundation Classes
Development Guide contain more information on ACLs.
Removing permissions
At times, you might need to remove user access or extended permissions to a document. For example,
an employee might leave a project or be transferred to another location. A variety of situations can
make it necessary to remove user permissions.
You must be the owner of the object, a superuser, or have Change Permit permission to change the
entries in an object's ACL.
You must have installed Documentum Server with a Trusted Content Services license to revoke any of
the following permit types:
• AccessRestriction or ExtendedRestriction
• RequiredGroup or RequiredGroupSet
• ApplicationPermit or ApplicationRestriction
When you remove user access or extended permissions, you can either:
• Remove permissions to one document
To remove permissions to a document, call the IDfSysobject revokePermit method against the
document. The server copies the ACL, changes the copy, and assigns the new ACL to the
document. The original ACL is not changed. The new ACL is a custom ACL.
• Remove permissions to all documents using a particular ACL
To remove permissions to all documents associated with the ACL, you must alter the ACL. To do
that, call the IDfACL revokePermit method against the ACL. Documentum Server modifies the
specified ACL. Consequently, the changes affect all documents that use that ACL.
Use a revokePermit method to remove object-level permissions. That method is defined for both the
IDfACL and IDfSysObject interfaces.
Each execution of revokePermit removes a specific entry. If you revoke an entry whose permit type is
AccessPermit without designating the specific base permission to be removed, the AccessPermit entry
is removed, which also removes any extended permissions for that user or group. If you designate a
specific base permission level, only that permission is removed but the entry is not removed if there
are extended permissions identified in the entry.
If the user or group has access through another entry, the user or group retains that access permission.
For example, suppose janek has access as an individual and also as a member of the group engr in a
particular ACL. If you issue a revokePermit method for janek against that ACL, you remove only
janek's individual access. The access level granted through the engr group is retained.
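The revocation rules above can be sketched as a model over an ACL's entries. Each entry here is a plain dictionary with invented field names; this is illustrative, not DFC code:

```python
def revoke(entries, accessor, base_permit=None):
    """Model of revokePermit on an ACL's entries.

    Revoking without designating a base permission removes the whole
    AccessPermit entry, including its extended permissions. Designating
    a base permission removes only that permission; the entry survives
    if it still lists extended permissions.
    """
    result = []
    for entry in entries:
        if entry["accessor"] != accessor:
            result.append(entry)             # other accessors untouched
        elif base_permit is None:
            continue                         # whole entry (and extensions) removed
        else:
            entry = dict(entry, base=None)   # drop only the base permission
            if entry["extended"]:
                result.append(entry)         # kept for its extended permissions
    return result
```

Because entries are keyed per accessor, revoking janek's individual entry leaves a separate engr group entry untouched, matching the example above.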
Replacing an ACL
It is possible to replace the ACL assigned to an object with another ACL. To do so requires at least
Write permission on the object. Users typically replace an ACL using facilities provided by a client
interface. To replace the ACL programmatically, reset the object attributes acl_name, acl_domain, or
both. These two attributes identify the ACL assigned to an object.
Documentum Server Administration and Configuration Guide describes the types of ACLs, types of
entries, and how to create ACLs and the entries.
Replicas are copies of an object. Replicas are generated by object replication jobs. A replication job
copies objects in one repository to another. The copies in the target repository are called replicas.
• Documentum Platform and Platform Extensions Installation Guide contains complete information
about:
— Reference links and the underlying architecture that supports them
— How operations on mirror objects and replicas are handled
— Object replication
System-defined relationships
Installing Documentum Server installs a set of system-defined relationships. For example,
annotations are implemented as a system-defined relationship between a SysObject, generally a
document, and a note object. Another system-defined relationship is DM_TRANSLATION_OF, used
to create a relationship between a source document and a translated version of the document.
You can obtain the list of system-defined relationships by examining the dm_relation_type objects in
the repository. The relation names of system-defined relationships begin with dm_.
User-defined relationships
You can create custom relationships. Additionally, the dm_relation object type can be subtyped, so
that you can create relationships between objects that record business-specific information, if needed.
User-defined relationships are not managed by Documentum Server. The server only enforces
security for user-defined relationships. Applications must provide or invoke user-written procedures
to enforce any behavior required by a user-defined relationship. For example, suppose you define a
relationship between two document subtypes that requires a document of one subtype to be updated
automatically when a document of the other subtype is updated. The server does not perform this
kind of action. You must write a procedure that determines when the first document is updated and
then updates the second document.
• Documentum Server System Object Reference Guide has information about relationships, including
instructions for creating relationship types and relationships between objects.
• Managing translations, page 137, describes how the system-defined translation relationship
can be used.
• Annotation relationships, page 138, describes annotations and how to work with them.
Managing translations
Documents are often translated into multiple languages. Documentum Server supports managing
translations with two features:
• The language_code attribute defined for SysObjects
• The built-in relationship functionality
The language_code attribute allows you to identify the language in which the content of a document
is written and the document's country of origin. Setting this attribute allows you to query for
documents based on their language. For example, you might want to find the German translation of
a particular document or the original of a Japanese translation.
Translation relationships
You can also use the built-in relationship functionality to create a translation relationship between
two SysObjects. Such a relationship declares one object (the parent) the original and the second
object (the child) a translation of the original. Translation relationships have a security type of child,
meaning that security is determined by the object type of the translation. A translation relationship
has the relation name of DM_TRANSLATION_OF.
When you define the child in the relationship, you can bind a specific version of the child to the
relationship or bind the child by version label. To bind a specific version, set the child_id
property of the dm_relation object to the object ID of the child. To bind by version label, set the
child_id attribute to the chronicle ID of the version tree that contains the child, and the child_label to
the version label of the translation. The chronicle ID is the object ID of the first version on the version
tree. For example, if you want the APPROVED version of the translation to always be associated with
the original, set child_id to the translation's chronicle ID and child_label to APPROVED.
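The two binding styles can be sketched as a resolution model. Here versions maps each version's object ID to its (chronicle ID, version labels); the function and data shapes are invented for illustration, since the real resolution happens inside Documentum Server:

```python
def resolve_child(relation, versions):
    """Model of resolving the child of a DM_TRANSLATION_OF relation.

    Without a child_label, child_id names an exact version (bound to a
    specific version). With a child_label, child_id is the chronicle ID
    of a version tree, and the version on that tree carrying the label
    is bound (bound by version label).
    """
    child_id = relation["child_id"]
    child_label = relation.get("child_label")
    if child_label is None:
        return child_id if child_id in versions else None
    for object_id, (chronicle_id, labels) in versions.items():
        if chronicle_id == child_id and child_label in labels:
            return object_id
    return None
```

Binding by label means the relationship automatically follows whichever version currently carries, for example, the APPROVED label, without updating the relation object.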
• Documentum Server System Object Reference Guide has more information about:
— Recommended language and country codes
— Properties defined for the relation object type
Annotation relationships
Annotations are comments that a user attaches to a document (or any other SysObject or SysObject
subtype). Throughout document development, and often after it is published, people might want to
record editorial suggestions and comments. For example, several managers might want to review
and comment on a budget. Or perhaps several marketing writers working on a brochure want
to comment on each other's work. In situations such as these, the ability to attach comments to
a document without modifying the original text is very helpful.
Annotations are implemented as note objects, which are a SysObject subtype. The content file
you associate with the note object contains the comments you want to attach to the document.
After the note object and content file are created and associated with each other, you use the
IDfNote.addNoteEx method to associate the note with the document. A single document can have
multiple annotations. Conversely, a single annotation can be attached to multiple documents.
When you attach an annotation to a document, the server creates a relation object that records and
describes the relationship between the annotation and the document. The relation object's parent_id
attribute contains the document's object ID and its child_id attribute contains the note's object ID. The
relation_name attribute contains dm_annotation, which is the name of the relation type object that
describes the annotation relationship.
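The relation record the server creates when attaching an annotation can be sketched as follows. The field names mirror the dm_relation properties described above; the function itself is an invented, illustrative stand-in for what the server does internally:

```python
def attach_annotation(document_id, note_id):
    """Model of the relation object created when an annotation is
    attached: parent_id holds the document's object ID, child_id the
    note object's ID, and relation_name names the dm_annotation
    relation type."""
    return {
        "parent_id": document_id,
        "child_id": note_id,
        "relation_name": "dm_annotation",
    }
```

One note can appear as the child in several such records (attached to multiple documents), and one document can be the parent in several (multiple annotations).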
You can create, attach, detach, and delete annotations. Documentum Server Administration and
Configuration Guide contains the instructions.
If the replication mode is federated, then any annotations associated with a replicated object are
replicated also.
• Documentum Server System Object Reference Guide has a complete description of relation objects,
relation type objects, and their attributes.
• The associated Javadocs have more information about the addNote and removeNote methods in
the IDfSysObject interface.
Chapter 8
Virtual Documents
Overview
This section describes virtual documents, a Documentum Server feature that allows you to build
documents from component documents, which can vary in format.
Users create virtual documents using the Virtual Document Manager, a graphical user interface that
allows them to build and modify virtual documents. However, if you want to write an application
that creates or modifies a virtual document with no user interaction, you must use DFC.
Although the components of a virtual document can be any SysObject or SysObject subtype except
folders, cabinets, or subtypes of folders or cabinets, the components are often simple documents. Be
sure that you are familiar with the basics of creating and managing simple documents, described in
Chapter 7, Content Management Services, before you begin working with virtual documents.
A virtual document is a hierarchically organized structure composed of component documents. The
components of a virtual document are of type dm_sysobject, or a subtype of dm_sysobject (but
excluding cabinets and folders). Most commonly, the components are of type dm_document or a
subtype. The child components of a virtual document can be simple documents (that is, nonvirtual
documents), or they can themselves be virtual documents. Documentum Server does not impose any
restrictions on the depth of nesting of virtual documents.
Note: A compound document (for example, an OLE or XML document) cannot be a child in a virtual
document.
The root of a virtual document is version-specific and identified by an object identity (on Documentum
Server, an r_object_id). The child components of a virtual document are not version-specific, and
are identified by an i_chronicle_id. The relationships between a parent component and its children
are defined in containment objects (dmr_containment), each of which connects a parent object to a
single child object. The order of the children of the parent object is determined by the order_no
property of the containment object.
The figure below illustrates these relationships.
The version of the child component is determined at the time the virtual document is assembled. A
virtual document is assembled when it is retrieved by a client, and when a snapshot of the virtual
document is created. The assembly is determined at runtime by a binding algorithm governed by
metadata set on the dmr_containment objects.
Implementation
This section briefly describes how virtual documents are implemented within the Documentum
system.
The components of a virtual document are associated with the containing document by containment
objects. Containment objects contain information about the components of a virtual document.
Each time you add a component to a virtual document, a containment object is created for that
component. Containment objects store the information that links a component to a virtual document.
For components that are themselves virtual documents, the objects also store information that the
server uses when assembling the containing document.
You can associate a particular version of a component with the virtual document or you can associate
the entire component version tree with the virtual document. Binding the entire version tree to
the virtual document allows you to select which version is included at the time you assemble the
document. This feature provides flexibility, letting you assemble the document based on conditions
specified at assembly time.
The components of a virtual document are ordered within the document. By default, the order is
managed by the server. The server automatically assigns order numbers when you add or insert a
component.
You can bypass the automatic numbering provided by the server and use your own numbers. The
insertPart, updatePart, and removePart methods allow you to specify order numbers. However, if
you define your own order numbers, you must also perform the related management operations,
because the server does not manage user-defined order numbers.
The number of direct components contained by a virtual document is recorded in the document's
r_link_cnt property. Each time you add a component to a virtual document, the value of this property
is incremented by 1.
The r_is_virtual_doc property is an integer property that helps determine whether Documentum
client applications treat the object as a virtual document. If the property is set to 1, the client
applications always open the document in the Virtual Document Manager. The property is usually
set to 1 when you use the Virtual Document Manager to add the first component to the containing
document. Programmatically, you can set it using the IDfSysObject.setIsVirtualDocument method.
You can set the property for any SysObject subtype except folders, cabinets, and their subtypes.
However, clients also treat an object as a virtual document if r_is_virtual_doc is set to 0 and
r_link_cnt is greater than 0. An object is treated as a simple document only when both properties
are set to 0; if either property is not 0, the object is treated as a virtual document.
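The rule above can be restated as a DQL query; this sketch simply expresses the property test described in the text:

```
SELECT r_object_id, object_name
FROM dm_document
WHERE r_is_virtual_doc = 1 OR r_link_cnt > 0
```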
Versioning
You can version a virtual document and manage its versions just as you do a simple document.
XML support
XML documents are supported as virtual documents in Documentum Server. When you import
or create an XML document using the DFC, the document is created as a virtual document.
Other documents referenced in the content of the XML document as entity references or links are
automatically brought into the repository and stored as directly contained components of the virtual
document.
The connection between the parent and the components is defined in two properties of containment
objects: a_contain_type and a_contain_desc. DFC uses the a_contain_type property to indicate
whether the reference is an entity or link. It uses the a_contain_desc to record the actual identification
string for the child.
These two properties are also defined for the dm_assembly type, so applications can correctly create
and handle virtual document snapshots using the DFC.
To reference other documents linked to the parent document, you can use relationships of type
xml_link.
Virtual documents with XML content are managed by XML applications, which define rules for
handling and chunking the XML content.
use_node_ver_label
The use_node_ver_label property determines how the server selects late-bound descendants of
an early-bound component.
If a component is early bound and use_node_ver_label in its associated containment object is set to
TRUE, the server uses the component's early-bound version label to select all late-bound descendants
of the component. If another early-bound component is found that has use_node_ver_label set to
TRUE, that component's label is used to resolve descendants from that point.
Late-bound components that have no early-bound parent, or whose early-bound parent has
use_node_ver_label set to FALSE, are chosen by the binding conditions specified in the SELECT
statement.
The figure below illustrates how use_node_ver_label works. In the figure, each component is
labeled as early or late bound. For the early bound components, the version label specified when
the component was added to the virtual document is shown. Assume that all the components in the
virtual document have use_node_ver_label set to TRUE.
Component B is early bound; the specified version is the one carrying the approved version label.
Because Component B is early bound and use_node_ver_label is set to TRUE, when the server
determines which versions of Component B's late-bound descendants to include, it chooses the
versions that carry the approved symbolic version label. In our sample virtual document,
Component E is a late-bound descendant of Component B. The server picks the approved version
of Component E for inclusion in the virtual document.
Descending the hierarchy, when the server resolves Component E's late-bound descendant,
Component F, it again chooses the version that carries the approved version label. All late-bound
descendant components are resolved using the version label associated with the early-bound parent
node until another early bound component is encountered with use_node_ver_label set to TRUE.
In the example, Component G is early bound and has use_node_ver_label set to TRUE. Consequently,
when the server resolves any late bound descendants of Component G, it will use the version label
associated with Component G, not the label associated with Component B. The early bound version
label for Component G is released. When the server chooses which version of Component H to use, it
picks the version carrying the released label.
Component C, although late bound, has no early bound parent. For this component, the server
uses the binding condition specified in the IN DOCUMENT clause to determine which version to
include. If the IN DOCUMENT clause does not include a binding condition, the server chooses the
version carrying the CURRENT label.
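As a sketch, a binding condition for such late-bound components can be supplied in the IN DOCUMENT clause of a DQL SELECT statement. The object ID below is a placeholder, and the exact clause syntax should be verified against the DQL reference for your release:

```
SELECT r_object_id, object_name
FROM dm_sysobject
IN DOCUMENT ID('0900000000000001') DESCEND
WITH ANY r_version_label = 'approved'
```

Absent a binding condition like this, late-bound components with no early-bound parent resolve to the CURRENT version, as described above.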
follow_assembly
The follow_assembly property determines whether the server selects component descendants using
the containment objects or a snapshot associated with the component.
If you set follow_assembly to TRUE, the server selects component descendants from the snapshot
associated with the component. If follow_assembly is TRUE and a component has a snapshot, the
server ignores any binding conditions specified in the SELECT statement or mandated by the
use_node_ver_label property.
If follow_assembly is FALSE or a component does not have a snapshot, the server uses the
containment objects to determine component descendants.
Copy behavior
When a user copies a virtual document, the server can make a copy of each component or it can create
an internal reference or pointer to the source component. (The pointer or reference is internal. It is not
an instance of a dm_reference object.) Which option is used is controlled by the copy_child property
in a component containment object. It is an integer property with three valid settings:
• 0, which means that the copy or reference choice is made by the user or application when the
copy operation is requested
• 1, which directs the server to create a pointer or reference to the component
• 2, which directs the server to copy the component
Whether the component is copied or referenced, a new containment object is created that links the
component to the new copy of the virtual document.
Regardless of which option is used, when users open the new copy in the Virtual Document Manager,
all document components are visible and available for editing or viewing, subject to user access
permissions.
Snapshots
A snapshot is a record of the virtual document as it existed at the time you created the snapshot.
Snapshots are a useful shortcut if you often assemble a particular subset of virtual document
components. Creating a snapshot of that subset of components lets you assemble the set quickly
and easily.
A snapshot consists of a collection of assembly objects. Each assembly object represents one
component of the virtual document. All the components represented in the snapshot are absolutely
linked to the virtual document by their object IDs.
Only one snapshot can be assigned to each version of a virtual document. If you want to define
more than one snapshot for a virtual document, you must assign the additional snapshots to other
documents created specifically for the purpose.
Modifying snapshots
You can add or delete components (by adding or deleting the assembly object representing the
component) or you can modify an existing assembly object in a snapshot.
Any modification that affects a snapshot requires at least Version permission on the virtual document
for which the snapshot was defined.
If you add an assembly object to a snapshot programmatically, be sure to set the following properties
of the new assembly object:
• book_id, which identifies the topmost virtual document containing this component. Use the
document object ID.
• parent_id, which identifies the virtual document that directly contains this component. Use the
document object ID.
• component_id, which identifies the component. Use the component object ID.
• comp_chronicle_id, which identifies the chronicle ID of the component.
• depth_no, which identifies the depth of the component within the document specified in the
book_id.
• order_no, which specifies the position of the component within the virtual document. This
property has an integer datatype. You can query the order_no values for existing components to
decide which value you want to assign to a new component.
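A hypothetical DQL sketch of creating such an assembly object follows. All object IDs are placeholders; assembly objects are normally created through the client assembly APIs, and the CREATE OBJECT syntax should be checked against the DQL reference:

```
CREATE dm_assembly OBJECT
SET book_id = '0900000000000001',
SET parent_id = '0900000000000001',
SET component_id = '0900000000000002',
SET comp_chronicle_id = '0900000000000003',
SET depth_no = 1,
SET order_no = 10
```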
You can add components that are not actually part of the virtual document to the document snapshot.
However, doing so does not add the component to the virtual document in the repository. That
is, the virtual document r_link_cnt property is not incremented and a containment object is not
created for the component.
Deleting an assembly object only removes the component represented by the assembly object from
the snapshot. It does not remove the component from the virtual document. You must have at
least Version permission for the topmost document (the document specified in the assembly object
book_id property) to delete an assembly object.
To delete a single assembly object or several assembly objects, use a destroy method. However, do not
destroy each assembly object individually as a way to delete the entire snapshot; use the disassemble
method instead.
You can change the values in the properties of an assembly object. However, if you do, be very sure
that the new values are correct. Incorrect values can cause errors when you attempt to query the
snapshot. (Snapshots are queried using the USING ASSEMBLIES option of the SELECT statement IN
DOCUMENT clause.)
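For example, a snapshot can be queried through its assembly objects with the USING ASSEMBLIES option mentioned above. This is a sketch with a placeholder object ID, not release-specific syntax:

```
SELECT r_object_id, object_name
FROM dm_sysobject
IN DOCUMENT ID('0900000000000001') DESCEND
USING ASSEMBLIES
```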
Deleting a snapshot
Use the IDfSysObject.disassemble method to delete a snapshot. This method destroys the assembly
objects that make up the snapshot. You must have at least Version permission for a virtual document
to destroy its snapshot.
Freezing a document
Freezing sets the following properties of the virtual document to TRUE:
• r_immutable_flag
This property indicates that the document is unchangeable.
• r_frozen_flag
This property indicates that the r_immutable_flag was set by a freeze method (instead of a checkin
method).
If you freeze an associated snapshot, the r_has_frzn_assembly property is also set to TRUE.
Freezing a snapshot sets the following properties for each component in the snapshot:
• r_immutable_flag
• r_frzn_assembly_cnt
The r_frzn_assembly_cnt property contains a count of the number of frozen snapshots that
contain this component. If this property is greater than zero, you cannot delete or modify the
object.
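As a sketch, the flags described above can be used in DQL to find frozen virtual documents:

```
SELECT r_object_id, object_name
FROM dm_document
WHERE r_frozen_flag = TRUE AND r_immutable_flag = TRUE
```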
Unfreezing a document
Unfreezing a document makes the document changeable again.
Unfreezing a virtual document sets the following properties of the document to FALSE:
• r_immutable_flag
If the r_immutable_flag was set by versioning prior to freezing the document, then unfreezing
the document does not set this property to FALSE. The document remains unchangeable even
though it is unfrozen.
• r_frozen_flag
If you choose to unfreeze the document's snapshot, the server also sets the r_has_frzn_assembly
property to FALSE.
Unfreezing a snapshot resets the following properties for each component in the snapshot:
• r_immutable_flag
This is set to FALSE unless it was set to TRUE by versioning prior to freezing the snapshot. In
such cases, unfreezing the snapshot does not reset this property.
• r_frzn_assembly_cnt
This property, which contains a count of the number of frozen snapshots that contain this
component, is decremented by 1.
Chapter 9
Workflows
This chapter describes workflows, part of the process management services of Documentum Server.
Workflows allow you to automate business processes.
Overview
A workflow is a sequence of activities that represents a business process, such as an insurance claims
procedure or an engineering development process. Workflows can describe simple or complex
business processes. Workflow activities can occur one after another, with only one activity in progress
at a time, or a workflow can consist of multiple activities all happening concurrently. A workflow
might combine serial and concurrent activity sequences. You can also create a cyclical workflow, in
which the completion of an activity restarts a previously completed activity.
Implementation
Workflows are implemented as two separate parts: a workflow definition and a runtime instantiation
of the definition.
The workflow definition is the formalized definition of the business process. A workflow definition
has two major parts, the structural, or process, definition and the definitions of the individual
activities. The structural definition is stored in a dm_process object. The definitions of individual
activities are stored in dm_activity objects. Storing activity and process definitions in separate
objects allows activity definitions to be used in multiple workflow definitions. When you design a
workflow, you can include existing activity definitions in addition to creating any new activity
definitions needed.
When a user starts a workflow, the server uses the definition in the dm_process object to create a
runtime instance of the workflow. Runtime instances of a workflow are stored in dm_workflow
objects for the duration of the workflow. When an activity starts, it is instantiated by setting
properties in the workflow object. Running activities may also generate work items and packages.
Work items represent work to be performed on the objects in the associated packages. Packages
generally contain one or more documents.
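A hedged DQL sketch of inspecting these runtime instances, assuming the standard dm_workflow attributes (r_runtime_state 1 conventionally indicates a running workflow):

```
SELECT r_object_id, object_name, supervisor_name
FROM dm_workflow
WHERE r_runtime_state = 1
```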
The following figure illustrates how the components of a workflow definition and runtime instance
work together.
Because a workflow is based on a stored definition, users can perform the business process repeatedly,
and the essential process is the same each time. Separating a workflow definition from its runtime
instantiation allows multiple workflows based on the same definition to run concurrently.
Template workflows
You can create a template workflow definition, a workflow definition that can be used in many
contexts. This is done by including activities whose performers are identified by aliases instead of
actual performer names. When aliases are used, the actual performers are selected at runtime.
For example, a typical business process for new documents has four steps: authoring the document,
reviewing it, revising it, and publishing the document. However, the actual authors and reviewers of
various documents will be different people. Rather than creating a new workflow for each document
with the authors' and reviewers' names hard-coded into the workflow, create activity definitions for
the basic steps that use aliases for the authors' and reviewers' names, and put those definitions in one
workflow definition. Depending on how you design the workflow, the actual values represented by
the aliases can be chosen by the workflow supervisor when the workflow is started or later, by the
server when the containing activity is started.
Documentum Server installation includes one system-defined workflow template, whose object name
is dmSendToList2. It allows a user to send a document to multiple users simultaneously. This template
is available to users of Desktop Client (through the File menu) and Webtop (through the Tools menu).
Workflow definitions
A workflow definition consists of:
• one process definition
• a set of activity definitions
• port and package definitions
The following sections provide some basic information about the components of a definition.
Process definitions
A process definition defines the structure of a workflow. The structure represents a picture of the
business process emulated by the workflow. Process definitions are stored as dm_process objects.
A process object has properties that identify the activities that make up the business process, a
set of properties that define the links connecting the activities, and a set of properties that define
the structured data elements and correlation sets that may be associated with the workflow. It also has
properties that define some behaviors for the workflow when an instance is running.
Note: Structured data elements and correlation sets for a workflow may only be defined using
Business Process Manager. Refer to that documentation for more information about these features.
Activities represent the tasks that comprise the business process. When you create a workflow
definition, you must decide how to model your business process in the sequence of activities that
make up a workflow structure.
Each activity in a workflow is defined as one of the following kinds of activities:
• Initiate
Initiate activities link to a Begin activity. These activities record how a workflow may be started.
For example, a workflow might have two Initiate activities, one that allows the workflow to be
started manually from Webtop, and one that allows the workflow to be started by submitting a
form. Initiate activities may only be linked to Begin activities. Initiate activities may only be
defined for a workflow using Process Builder.
• Begin
Begin activities start the workflow. A process definition must have at least one beginning activity.
• Step
Step activities are the intermediate activities between the beginning and the end. A process
definition can have any number of Step activities.
• End
An End activity is the last activity in the workflow. A process definition can have only one
ending activity.
• Exception
An exception activity is associated with an automatic activity, to provide fault-handling
functionality for the activity. Each automatic activity can have one exception activity.
You can use activity definitions more than once in a workflow definition. For example, suppose
you want all documents to receive two reviews during the development cycle. You might design a
workflow with the following activities: Write, Review1, Revise, Review2, and Publish. The Review1
and Review2 activities can be the same activity definition.
An activity that can be used more than once is called a repeatable activity. Whether an activity is
repeatable is defined in the activity's definition.
By default, activities are defined as repeatable.
The repeatable_invoke property controls this feature. It is TRUE by default. To constrain an activity's
use to only once in a workflow's structure, the property must be set to FALSE.
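For example, the following DQL sketch lists activity definitions that are constrained to a single use per workflow:

```
SELECT object_name
FROM dm_activity
WHERE repeatable_invoke = FALSE
```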
In a process definition, the activities included in the definition are referenced by the object IDs of the
activity definitions. In a running workflow, activities are referenced by the activity names specified in
the process definition.
When you add an activity to a workflow definition, you must provide a name for the activity that is
unique among all activities in the workflow definition. The name you give the activity in the process
definition is stored in the r_act_name property. If the activity is used only once in the workflow
structure, you can use the name assigned to the activity when the activity was defined (recorded in
the activity's object_name property). However, if the activity is used more than once in the workflow,
you must provide a unique name for each use.
Links
A link connects two activities in a workflow through their ports. A link connects an output port of
one activity to an input port of another activity. Think of a link as a one-way bridge between two
activities in a workflow.
An input port on a Begin activity participates in a link, but it can only connect to an output port of
an Initiate activity. Similarly, an output port of an Initiate activity may only connect to an input
port of a Begin activity.
Output ports on End activities are not allowed to participate in links.
Each link in a process definition has a unique name.
Activity definitions
Activity definitions describe tasks in a workflow. Documentum implements activity definitions as
dm_activity objects. The properties of an activity object describe the characteristics of the activity,
including:
• How the activity is executed
• Who performs the work
• What starts the activity
• The transition behavior when the activity is completed
The definition also includes a set of properties that define the ports for the activities, the packages
that each port can handle, and the structured data that is accessible to the activity.
Manual activities
A manual activity represents a task performed by an actual person or persons. Manual activities can
allow delegation or extension. Any user can create a manual activity.
Automatic activities
An automatic activity represents a task whose work is performed, on behalf of a user, by a script
defined in a method object. Automatic activities cannot be delegated or extended. Additionally, you
must have Sysadmin or superuser privileges to create an automatic activity.
If the method executed by the activity is a Java method, you can configure the activity so that the
method is executed by the dm_bpm servlet. This is a Java servlet dedicated to executing workflow
methods. To configure the method to execute in this servlet, you must set the a_special_app property
of the method object to a character string beginning with workflow. Additionally, the classfile of the
Java method must be in a location that is included in the classpath of the dm_bpm_servlet.
If a Java workflow method is not executed by the dm_bpm_servlet, it is executed by the Java method
server.
Note: The dm_server_config.app_server_name value for the dm_bpm servlet is do_bpm. The URL for
the servlet is in the app_server_uri property, at the index position corresponding to do_bpm in
app_server_name.
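A sketch of the configuration described above, using DQL to mark a hypothetical method object (my_wf_method is a placeholder name; verify the UPDATE ... OBJECTS syntax against the DQL reference for your release):

```
UPDATE dm_method OBJECTS
SET a_special_app = 'workflow'
WHERE object_name = 'my_wf_method'
```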
• The Delegation and extension discussion later in this chapter describes delegation and extension.
• Documentum Server Administration and Configuration Guide contains instructions for creating a
method for an automatic activity.
Activity priorities
Priority values are used to designate the execution priority of an activity. Any activity may have a
priority value defined for it in a process definition that contains the activity. An activity assigned
to a work queue may have an additional priority assigned that is specific to the work queue. The
uses of these two priority values are different.
A work queue can be chosen as an activity performer only if the workflow definition was created in
Process Builder.
When you create a workflow definition in either WFM or Process Builder, you can set a priority for
each activity in the workflow. The priority value is recorded in the process definition and is only
applied to automatic tasks. Documentum Server ignores the value for manual tasks.
The workflow agent (the internal server facility that controls execution of automatic activities) uses
the priority values in r_act_priority to determine the order of execution for automatic activities.
When an automatic activity is instantiated, Documentum Server sends a notification to the workflow
agent. In response, the agent queries the repository to obtain information about the activities ready
for execution. The query returns the activities in priority order, highest to lowest.
In Process Builder, you can set up work queues to automate the distribution of manual tasks to
appropriate performers. For more information about work queues, refer to the Process Builder
documentation or online Help. Every work item on a work queue is governed by a work queue policy
object. The work queue policy defines how the item is handled on the queue. Among other things,
the policy defines the priority of the work items on the queue. Every work item on a work queue is
assigned a priority value at runtime, when the work item is generated.
The priority assigned by a work queue policy does not affect or interact with a priority value assigned
to an activity in the process definition. Work queue policies are applied to manual activities, because
only manual activities can be placed on a work queue. The priority values in the process definition
are used by Documentum Server only for execution of automatic activities.
For more information about how the work queue policy is handled at runtime, refer to the Process
Builder documentation.
There are three possible states for process and activity definitions: draft, validated, and installed.
A definition in the draft state has not been validated since it was created or last modified. A definition
in the validated state has passed the server's validation checks, which ensure that the definition is
correctly defined. A definition in the installed state is ready for use in an active workflow.
You cannot start a workflow from a process definition that is in the draft or validated state. The
process definition must be in the installed state. Similarly, you cannot successfully install a process
definition unless the activities it references are in the installed state.
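Assuming the conventional r_definition_state encoding (0 = draft, 1 = validated, 2 = installed), a DQL sketch to list process definitions that are ready for use:

```
SELECT object_name, r_definition_state
FROM dm_process
WHERE r_definition_state = 2
```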
Delegation and extension are features that you can set for manual activities.
Delegation allows the server or the activity performer to delegate the work to another performer. If
delegation is allowed, it can occur automatically or be forced manually.
Automatic delegation occurs when the server checks the availability of an activity's performer or
performers and determines that the person or persons are not available. When this happens, the
server automatically delegates the work to the user identified in the user_delegation property of the
original performer's user object.
If there is no user identified in user_delegation or that user is not available, automatic delegation
fails. When delegation fails, Documentum Server reassigns the work item based on the value in the
control_flag property of the activity object that generated the work item. If control_flag is set to 0 and
automatic delegation fails, the work item is assigned to the workflow supervisor. If control_flag is set
to 1, the work item is reassigned to the original performer. The server does not attempt to delegate the
task again. In either case, the workflow supervisor receives a DM_EVENT_WI_DELEGATE_F event.
Manual delegation occurs when an IDfWorkitem.delegateTask method is explicitly issued. Typically,
only the work item performer, the workflow supervisor, or a superuser can execute the method.
If delegation is disallowed, automatic delegation is prohibited. However, the workflow supervisor or
a superuser can delegate the work item manually.
Extension
Extension allows the activity performer to identify a second performer for the activity after
completing the activity the first time. If extension is allowed, when the original performers
complete their work items, they can identify a second round of performers for the activity. The
server will generate new work items for the second round of performers. Only after the second
round of performers completes the work does the server evaluate the activity transition condition
and move to the next activity.
A work item can be extended only once. Programmatically, a work item is extended by execution
of an IDfWorkitem.repeat method.
If extension is disallowed, only the workflow supervisor or a superuser can extend the work item.
Activities with multiple performers performing sequentially (user category 9) cannot be extended.
Performer choices
When you define a performer for an activity, you must first choose a performer category. Depending
on the chosen category, you may also be required to identify the performer. If so, you can
either define the actual performer at that time or configure the activity to allow the performer to be
chosen at one of the following times:
• When the workflow is started
• When the activity is started
• When a previous activity is completed
If you choose to define the performer during the design phase, Process Builder allows you to either
name the performer directly for many categories or define a series of conditions and associated
performers. At runtime, the workflow engine determines which condition is satisfied and selects the
performer defined as the choice for that condition.
There are multiple options when choosing a performer category. Some options are supported for
both manual and automatic activities. Others are only valid choices for manual activities.
Task subjects
The task subject is a message that provides a work item performer with information about the work
item. The message is defined in the activity definition, using references to one or more properties. At
runtime, the actual message is constructed by substituting the actual property values into the string.
For example, suppose the task subject is defined as:
Please work on the {dmi_queue_item.task_name} task
(from activity number {dmi_queue_item.r_act_seqno})
of the workflow {dmi_workflow.object_name}.
The attached package is {dmi_package.r_package_name}.
The text of a task subject message is recorded in the task_subject property of the activity definition.
The text can be up to 255 characters and can contain references to the following object types and
properties:
• dm_workflow, any property
• dmi_workitem, any property
At runtime, references to dmi_workitem are interpreted as references to the work item associated
with the current task.
• dmi_queue_item, any property except task_subject
At runtime, references to dmi_queue_item are interpreted as references to the queue item
associated with the current task.
• dmi_package, any property
The format of the object type and property references must be:
{object_type_name.property_name}
The server uses the following rules when resolving the string:
• The server does not place quotes around resolved object type and property references.
• If the referenced property is a repeating property, the server retrieves all values, separating them
with commas.
• If the constructed string is longer than 512 characters, the server truncates the string.
• If an object type and property reference contains an error, for example, if the object type or
property does not exist, the server does not resolve the reference. The unresolved reference
appears in the message.
The resolved string is stored in the task_subject property of the associated task queue item object.
Once the server has created the queue item, the value of the task_subject property in the queue item
will not change, even if the values in any referenced properties change.
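A sketch showing where the resolved string lands: the following DQL queries the current user's inbox items for their resolved task subjects, with attribute names per the standard dmi_queue_item type:

```
SELECT r_object_id, task_name, task_subject
FROM dmi_queue_item
WHERE name = USER AND delete_flag = FALSE
```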
Starting conditions
A starting condition defines the starting criteria for an activity. At runtime, the server will not start
an activity until the activity starting condition is met. A starting condition consists of a trigger
condition and, optionally, a trigger event.
The trigger condition is the minimum number of input ports that must have accepted packages. For
example, if an activity has three input ports, you may decide that the activity can start when two
of the three have accepted packages.
A trigger event is an event queued to the workflow. The event can be a system-defined event, such as
dm_checkin, or you can make up an event name, such as promoted or released. However, because
you cannot register a workflow to receive event notifications, the event must be explicitly queued to
the workflow using an IDfWorkflow.queue method.
Port definitions
Each port in an activity participates in one link. A port's type and the package definitions associated
with the port define the packages the activity can receive or send through the link. The types
of port include:
• Input
An input port accepts a package as input for an activity. The package definitions associated with
an input port define what packages the activity accepts. Each input port is connected through a
link to an output port of a previous activity.
• Output
An output port sends a package from an activity to the next activity. The package definitions
associated with an output port define what packages the activity can pass to the next activity or
activities. Each output port is connected by a link to an input port of a subsequent activity.
• Revert
A revert port is a special input port that accepts packages sent back from a subsequent performer.
A revert port is connected by a link to an output port of a subsequent activity.
• Exception
An exception port is an output port that links an automatic activity to the input port of an
Exception activity. Exception ports do not participate in transitions. The port is triggered only
when the automatic activity fails. You must create the workflow definition using Process Builder
to define exception ports and Exception activities.
Package definitions
Documents are moved through a workflow as packages moving from activity to activity through the
ports. Packages are defined in properties of the activity definition.
Each port must have at least one associated package definition, and may have multiple package
definitions. When an activity is completed and a transition to the next activity occurs, Documentum
Server forwards to the next activity the package or packages defined for the activated output port.
If the package you define is an XML file, you can identify a schema to be associated with that file. If
you later reference the package in an XPath expression in route case conditions of a manual activity
for an automatic transition, the schema is used to validate the path. The XML file and the schema
are associated using a relationship.
The actual packages represented by package definitions are generated at runtime by the server as
needed and stored in the repository as dmi_package objects. You cannot create package objects
directly.
In Process Builder, you can define a package with no contents. This lets you design workflows that
allow an activity performer to designate the contents of the outgoing package at the time he or
she completes the activity.
If you create the workflow using Workflow Manager, a package definition is associated with the input
and output port connected by the selected link (flow). In Workflow Manager, you must define the
package or packages for each link in the workflow.
If you are using Process Builder to create the workflow, a package definition is global. When you
define a package in Process Builder, the definition is assigned to all input and output ports in all
activities in the workflow. It is not necessary to define packages for each link individually.
Note: Process Builder allows you to choose, for each activity, whether to make the package visible or
invisible to that activity. So, even though packages are globally assigned, if a package is not needed
for a particular activity, you can make it invisible to that activity. When the activity starts, the package
is ignored: none of the generated tasks will reference that package.
Package compatibility
The package definitions associated with two ports connected by a link must be compatible.
The two ports referenced by a link must meet the following criteria to be considered compatible:
• They must have the same number of package definitions.
For example, if ActA_OP1 is linked to ActB_IP2 and ActA_OP1 has two package definitions,
ActB_IP2 must have two package definitions.
• The object types of the package components must be related as subtypes or supertypes in the
object hierarchy. One of the following must be true:
— The outgoing package type is a supertype of the incoming package type.
— The outgoing package type is a subtype of the incoming package type.
— The outgoing package type and the incoming package type are the same.
• Package acceptance, page 166, describes how the implementation actually moves packages from
one activity to the next.
• The Documentum Process Builder documentation or online help describes how to use those
features, such as visibility and skill levels for packages, that are only available through
Documentum Process Builder.
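The two compatibility criteria can be sketched as follows. The type hierarchy and the positional pairing of package definitions here are simplifying assumptions for illustration (the server matches package definitions, not list positions); the type names other than dm_document and dm_sysobject are hypothetical.

```python
# Toy type hierarchy: child type -> parent type (example types, partly hypothetical).
TYPE_PARENT = {
    "my_report": "dm_document",
    "dm_document": "dm_sysobject",
}

def is_related(type_a, type_b):
    """True if one type is the other, or its supertype or subtype."""
    def ancestors(t):
        seen = {t}
        while t in TYPE_PARENT:
            t = TYPE_PARENT[t]
            seen.add(t)
        return seen
    return type_b in ancestors(type_a) or type_a in ancestors(type_b)

def ports_compatible(outgoing_types, incoming_types):
    """Check the two criteria described above: the ports must have the same
    number of package definitions, and each outgoing/incoming pair must be
    the same type or related as supertype/subtype."""
    if len(outgoing_types) != len(incoming_types):
        return False
    return all(is_related(o, i) for o, i in zip(outgoing_types, incoming_types))
```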
Package acceptance
When packages arrive at an input port, the server checks the port definition to see if the packages
satisfy the port package requirements and verifies the number of packages and package types
against the port definition.
If the port definitions are satisfied, the input port accepts the arriving packages by changing the
r_act_seqno, port_name, and package_name properties of those packages.
The following figure illustrates this process.
In the figure, the output port named OUT1 of the source activity is linked to the input port named IN1
of the destination activity. OUT1 contains a package definition: Package A of type dm_document.
IN1 takes a similar package definition but with a different package name: Package B. When the
package is delivered from the port OUT1 to the port IN1 during execution, the content of the package
changes to reflect the transition:
• r_package_name changes from Package A to Package B
• r_port_name changes from OUT1 to IN1
• r_activity_seq changes from Seqno 1 to Seqno 2
• i_acceptance_date is set to the current time
In addition, at the destination activity, the server performs some bookkeeping tasks, including:
• Incrementing r_trigger_revert if the triggered port is a revert port
As soon as a revert port is triggered, the activity becomes active and no longer accepts any
incoming packages (from input or other revert ports).
• Incrementing r_trigger_input if the triggered port is an input port
As soon as this number matches the value of trigger_threshold in the activity definition, the
activity stops accepting any incoming packages (from revert or other input ports) and starts
its precondition evaluation.
• Setting r_last_performer
This information comes directly from the previous activity.
Packages that are not needed to satisfy the trigger threshold are dropped. For example, in the
following figure, Activity C has two input ports: CI1, which accepts packages P1 and P2, and CI2,
which accepts packages P1 and P3. Assume that the trigger threshold for Activity C is 1; that is, only
one of the two input ports must accept packages to start the activity.
Suppose Activity A completes and sends its packages to Activity C before Activity B and that the
input port, CI1 accepts the packages. In that case, the packages arriving from Activity B are ignored.
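The acceptance behavior described above can be modeled in a short sketch. Plain dictionaries stand in for the dmi_package object and the activity instance properties; the "accepting" flag is a hypothetical name summarizing when the activity stops accepting packages.

```python
from datetime import datetime, timezone

def accept_package(package, port, activity):
    """Model what happens when an input or revert port accepts a package,
    per the description above."""
    # The accepted package is rewritten to reflect the destination side.
    package["r_package_name"] = port["package_name"]
    package["r_port_name"] = port["name"]
    package["r_activity_seq"] = activity["seqno"]
    package["i_acceptance_date"] = datetime.now(timezone.utc)

    # Bookkeeping on the destination activity instance.
    if port["type"] == "revert":
        activity["r_trigger_revert"] += 1
        activity["accepting"] = False  # a triggered revert port stops all acceptance
    else:
        activity["r_trigger_input"] += 1
        if activity["r_trigger_input"] == activity["trigger_threshold"]:
            activity["accepting"] = False  # threshold reached: evaluate precondition
    return package
```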
Transition behavior
When an activity is completed, a transition to the next activity or activities occurs. The transition
behavior defined for the activity defines when the output ports are activated and which output ports
are activated. Transition behavior is determined by:
• The number of tasks that must be completed to trigger the transition
By default, all generated tasks must be completed. If the number of completed tasks you specify
is greater than the total number of work items for an activity, Documentum Server requires all
work items for that activity to complete before triggering the transition.
• The transition type
An activity transition type defines how the output ports are selected when the activity is
complete. There are three types of transition:
— Prescribed
If an activity transition type is prescribed, the server delivers packages to all the output ports.
This is the default transition type.
— Manual
If the activity transition type is manual, the activity performers must indicate at runtime
which output ports receive packages.
— Automatic
If the activity transition type is automatic, you must define one or more route cases for the
transition.
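The three transition types can be summarized in a small sketch. Route cases are abstracted as condition callables paired with port lists, and the first-match evaluation order is an assumption of this sketch, not a statement about the server.

```python
def select_output_ports(transition_type, output_ports,
                        performer_choice=None, route_cases=None):
    """Illustrative port selection for the three transition types above.

    performer_choice: ports named by performers for a manual transition.
    route_cases: list of (condition_fn, ports) pairs standing in for route cases.
    """
    if transition_type == "prescribed":
        return list(output_ports)            # deliver to all output ports (default)
    if transition_type == "manual":
        return list(performer_choice or [])  # performers pick the ports at runtime
    if transition_type == "automatic":
        for condition, ports in route_cases or []:
            if condition():
                return list(ports)           # assumed: first matching route case wins
        return []
    raise ValueError("unknown transition type: " + transition_type)
```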
Suspend timers are not part of an activity definition. They are defined by a method argument, at
runtime, when an activity is halted with a suspension interval.
Package control
Package control is an optional feature. It is a specific constraint on Documentum Server that
stops the server from recording package component object names specified in an addPackage or
addAttachment method in the generated package or wf attachment object. By default, package
control is not enabled. This means that if an addPackage or addAttachment method includes the
component names as an argument, the names are recorded in the r_component_name property of
the generated package or wf attachment object. If package control is enabled, Documentum Server
sets the r_component_name property to a single blank even if the component names are specified
in the methods.
If the control is enabled at the repository level, the setting in the individual workflow definitions
is ignored. If the control is not enabled at the repository level, then you must decide whether to
enable it for an individual workflow.
If you want to reference package component names in the task subject for any activities in the
workflow, do not enable package control. Use package control only if you do not want to expose the
object names of package components.
To enable package control in an individual workflow definition, set the package_control property to 1.
Documentum Server Administration and Configuration Guide describes how to enable or disable package
control at the repository level.
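The two-level control described above reduces to a simple rule, sketched below: the repository-level setting, when enabled, overrides the workflow-level setting; when control is in effect, r_component_name is set to a single blank instead of the names passed to addPackage or addAttachment. The function is illustrative, not a server API.

```python
def record_component_names(repo_control, workflow_control, component_names):
    """Return the value stored in r_component_name, per the rules above."""
    # A repository-level setting takes effect regardless of the workflow
    # setting; otherwise the individual workflow's package_control applies.
    enabled = repo_control or workflow_control
    if enabled:
        return [" "]                 # names are suppressed: a single blank
    return list(component_names)     # names are recorded as passed in the method
```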
Workflow execution
Workflow execution is implemented with the following object types:
• dm_workflow
• dmi_workitem and dmi_queue_item
• dmi_package
• dmi_wf_timer
• dmi_wf_attachment
Workflow objects
Workflow objects are created when the workflow is started by an application or a user. Workflow
objects are subtypes of the persistent object type, and consequently, have no owner. However, every
workflow has a designated supervisor (recorded in the supervisor_name property). This person
functions much like the owner of an object, with the ability to change the workflow properties and its
state.
A workflow object contains properties that describe the activities in the workflow. These properties
are set automatically, based on the workflow definition, when the workflow object is created. They
are repeating properties, and the values at the same index position across the properties represent
one activity instance.
The properties that make up the activity instance identify the activity, its current state, its warning
timer deadlines (if any), and a variety of other information. As the workflow executes, the values
in the activity instance properties change to reflect the status of the activities at any given time
in the execution.
• The workflow supervisor, page 175, describes the workflow supervisor.
• Documentum Server System Object Reference Guide provides a full list of the properties that make up
an activity instance.
Priority values
Each work item inherits the priority value defined in the process definition for the activity that
generated the work item. Documentum Server uses the inherited priority value of automatic
activities, if set, to prioritize execution of the automatic activities. Documentum Server ignores
priority values assigned to manual activities. A work item priority value can be changed at runtime.
Changing a work item priority generates an event that can be audited. Changing a priority value also
changes the priority value recorded in any queue item object associated with the work item.
Signing off work items
Frequently, a business process requires the performers to sign off the work they do. Documentum
Server supports three options to allow users to electronically sign off work items: electronic
signatures, digital signatures, or simple sign-offs. You can customize work item completion to use
any of these options.
• Documentum Server System Object Reference Guide lists the properties in the dmi_workitem and
dmi_queue_item object types.
• Signature requirement support, page 91, describes the options for signing off work items.
Package objects
Packages contain the objects on which the work is performed. Packages are implemented as
dmi_package objects. Package object properties:
• Identify the package and its contained objects
• Record the activity with which the package is associated
• Record when the package arrived at the activity
• Record information about any notes attached to the package
(At runtime, an activity performer can attach notes to packages, to pass information or instructions
to the persons performing subsequent activities.)
• Record whether the package is visible or invisible.
If a particular skill level is required to perform the task associated with the package, that information
is stored in a dmc_wf_package_skill object. A wf package skill object identifies a skill level and a
package. The objects are subtypes of dm_relation and are related to the workflow, with the workflow
as the parent in the relationship. In this way, the information stays with the package for the life
of the workflow.
A single instance of a package does not move from activity to activity. Instead, the server
manufactures new copies of the package for each activity when the package is accepted and new
copies when the package is sent on.
Package notes
Package notes are annotations that users can add to a package. Notes are used typically to provide
instructions or information for a work item performer. A note can stay with a package as it moves
through the workflow or it can be available only in the work items associated with one activity.
If an activity accepts multiple packages, Documentum Server merges any notes attached to the
accepted packages.
If notes are attached to a package accepted by a work item generated from an automatic activity, the
notes are held and passed to the performer of the next manual task.
Notes are stored in the repository as dm_note objects.
Activity timers
There are three types of timers for an activity. An activity can have a:
• Pre-timer that alerts the workflow supervisor if an activity has not started within a designated
number of hours after the workflow starts
• Post-timer that alerts the workflow supervisor if an activity has not completed within a designated
number of hours after the activity starts
• Suspend timer that automatically resumes the activity after a designated interval when the
activity is halted
Pre-timer instantiation
When a workflow instance is created from a workflow definition, Documentum Server determines
which activities in the workflow have pre-timers. For each activity with a pre-timer, it creates a
dmi_wf_timer object. The object records the workflow object ID, information about the activity,
the date and time at which to trigger the timer, and the action to take when the timer is triggered.
The action is identified through a module config object ID. Module config objects point to business
object modules stored in the Java method server.
If the activity is not started by the specified date and time, the timer is considered to be expired. Each
execution of the dm_WfmsTimer job finds all expired timers and invokes the dm_bpm_timer method
on each. Both the dm_WfmsTimer job and the dm_bpm_timer method are Java methods. The job
passes the module config object ID to the method. The method uses the information in that object to
determine the action. The dm_bpm_timer method executes in the Java method server.
Post-timer instantiation
A post-timer is instantiated when the activity for which it is defined is started. When the activity is
started, Documentum Server creates a dmi_wf_timer object for the post-timer. The timer records the
workflow object ID, information about the activity, the date and time at which to trigger the timer,
and the action to take when the timer is triggered.
Attachments
Attachments are objects that users attach to a running workflow or an uncompleted work item.
Typically, the objects support the work required by the workflow activities. For example, if a
workflow is handling an engineering proposal under development, a user might attach a research
paper supporting that proposal. Attachments can be added at any point in a workflow and can
be removed when they are no longer needed. After an attachment is added, it is available to the
performers of all subsequent activities.
Attachments can be added by the workflow creator or supervisor, a work item performer, or a user
with Sysadmin or superuser privileges. Users cannot add a note to an attachment.
Internally, an attachment is saved in the repository as a dmi_wf_attachment object. The wf attachment
object identifies the attached object and the workflow to which it is attached.
The workflow supervisor
Users with Sysadmin or Superuser privileges can act as the workflow supervisor. In addition,
superusers are treated like the creator of a workflow and can change object properties, if necessary.
However, messages that warn about execution problems are sent only to the workflow supervisor,
not to superusers.
A workflow supervisor is recorded in the supervisor_name property of the workflow object.
Instance states
This section describes:
• workflow states
A workflow current state is recorded in the r_runtime_state property of the dm_workflow object.
• activity states
• work item states
Workflow states
Every workflow instance exists in one of five possible states: dormant, running, finished, halted,
or terminated. A workflow current state is recorded in the r_runtime_state property of the
dm_workflow object.
The state transitions are driven by API methods or by the workflow termination criterion that
determines whether a workflow is finished.
The following figure illustrates the states.
When a workflow supervisor first creates and saves a workflow object, the workflow is in the
dormant state. When the Execute method is issued to start the workflow, the workflow state is
changed to running.
Typically, a workflow spends its life in the running state, until either the server determines that
the workflow is finished or the workflow supervisor manually terminates the workflow with the
IDfWorkflow.abort method. If the workflow terminates normally, its state is set to finished. If the
workflow is manually terminated with the abort method, its state is set to terminated.
A supervisor can halt a running workflow, which changes the workflow state to halted. From a halted
state, the workflow supervisor can restart, resume, or abort the workflow.
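The state transitions described above can be captured in a small transition table. The operation names below follow the methods mentioned in the text (execute, halt, abort, restart, resume); "finish" is a pseudo-operation standing in for the server's own determination that the workflow is complete, and the table itself is an illustrative reading of the text, not a product API.

```python
# Legal workflow state transitions, per the description above.
# Keys are (current_state, operation); values are the resulting state.
WORKFLOW_TRANSITIONS = {
    ("dormant", "execute"): "running",
    ("running", "halt"): "halted",
    ("running", "abort"): "terminated",
    ("running", "finish"): "finished",   # server-determined completion
    ("halted", "resume"): "running",
    ("halted", "restart"): "running",
    ("halted", "abort"): "terminated",
}

def next_state(state, operation):
    """Return the state that results from applying operation, or raise."""
    try:
        return WORKFLOW_TRANSITIONS[(state, operation)]
    except KeyError:
        raise ValueError(f"illegal transition: {operation} from {state}")
```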
Activity states
During a typical workflow execution, an activity state is changed by the server to reflect the activity
state within the executing workflow.
When an activity instance is created, the instance is in the dormant state. The server changes the
activity instance to the active state after the activity starting condition is fulfilled and the server
begins to resolve the activity performers and generate work items.
If the server encounters any errors, it changes the activity instance state to failed and sends a warning
message to the workflow supervisor.
The supervisor can fix the problem and restart a failed activity instance. An automatic activity
instance that fails to execute can also change to the failed state, and the supervisor or the application
owner can retry the activity instance.
The activity instance remains active while work items are being performed. The activity instance
enters the finished state only when all its generated work items are completed.
A running activity can be halted. Halting an activity sets its state to halted. By default, only the
workflow supervisor or a user with Sysadmin or Superuser privileges can halt or resume an activity
instance.
Depending on how the activity was halted, it can be resumed manually or automatically. If a
suspension interval is specified when the activity is halted, then the activity is automatically resumed
after the interval expires. If a suspension interval is not specified, the activity must be manually
resumed. Suspension intervals are set programmatically as an argument in the IDfWorkflow.haltEx
method. Resuming an activity sets its state back to its previous state prior to being halted.
Work item states
A work item state is recorded in the r_runtime_state property of the dmi_workitem object.
When the server generates a work item for a manual activity, it sets the work item state to dormant
and places the peer queue item in the performer inbox. The work item remains in the dormant state
until the activity performer acquires it. Typically, acquisition happens when the performer opens the
associated inbox item. At that time, the work item state is changed to acquired.
When the server generates a work item for an automatic activity, it sets the work item state to
dormant and places the activity on the queue for execution. The application must issue the Acquire
method to change the work item state to acquired.
After the activity work is finished, the performer or the application must execute the Complete
method to mark the work item as complete. This changes the work item's state to finished.
A work item can be moved manually to the paused state by the activity performer, the workflow
supervisor, or a user with Sysadmin or superuser privileges. A paused work item requires a manual
state change to return to the dormant or acquired state.
Activity timers, page 174, describes how suspension intervals are implemented.
workflow through a Documentum client interface, such as Webtop, the user must also be defined
as a Contributor.
This section describes how a typical workflow executes. It describes what happens when a workflow
is started and how execution proceeds from activity to activity. It also describes how packages are
handled and how a warning timer behaves during workflow execution.
The following figure illustrates the general execution flow described in detail in the text of this section.
An activity starting condition defines the number of ports that must accept packages and, optionally,
an event that must be queued in order to start the activity. The starting condition is defined in
the trigger_threshold and trigger_event properties in the activity definition. When a workflow is
created, these values are copied to the r_trigger_threshold and r_trigger_event properties in the
workflow object.
When an activity input port accepts a package, the server increments the activity instance
r_trigger_input property in the workflow object and then compares the value in r_trigger_input
to the value in r_trigger_threshold.
If the two values are equal and no trigger event is required, the server considers that the activity has
satisfied its starting condition. If a trigger event is required, the server will query the dmi_queue_item
objects to determine whether the event identified in r_trigger_event is queued. If the event is in the
queue, then the starting condition is satisfied.
If the two values are not equal, the server considers that the starting condition is not satisfied.
The server also evaluates the starting condition each time an event is queued to the workflow.
After a starting condition that includes an event is satisfied, the server removes the event from the
queue. If multiple activities use the same event as part of their starting conditions, the event must
be queued for each activity.
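The evaluation described above can be sketched directly: the accepted-port count must equal the threshold, and, if the definition names a trigger event, that event must be queued to the workflow. A plain dict stands in for the workflow's activity-instance properties, and a set stands in for the dmi_queue_item lookup.

```python
def starting_condition_met(activity, queued_events):
    """Evaluate an activity starting condition per the rules above.

    activity: dict with r_trigger_input, r_trigger_threshold, and
    optionally r_trigger_event.
    queued_events: set of event names currently queued to the workflow.
    """
    if activity["r_trigger_input"] != activity["r_trigger_threshold"]:
        return False  # not enough ports have accepted packages
    event = activity.get("r_trigger_event")
    if event is None:
        return True   # no trigger event required
    return event in queued_events  # dmi_queue_item query in the real server
```

After a condition that includes an event is satisfied, the server removes the event from the queue, which is why each activity needs its own queued copy of a shared event.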
When the starting condition is satisfied, the server consolidates the accepted packages if necessary
and then resolves the performers and generates the work items. If it is a manual activity, the server
places the work item in the performer inbox. If it is an automatic activity, the server passes the
performer name to the application invoked for the activity.
Package consolidation
If activity input ports have accepted multiple packages with the same r_package_type value, the
server consolidates those packages into one package.
For example, suppose that Activity C accepts four packages: two Package_typeA, one Package_typeB,
and one Package_typeC. Before generating the work items, the server will consolidate the two
Package_typeA package objects into one package, represented by one package object. It does this by
merging the components and any notes attached to the components.
The consolidation order is based on the acceptance time of each package instance, as recorded in the
i_acceptance_date property of the package objects.
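The consolidation rule above can be sketched as a merge keyed on r_package_type, with components and notes combined in i_acceptance_date order. The dict layout is illustrative; the real objects are dmi_package objects.

```python
def consolidate_packages(packages):
    """Merge accepted packages that share an r_package_type, as described
    above. Each package is a dict with r_package_type, i_acceptance_date,
    components, and notes."""
    merged = {}
    # Process packages in acceptance-time order so merged contents
    # reflect the order in which the ports accepted them.
    for pkg in sorted(packages, key=lambda p: p["i_acceptance_date"]):
        ptype = pkg["r_package_type"]
        if ptype not in merged:
            merged[ptype] = {"r_package_type": ptype,
                             "components": list(pkg["components"]),
                             "notes": list(pkg["notes"])}
        else:
            merged[ptype]["components"].extend(pkg["components"])
            merged[ptype]["notes"].extend(pkg["notes"])
    return list(merged.values())
```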
After the starting condition is met and packages consolidated if necessary, the server determines the
performers for the activity and generates the work items.
For manual activities, the server uses the value in the performer_type property in conjunction with
the performer_name property, if needed, to determine the activity performer. After the performer is
determined, the server generates the necessary work items and peer queue items.
If the server cannot assign the work item to the selected performer because the performer has
workflow_disabled set to TRUE in his or her user object, the server attempts to delegate the work
item to the user listed in the user_delegation property of the performer user object.
If automatic delegation fails, the server reassigns the work item based on the setting of the control_flag
property in the definition of the activity that generated the work item.
Note: When a work item is generated for all members of a group, users in the group who are
workflow disabled do not receive the work item, nor is the item assigned to their delegated users.
If the server cannot determine a performer, a warning is sent to the performer who completed the
previous work item and the current work item is assigned to the supervisor.
For automatic activities, the server uses the value in the performer_type property in conjunction with
the performer_name property, if needed, to determine the activity performer. The server passes the
name of the selected performer to the invoked program.
The server generates work items, but not peer queue items, for automatic activities.
When the performer_name property contains an alias, the server resolves the alias using a resolution
algorithm determined by the value found in the activity's resolve_type property.
If the server cannot determine a performer, a warning is sent to the workflow supervisor and the
current work item is assigned to the supervisor.
• Executing automatic activities, page 184, describes how automatic activities are executed.
• Resolving aliases in workflows, page 217, describes the resolution algorithms for performer aliases.
Executing automatic activities
The master session of the workflow agent controls the execution of automatic activities. The workflow
agent is an internal server facility.
After the server determines the activity performer and creates the work item, the server notifies the
workflow agent master session that an automatic activity is ready for execution. The master session
handles activities in batches. If the master session is not currently processing a batch when the
notification arrives, the session wakes up and does the following:
1. Executes an update query to claim a batch of work items generated by automatic activities.
A workflow agent master session claims a batch of work items by setting the a_wq_name
property of the work items to the name of the server config object representing the Documentum
Server. The maximum number of work items in a batch is the lesser of 2000 or 30 times the
number of worker threads.
2. Selects the claimed work items and dispatches the returned items to the execution queue.
The work items are dispatched one item at a time. If the queue is full, the master session checks
the size of the queue (the number of items in the queue). If the size is greater than a set threshold,
it waits until it receives notification from a worker thread that the queue has been reduced. A
worker thread checks the size of the queue each time it acquires a work item. When the size of
the queue equals the threshold, the thread sends the notification to the master session. The
notification from the worker thread tells the master session it can resume putting work items on
the queue.
The queue can have a maximum of 2000 work items. The threshold is equal to five times the
number of worker threads.
3. After all claimed work items are dispatched, the master agent returns to sleep until another
notification arrives from Documentum Server or the sleep interval passes.
Note: If the Documentum Server associated with the workflow agent should fail while there are work
items claimed but not processed, when the server is restarted, the workflow agent will pick up the
processing where it left off. If the server cannot be restarted, you can use an administration method to
recover those work items for processing by another workflow agent.
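The sizing rules stated above reduce to two small formulas, sketched here for reference (the function names are illustrative):

```python
MAX_QUEUE_SIZE = 2000  # maximum work items on the execution queue

def claim_batch_size(worker_threads):
    """Maximum work items a master session claims per batch: the lesser
    of 2000 or 30 times the number of worker threads."""
    return min(2000, 30 * worker_threads)

def dispatch_resume_threshold(worker_threads):
    """Queue size at which a worker thread notifies the master session
    that it can resume dispatching: five times the worker thread count."""
    return 5 * worker_threads
```

For example, with 10 worker threads the master session claims at most 300 work items per batch and resumes dispatching when the queue drains to 50 items.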
When a workflow agent worker session takes an activity from the execution queue, it retrieves
the activity object from the repository and locks it. It also fetches some related objects, such as the
workflow. If any of the objects cannot be fetched or if the fetched workflow is not running, the worker
session sets a_wq_name to a message string that specifies the problem and drops the task without
processing it. Setting a_wq_name also ensures that the task will not be picked up again.
After all the fetches succeed and after verifying the ready state of the activity, the worker thread
executes the method associated with the activity. The method is always executed as the server
regardless of the run_as_server property setting in the method object.
Note: If the activity is already locked, the worker session assumes that another workflow agent is
executing the activity. The worker session simply skips the activity and no error message is logged.
This situation can occur in repositories with multiple servers, each having its own workflow agent.
If an activity fails for any reason, the selected performer receives a notification.
The server passes the following information to the invoked program:
• Repository name
• User name (this is the selected performer)
• Login ticket
• Work item object ID
• Mode value
The information is passed in the following format:
-docbase_name repository_name -user user_name -ticket login_ticket
-packageId workitem_id -mode mode_value
The mode value is set automatically by the server. The following table lists the values for the mode
parameter.
Value Meaning
0 Normal
1 Restart (previous execution failed)
2 Termination (re-executed because the workflow terminated before the automatic activity
user program completed)
The method program can use the login ticket to connect back to the repository as the selected
performer. The work item object ID allows the program to query the repository for information about
the package associated with the activity and other information it may need to perform its work.
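A method program's first task is to pick these values out of its argument string. The parser below is a hypothetical sketch of that step (it is not part of the product); it pairs each flag with the value that follows it and maps the mode to its meaning from the table above. The ticket and object ID values are made-up examples.

```python
MODE_MEANINGS = {0: "normal", 1: "restart", 2: "termination"}

def parse_method_args(arg_string):
    """Parse the argument string the server passes to an automatic
    activity method, in the format shown above."""
    tokens = arg_string.split()
    args = {}
    # Pair each flag token with the value token that follows it.
    for flag, value in zip(tokens[::2], tokens[1::2]):
        args[flag.lstrip("-")] = value
    args["mode"] = int(args["mode"])  # mode is numeric (see the table above)
    return args
```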
• The workflow agent, page 176, describes the workflow agent.
• Documentum Server Administration and Configuration Guide contains instructions for recovering
work items for execution by an alternate workflow agent in case of a Documentum Server failure.
Completing an activity
When a performer completes a work item, the server increments the r_complete_witem property
in the workflow object and then evaluates whether the activity is complete. To do so, the server
compares the value of the r_complete_witem property to the value in the workflow r_total_workitem
property. The r_total_workitem property records the total number of work items generated for the
activity. The r_complete_witem property records how many of the activity work items are completed.
If the two values are the same and extension is not enabled for the activity, the server considers that
the activity is completed. If extension is enabled, the server:
• Collects the second-round performers from the r_ext_performer property of all generated work
items
• Generates another set of work items for the user or users designated as the second-round
performers and removes the first round of work items
• Sets the i_performer_flag to indicate that the activity is in the extended mode and no more
extension is allowed
The following figure illustrates the decision process when the properties are equal.
If the number of completed work items is lower than the total number of work items, the server then
uses the values in transition_eval_cnt and, for activities with a manual transition, the transition_flag
property to determine whether to trigger a transition. The transition_eval_cnt property specifies how
many work items must be completed to finish the activity. The transition_flag property defines
how ports are chosen for the transition. The following figure illustrates the decision process when
r_complete_witem and r_total_workitem are not equal.
If an activity transition is triggered before all the activity work items are completed, Documentum
Server marks the unfinished work items as pseudo-complete and removes them from the inboxes
of the performers. The server also sends an email message to the performers to notify them that
the work items have been removed.
Note: Marking an unfinished work item as pseudo-complete is an auditable event. The event name is
dm_pseudocompleteworkitem.
Additionally, if an activity transition is triggered before all work items are completed, any extended
work items are not generated even if extension is enabled.
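The completion check described above can be condensed into a small decision function. This is a simplified model (names and structure are illustrative, not Documentum Server's internal implementation):

```java
// Simplified model of the activity-completion decision described above.
// Names and logic are illustrative, not Documentum Server's internal code.
public class ActivityCompletion {
    // True if the activity should finish: either all work items are complete
    // (and no extension round is pending), or enough work items are complete
    // to satisfy transition_eval_cnt.
    public static boolean isComplete(int completed, int total,
                                     int transitionEvalCnt,
                                     boolean extensionEnabled) {
        if (completed >= total) {
            // If extension is enabled, the server would instead generate a
            // second round of work items rather than finish the activity.
            return !extensionEnabled;
        }
        // Fewer completed than total: transition_eval_cnt determines how
        // many completions are enough to finish the activity early.
        return transitionEvalCnt > 0 && completed >= transitionEvalCnt;
    }

    public static void main(String[] args) {
        System.out.println(isComplete(3, 3, 0, false)); // prints "true"
        System.out.println(isComplete(2, 5, 2, false)); // prints "true"
        System.out.println(isComplete(1, 5, 2, false)); // prints "false"
    }
}
```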
After an activity is completed, the server selects the output ports based on the transition type defined
for the activity.
If the transition type is prescribed, the server delivers packages to all the output ports.
If the transition type is manual, the user or application must designate the output ports. The choices
are passed to Documentum Server using one of the Setoutput methods. The number of choices
may be limited by the activity's definition. For example, the activity definition may only allow
a performer to choose two output ports. How the selected ports are used is also specified in the
activity's definition. For example, if multiple ports are selected, the definition may require the server
to send packages to the selected revert ports and ignore the forward selections.
If the transition type is automatic, the route cases are evaluated to determine which ports will receive
packages. If the activity's r_condition_id property is set, the server evaluates the route cases. If the
activity's r_predicate_id property is set, the server invokes the dm_bpm_transition method to evaluate
the route cases. The dm_bpm_transition method is a Java method that executes in the Java method
server. The server selects the ports associated with the first route case that returns a TRUE value.
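The first-match evaluation of route cases can be sketched as follows. This is an illustrative model only, not the dm_bpm_transition implementation; the types and names are assumptions for the example:

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative model of automatic-transition route-case evaluation:
// the ports of the first route case whose condition is TRUE are selected.
public class RouteCases {
    public record RouteCase<T>(Predicate<T> condition, List<String> ports) {}

    public static <T> List<String> selectPorts(List<RouteCase<T>> cases, T context) {
        for (RouteCase<T> rc : cases) {
            if (rc.condition().test(context)) {
                return rc.ports();   // first TRUE route case wins
            }
        }
        return List.of();            // no route case matched
    }

    public static void main(String[] args) {
        // Hypothetical route cases keyed on a numeric value in the context
        var cases = List.of(
            new RouteCase<Integer>(n -> n > 100, List.of("escalate")),
            new RouteCase<Integer>(n -> n > 10, List.of("review")),
            new RouteCase<Integer>(n -> true, List.of("archive")));
        System.out.println(selectPorts(cases, 50)); // prints "[review]"
    }
}
```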
After the ports are determined, the server creates the needed package objects. If the package creation
is successful, the server considers that the activity is finished. At this point, the cycle begins again
with the start of the next activity's execution.
Distributed workflow
A distributed workflow consists of distributed notification and object routing capability. Any object
can be bound to a workflow package and passed from one activity to another.
Distributed workflow works best in a federated environment where users, groups, object types, and
ACLs are known to all participating repositories.
In such an environment, users in all repositories can participate in a business process. All users are
known to every repository, and the workflow designer treats remote users no differently than local
users. Each user designates a home repository and receives notification of all work item assignments
in the home inbox.
All process and activity definitions and workflow runtime objects must reside in a single repository.
A process cannot refer to an activity definition that resides in a different repository. A user cannot
execute a process that resides in a repository different from the repository where the user is currently
connected.
Distributed notification
When a work item is assigned to a remote user, a work item and the peer queue item are generated
in the repository where the process definition and the containing workflow reside. The notification
agent for the source repository replicates the queue item in the user's home repository. Using these
queue items, the home inbox connects to the source repository and retrieves all information necessary
for the user to perform the work item tasks.
A remote user must be able to connect to the source repository to work on a replicated queue item.
2. The notification agent replicates the queue item in user A's home repository.
3. User A connects to the home repository and acquires the queue item. The user home inbox
makes a connection to the source repository and fetches the peer work item. The home inbox
executes the Acquire method for the work item.
4. User A opens the work item to find out about arriving packages. The user home inbox executes a
query that returns a list of package IDs. The inbox then fetches all package objects and displays
the package information.
5. When user A opens a package and wants to see the attached instructions, the user home inbox
fetches the attached notes and contents from the source repository and displays the instructions.
6. User A starts working on the document bound to the package. The user home inbox retrieves and
checks out the document and contents from the source repository. The inbox decides whether to
create a reference that refers to the bound document.
7. When user A is done with the package and wants to attach an instruction for subsequent activity
performers, the user home inbox creates a note object in the source repository and executes the
addNote method to attach notes to the package. The inbox then executes the Complete method
for the work item and cleans up objects that are no longer needed.
Tasks are sent to the inbox automatically when they are generated; users do not register for them.
Users can, however, register to receive notifications of system-defined events. When a system-defined
event occurs, Documentum Server automatically sends an event notification to any user who is
registered for the event.
Users cannot register for application-defined events. Generating application-defined events and
triggering notifications of those events are managed entirely by the application.
• Documentum Server Administration and Configuration Guide contains tables listing all system-defined
events.
• Work item and queue item objects, page 171, describes work items.
• Inboxes, page 190, describes inboxes.
Inboxes
In the Documentum system, you have an electronic inbox that holds various items that require
your attention.
An inbox is a virtual container that holds tasks, event notifications, and other items sent to users
manually (using a queue method). For example, one of your employees might place a vacation
request in your inbox, or a coworker might ask you to review a presentation. Each user in a
repository has an inbox.
Accessing an Inbox
Users access their inboxes through the Documentum client applications. If your enterprise has
defined a home repository for users, the inboxes are accessed through the home repository. All inbox
items, regardless of the repository in which they are generated, appear in the home repository inbox.
Users must log in to the home repository to view their inboxes.
If you do not define home repositories for users, Documentum Server maintains an inbox for each
repository. Users must log in to each repository to view the inbox for that repository. The inbox
contains only those items generated within the repository.
Applications access inbox items by querying and referencing dmi_queue_item objects.
All items that appear in an inbox are managed by the server as objects of type dmi_queue_item.
The properties of a queue item object contain information about the queued item. For example,
the sent_by property contains the name of the user who sent the item and the date_sent property
tells when it was sent.
The dmi_queue_item objects are persistent. They remain in the repository even after the items they
represent have been removed from an inbox, providing a persistent record of completed tasks. Two
properties that are set when an item is removed from an inbox contain the history of a project with
which tasks are associated. These properties are:
• dequeued_by contains the name of the user that removed the item from the inbox.
• dequeued_date contains the date and time that the item was removed.
• Documentum Server System Object Reference Guide contains the reference information for the
dmi_queue_item object type.
To determine whether to refresh an inbox, you can use an IDfSession.hasEvents method to check for
new items. A new item is defined as any item queued to the inbox after the previous execution of
getEvents for the user. The method returns TRUE if there are new items in the inbox or FALSE if
there are no new items.
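The hasEvents/getEvents contract — a "new" item is anything queued after the previous getEvents call — can be modeled with a small simulation. This is not the DFC implementation; it only illustrates the semantics:

```java
import java.util.ArrayList;
import java.util.List;

// Simulation of the hasEvents/getEvents contract: an item is "new" if it
// was queued after the previous getEvents call for the user.
public class InboxModel {
    private final List<String> items = new ArrayList<>();
    private int lastDelivered = 0;   // index up to which getEvents has run

    public void queue(String item) {
        items.add(item);
    }

    public boolean hasEvents() {
        // TRUE only if something was queued since the last getEvents call
        return items.size() > lastDelivered;
    }

    public List<String> getEvents() {
        List<String> fresh = new ArrayList<>(items.subList(lastDelivered, items.size()));
        lastDelivered = items.size();
        return fresh;
    }

    public static void main(String[] args) {
        InboxModel inbox = new InboxModel();
        inbox.queue("review contract");
        System.out.println(inbox.hasEvents()); // prints "true"
        inbox.getEvents();
        System.out.println(inbox.hasEvents()); // prints "false"
    }
}
```

An application can poll hasEvents cheaply and call getEvents only when a refresh is actually needed.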
• Documentum Server DQL Reference Guide contains instructions on using GET_INBOX.
• Documentum Server System Object Reference Guide contains the reference information about the
properties of a queue item object.
using a queue method. You can also manually or programmatically take an item out of an inbox by
dequeuing the item.
Queuing items
Use a queue method to place an item in an inbox. Executing a queue method creates a queue item
object. You can queue a SysObject or a user- or application-defined event.
When you queue an object, including an event name is optional. You may want to include one,
however, for the application to use; Documentum Server itself ignores the event name.
When you queue a workflow-related event, the event value is not optional. The value you assign to the
parameter should match the value in the trigger_event property for one of the workflow's activities.
Although you must assign a priority value to queued items and events, the value is used only by
applications; Documentum Server ignores it. For example, an application might read the priorities
and present the items to the user in priority order.
You can also include a message to the user receiving the item.
Removing a registration
To remove an event registration, use Documentum Administrator or an IDfSysObject.unRegister
method.
Only a user with Sysadmin or superuser privileges can remove another user's registration for an
event notification.
If you have more than one event defined for an object, the unRegister method only removes the
registration that corresponds to the combination of the object and the event. Other event registrations
for that object remain in place.
Documentum Server Administration and Configuration Guide contains information about the system
events for which you may register for notification.
Chapter 10
Lifecycles
Overview
A lifecycle is one of the process management services provided with Documentum Server. Lifecycles
automate management of documents throughout their “lives” in the repository.
A lifecycle is a set of states that define the stages in the life of an object. The states are connected
linearly. An object attached to a lifecycle progresses through the states as it moves through its
lifetime. A change from one state to another is governed by business rules. The rules are implemented
as requirements that the object must meet to enter a state and actions to be performed on entering a
state. Each state can also have actions to be performed after entering a state.
Lifecycles contain:
• States
A lifecycle can be in one of a normal progression of states or in an exception state.
• Attached objects
Any system object or subtype (except a lifecycle object itself) can have an attached lifecycle.
• Entry and post entry actions
A lifecycle can trigger custom behavior in the repository when an object enters or leaves a lifecycle
state.
You use the Lifecycle Editor, accessed through Documentum Composer, to create a lifecycle and
design its states, including the entry criteria that apply to each state. You can then attach an object
(for example, a document) to the lifecycle.
For example, a lifecycle for a Standard Operating Procedure (SOP) might have states representing the
draft, review, rewrite, approved, and obsolete stages of an SOP's life. Before an SOP can move from
the rewrite state to the approved state, business rules might require the SOP to be signed off by a
company vice president, and converted to HTML format for publishing on a company web site. After
the SOP enters the approved state, an action can send an email message to employees informing
them the SOP is available.
you can resume the lifecycle for the object by moving the object out of the exception state back to the
normal state or returning it to the base state.
For example, if a document describes a legal process, you can create an exception state to temporarily
halt the lifecycle if the laws change. The document lifecycle cannot resume until the document is
updated to reflect the changes in the law.
Figure 21, page 196, shows an example of a lifecycle with exception states. Like normal states,
exception states have their own requirements and actions.
Which normal and exception states you include in a lifecycle depends on which object types will be
attached to the lifecycle. The states reflect the stages of life for those particular objects. When you
are designing a lifecycle, after you have determined which objects you want the lifecycle to handle,
decide what the life states are for those objects. Then, decide whether any or all of those states
require an exception state.
If the target state is an exception state, Documentum Server also sets r_resume_state to identify the
normal state to which the object can be returned. After changing the state, the server performs any
post-entry actions defined for the target state. The actions can make fundamental changes (such as
changes in ownership, access control, location, or properties) to an object as that object progresses
through the lifecycle.
If an object is demoted back to the previous normal state, Documentum Server only performs the
actions associated with the state and resets the properties. It does not evaluate the entry criteria.
Objects cannot skip normal states as they progress through a lifecycle.
The following figure shows an example of a simple lifecycle with three states: preliminary, reviewed,
and published. Each state has its own requirements and actions. The preliminary state is the base state.
Attaching objects
An object may be attached to any attachable state. By default, unless another state is explicitly
identified when an object is attached to a lifecycle, Documentum Server attaches the object to the first
attachable state in the lifecycle. Typically, this is the base state.
A state is attachable if the allow_attach property is set for the state.
When an object is attached to a state, Documentum Server tests the entry criteria and performs the
actions on entry. If the entry criteria are not satisfied or the actions fail, the object is not attached
to the state.
Programmatically, attaching an object is accomplished using an IDfSysObject.attachPolicy method.
Objects move between states in a lifecycle through promotions, demotions, suspensions, and
resumptions. Promotions and demotions move objects through the normal states. Suspensions and
resumptions are used to move objects into and out of the exception states.
Promotions
Promotion moves an object from one normal state to the next normal state. Users who own an
object or are superusers need only Write permission to promote the object. Other users must have
Write permission and Change State permission to promote an object. If the user has only Change
State permission, Documentum Server will attempt to promote the object as the user defined in the
a_bpaction_run_as property in the docbase config object. In those instances, that user must be either
the owner or a superuser with Write permission or have Write and Change State permission on
the object.
A promotion only succeeds if the object satisfies any entry criteria and actions on entry defined for
the target state.
It is possible to bypass the entry criteria. If you choose to do that, the server does not enforce the
entry criteria, but simply performs the actions associated with the destination state and, on their
completion, moves the object to the destination state. You must own the lifecycle policy object or be
a superuser to bypass entry criteria.
Promotions are accomplished programmatically using one of the promote methods in the
IDfSysObject interface. Bypassing the entry criteria is accomplished by setting the override argument
in the method to true.
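The permission rules above can be condensed into a small check. This sketch is illustrative only; actual enforcement happens inside Documentum Server, not in client code:

```java
// Illustrative model of the promotion rules described above; the real
// enforcement is performed by Documentum Server, not client code.
public class PromotionRules {
    public static boolean canPromote(boolean isOwnerOrSuperuser,
                                     boolean hasWrite,
                                     boolean hasChangeState) {
        if (isOwnerOrSuperuser) {
            return hasWrite;               // owners/superusers need only Write
        }
        return hasWrite && hasChangeState; // others need Write + Change State
    }

    public static boolean canBypassEntryCriteria(boolean ownsPolicy,
                                                 boolean isSuperuser) {
        // Only the lifecycle (policy) owner or a superuser may set override
        return ownsPolicy || isSuperuser;
    }

    public static void main(String[] args) {
        System.out.println(canPromote(true, true, false));  // prints "true"
        System.out.println(canPromote(false, true, false)); // prints "false"
    }
}
```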
Documentum Server also supports batch promotion through the BATCH_PROMOTE administration
method, which promotes multiple objects in one operation.
Demotions
Demotion moves an object from a normal state back to the previous normal state or back to the base
state. Demotions are only supported by states that are defined as allowing demotions. The value of
the allow_demote property for the state must be TRUE. Additionally, to demote an object back to the
base state, the return_to_base property value must be TRUE for the current state.
Users who own an object or are superusers need only Write permission to demote the object. Other
users must have Write permission and Change State permission to demote an object. If the user has
only Change State permission, Documentum Server will attempt to demote the object as the user
defined in the a_bpaction_run_as property in the docbase config object. In those instances, that
user must be either the owner or a superuser with Write permission or have Write and Change
State permission on the object.
If the object's current state is a normal state, the object can be demoted to either the previous normal
state or the base state. If the object's current state is an exception state, the object can be demoted only
to the base state. Demotions are accomplished programmatically using one of the demote methods in
the IDfSysObject interface.
Suspensions
Suspension moves an object from the current normal state to an exception state. Users who
own an object or are superusers need only Write permission to suspend the object. Other users
must have Write permission and Change State permission to suspend an object. If the user has
only Change State permission, Documentum Server will attempt to suspend the object as the user
defined in the a_bpaction_run_as property in the docbase config object. In those instances, that
user must be either the owner or a superuser with Write permission or have Write and Change
State permission on the object.
When an object is moved to an exception state, the server checks the state entry criteria and executes
the actions on entry. The criteria must be satisfied and the actions completed to successfully move the
object to the exception state.
It is possible to bypass the entry criteria. If you choose to do that, the server does not enforce the
entry criteria, but simply performs the actions associated with the destination state and, on their
completion, moves the object to the destination state. You must own the lifecycle policy object or be
a superuser to bypass entry criteria.
Suspending an object is accomplished programmatically using one of the suspend methods in the
IDfSysObject interface. Bypassing the entry criteria is accomplished by setting the override argument
in the method to true.
Resumptions
Resumption moves an object from an exception state back to the normal state from which it
was suspended or back to the base state. Users who own an object or are superusers need only
Write permission to resume the object. Other users must have Write permission and Change State
permission to resume an object. If the user has only Change State permission, Documentum Server
will attempt to resume the object as the user defined in the a_bpaction_run_as property in the
docbase config object. In those instances, that user must be either the owner or a superuser with Write
permission or have Write and Change State permission on the object.
Additionally, to resume an object back to the base state, the exception state must have the
return_to_base property set to TRUE.
When an object is resumed to either the normal state or the base state, the object must satisfy the
target state entry criteria and action on entry. The criteria must be satisfied and the actions completed
to successfully resume the object to the destination state.
It is possible to bypass the entry criteria. If you choose to do that, the server does not enforce the
entry criteria, but simply performs the actions associated with the destination state and, on their
completion, moves the object to the destination state. You must own the lifecycle policy object or be
a superuser to bypass entry criteria.
Programmatically, resuming an object is accomplished using one of the resume methods in the
IDfSysObject interface. Bypassing the entry criteria is accomplished by setting the override argument
in the method to true.
Scheduled transitions
A scheduled transition is a transition from one state to another at a predefined date and time. If a
lifecycle state is defined as allowing scheduled transitions, you can automate moving objects out of
that state with scheduled transitions. All of the methods that move objects between states have a
variation that allows you to schedule a transition for a particular date and time. If you issue a method
that schedules the movement between states, Documentum Server creates a job for the state change.
The job scheduling properties are set to the specified date and time. The job runs as the user who
issued the initial method that created the job, unless the a_bpaction_run_as property is set in the
repository configuration object. If that is set, the job runs as the user defined in that property.
The destination state for a scheduled change can be an exception state or any normal state except the
base state. You cannot schedule the same object for multiple state transitions at the same time.
You can unschedule a scheduled transition. Each of the methods governing movement also has a
variation that allows you to cancel a schedule change. For example, to cancel a scheduled promotion,
you would use cancelSchedulePromote.
Installing Documentum Server installs a set of methods, implemented as method objects, that support
lifecycle operations. There is a set for lifecycles that use Java and a corresponding set for lifecycles
that use Docbasic. The following table lists the methods.
Java method             Docbasic method      Purpose
dm_bp_transition_java   dm_bp_transition     Executes state transitions.
dm_bp_batch_java        dm_bp_batch          Invoked by BATCH_PROMOTE to promote objects in batches.
dm_bp_schedule_java     dm_bp_schedule       Invoked by jobs created for scheduled state changes; calls the transition method to execute the actual change.
dm_bp_validate_java     dm_bp_validate       Validates the lifecycle definition.
State changes
Movement from one state to another is handled by the dm_bp_transition_java and dm_bp_transition
methods. The dm_bp_transition_java method is used for Java-based lifecycles. The dm_bp_transition
method is used for Docbasic-based lifecycles.
When a user or application issues a promote, demote, suspend, or resume method that does
not include a scheduling argument, the appropriate transition method is called immediately.
If the state-change method includes a scheduling argument, the dm_bp_schedule_java (or
dm_bp_schedule) method is invoked to create a job for the operation. The job scheduling properties
are set to the date and time identified in the scheduling argument of the state-change method. When
the job is executed, it invokes dm_bp_transition_java or dm_bp_transition.
Note: The dm_bp_transition_java and dm_bp_transition methods are also invoked by an attach
method.
The dm_bp_transition_java and dm_bp_transition methods perform the following actions:
1. Use the supplied login ticket to connect to Documentum Server.
2. If the policy does not allow the object to move from the current state to the next state, return an
error and exit.
3. Open an explicit transaction.
a particular lifecycle can be as broad or as narrow as needed. You can design a lifecycle to which
any SysObject or SysObject subtype can be attached. You can also create a lifecycle to which only a
specific subtype of dm_document can be attached.
If the lifecycle handles multiple types, the chosen object types must have the same supertype or one
of the chosen types must be the supertype for the other included types.
The chosen object types are recorded internally in two properties: included_type and
include_subtypes. These are repeating properties. The included_type property records, by name, the
object types that can be attached to a lifecycle. The include_subtypes property is a Boolean property
that records whether subtypes of the object types specified in included_type may be attached to
the lifecycle. The value at a given index position in include_subtypes is applied to the object type
identified at the corresponding position in included_type.
An object can be attached to a lifecycle if either
• The lifecycle included_type property contains the document type, or
• The lifecycle included_type contains the document supertype and the value at the corresponding
index position in the include_subtypes property is set to TRUE
For example, suppose a lifecycle definition has the following values in those properties:
included_type[0]=dm_sysobject
included_type[1]=dm_document
include_subtypes[0]=F
include_subtypes[1]=T
For this lifecycle, users can attach any object of type dm_sysobject itself. However, the only
SysObject subtype that can be attached to the lifecycle is dm_document or any of the dm_document
subtypes.
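The two attachment rules can be checked mechanically by zipping the included_type and include_subtypes values by index. The following sketch applies them; the supertype map and the my_sop type are hypothetical stand-ins for repository type information:

```java
import java.util.List;
import java.util.Map;

// Checks whether an object type can be attached to a lifecycle, using the
// included_type / include_subtypes pairing described above. The supertype
// map is a hypothetical stand-in for repository type information.
public class AttachCheck {
    public static boolean canAttach(String objectType,
                                    List<String> includedType,
                                    List<Boolean> includeSubtypes,
                                    Map<String, String> supertypeOf) {
        for (int i = 0; i < includedType.size(); i++) {
            String allowed = includedType.get(i);
            if (objectType.equals(allowed)) {
                return true;                       // exact type match
            }
            if (includeSubtypes.get(i)) {          // subtypes allowed at this index
                for (String t = supertypeOf.get(objectType); t != null;
                     t = supertypeOf.get(t)) {
                    if (t.equals(allowed)) {
                        return true;               // objectType descends from it
                    }
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> included = List.of("dm_sysobject", "dm_document");
        List<Boolean> subtypes = List.of(false, true);
        Map<String, String> supertypeOf = Map.of(
            "dm_document", "dm_sysobject",
            "dm_folder", "dm_sysobject",
            "my_sop", "dm_document");   // my_sop is a hypothetical subtype
        System.out.println(canAttach("my_sop", included, subtypes, supertypeOf));
        // prints "true": my_sop descends from dm_document, whose subtypes are allowed
    }
}
```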
The object type defined in the first index position (included_type[0]) is called the primary object
type for the lifecycle. Object types identified in the other index positions in included_type must be
subtypes of the primary object type.
You can define a default lifecycle for an object type. If an object type has a default lifecycle, when
users create an object of that type, they can attach the lifecycle to the object without identifying the
lifecycle specifically. Default lifecycles for object types are defined in the data dictionary.
Repository storage
The definition of a lifecycle is stored in the repository as a dm_policy object. The properties of the
object define the states in the lifecycle, the object types to which the lifecycle may be attached, whether
state extensions are used, and whether a custom validation program is used.
The state definitions within a lifecycle definition consist of a set of repeating properties. The values at
a particular index position across those properties represent the definition of one state. The sequence
of states within the lifecycle is determined by their position in the properties. The first state in the
lifecycle is the state defined in index position [0] in the properties. The second state is the state
defined in position [1], the third state is the state defined in index position [2], and so forth.
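Because each state is the slice of values at one index position across the repeating properties, reading the state list amounts to zipping those parallel arrays by position. A simplified sketch, using only an illustrative subset of the real dm_policy properties:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative: rebuilds the ordered state definitions from the parallel
// repeating properties of a dm_policy object. Only a small subset of the
// real properties is modeled here.
public class PolicyStates {
    public record State(String name, boolean exception, boolean attachable) {}

    public static List<State> states(List<String> stateNames,
                                     List<Boolean> exceptionFlags,
                                     List<Boolean> allowAttach) {
        List<State> result = new ArrayList<>();
        for (int i = 0; i < stateNames.size(); i++) {
            // the values at index position [i] across all properties
            // together define state i; [0] is the first state
            result.add(new State(stateNames.get(i),
                                 exceptionFlags.get(i),
                                 allowAttach.get(i)));
        }
        return result;
    }

    public static void main(String[] args) {
        // states from the earlier example lifecycle
        List<State> s = states(List.of("preliminary", "reviewed", "published"),
                               List.of(false, false, false),
                               List.of(true, false, false));
        System.out.println(s.get(0).name()); // prints "preliminary"
    }
}
```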
State definitions include such information as the name of the state, a state type, whether the state is a
normal or exception state, entry criteria, and actions to perform on objects in that state.
Validation of a lifecycle definition ensures that the lifecycle is correctly defined and ready for use
after it is installed. There are two system-defined validation programs: dm_bp_validate_java and
dm_bp_validate. The Java method is invoked by Documentum Server for Java-based lifecycles. The
other method is invoked for Docbasic-based lifecycles. Each method checks the following when
validating a lifecycle:
• The policy object has at least one attachable state.
• The primary type of attachable object is specified, and all types defined in the later positions of
the included_type property are subtypes of the primary attachable type.
• All objects referenced by object ID in the policy definition exist.
• For Java-based lifecycles, that all Service Based Objects (SBOs) referenced by service name exist.
In addition to the system-defined validation, you can write a custom validation program. If
you provide a custom program, Documentum Server executes the system-defined validation first and
then the custom program. Both programs must complete successfully to successfully validate the
definition.
Validating a lifecycle definition requires at least Write permission on the policy object.
Lifecycles that have passed validation can be installed. Only after installation can users begin to attach
objects to the lifecycle. A user must have Write permission on the policy object to install a lifecycle.
Internally, installation is accomplished using an install method.
• Lifecycle state definitions, page 205, contains a detailed list of the information that makes up a
state definition.
• Custom validation programs, page 208, contains more information on writing a custom validation
program.
Designing a lifecycle
Lifecycles are typically created using the Lifecycle Editor, which is accessed through Documentum
Composer. It is possible to create a lifecycle definition by directly issuing the appropriate methods or
DQL statements to create, validate, and install the definition. However, using the Lifecycle Editor is
the recommended and easiest way to create a lifecycle.
When you design a lifecycle, you must make the following decisions:
• What objects will use the lifecycle
• What normal and exception states the lifecycle will contain and the definition of each state
A state definition includes a number of items, such as whether it is attachable, what its entry
criteria, actions on entry, and post-entry actions are, and whether it allows scheduled transitions.
• Whether to include an alias set in the definition
• Whether you want to assign state types
If objects attached to the lifecycle will be handled by the Documentum clients DCM or WCM,
you must assign state types to the states in the lifecycle. Similarly, if the objects will be handled
by a custom application whose behavior depends upon a lifecycle state type, you must assign
state types.
• Types of objects that can be attached to lifecycles, page 201, describes how the object types whose
instances may be attached to a lifecycle are specified.
• Lifecycle state definitions, page 205, contains guidelines for defining lifecycle states.
• Lifecycles, alias sets, and aliases, page 209, describes how alias sets are used with lifecycles.
• State types, page 210, describes the purpose and use of state types.
• State extensions, page 209, describes the purpose and use of state extensions.
When an object is demoted, Documentum Server does not check the entry criteria of the target
state. However, Documentum Server does perform the system and user-defined actions on entry
and post-entry actions.
• Scheduled transitions
A scheduled transition moves an object from one state to another at a scheduled date and time.
Normal states can allow scheduled promotions to the next normal state or a demotion to the base
state. Exception states can allow a scheduled resumption to a normal state or a demotion to
the base state.
Whether a state can be scheduled to transition to another state is recorded in the allow_schedule
property. This property is set to TRUE if you decide that transitions out of the state may be
scheduled. It is set to FALSE if you do not allow scheduled transitions for the state.
The setting of this property only affects whether objects can be moved out of a particular state at
scheduled times. It has no effect on whether objects can be moved into a state at a scheduled time.
For example, suppose StateA allows scheduled transitions and StateB does not. Those settings
mean that you can promote an object from StateA to StateB on a scheduled date, but you cannot
demote an object from StateB to StateA on a scheduled date.
• Entry criteria
Entry criteria are the conditions an object must meet before the object can enter a normal or
exception state when promoted, suspended, or resumed. The entry criteria are not evaluated if the
action is a demotion. Each state may have its own entry criteria.
If the lifecycle is Java-based, the entry criteria can be:
— A Java program
The entry criteria program must implement the interface IDfLifecycleUserEntryCriteria.
— One or more Boolean expressions
— Both Boolean expressions and a Java program
Java-based programs are stored in the repository as SBO modules and a JAR file. Documentum
Foundation Classes Development Guide contains information about SBO modules.
If the lifecycle is Docbasic-based, the entry criteria can be:
— A Docbasic program
— One or more Boolean expressions
— Both Boolean expressions and a Docbasic program
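The evaluation rules above can be modeled in a short sketch. This is illustrative Python, not Documentum API code; the predicate and program callables are hypothetical stand-ins for a state's Boolean expressions and entry criteria program.

```python
# Illustrative model of entry-criteria evaluation; not Documentum API code.
# The Boolean expressions and entry-criteria program are hypothetical callables.

def can_enter_state(state, obj, action):
    """Return True if obj may enter the state for the given transition."""
    if action == "demote":
        return True          # entry criteria are not evaluated on a demotion
    if not all(expr(obj) for expr in state.get("expressions", [])):
        return False         # every Boolean expression must be satisfied
    program = state.get("program")
    return program(obj) if program else True

approved = {
    "expressions": [lambda o: o["status"] == "reviewed"],
    "program": lambda o: o["signed_off"],
}

print(can_enter_state(approved, {"status": "reviewed", "signed_off": True}, "promote"))  # True
print(can_enter_state(approved, {"status": "draft", "signed_off": False}, "demote"))     # True
```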
If you define your own actions on entry, the program must be a Java program if the lifecycle is
Java-based. Java-based actions on entry are stored in the repository as SBO modules and a JAR file. If
the lifecycle is Docbasic-based, the actions on entry program must be a Docbasic program.
If both system-defined and user-defined actions on entry are specified for a state, the server performs
the system-defined actions first and then the user-defined actions. An object can only enter the state
when all actions on entry complete successfully.
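The ordering rule can be sketched as follows (illustrative Python, not a Documentum API; the action callables are hypothetical):

```python
# Illustrative sketch of actions-on-entry ordering; not Documentum API code.

def run_entry_actions(system_actions, user_actions, obj):
    """System-defined actions run first, then user-defined actions.

    The object enters the state only if every action succeeds.
    """
    for action in list(system_actions) + list(user_actions):
        if not action(obj):
            return False
    return True

log = []
sys_actions = [lambda o: log.append("system: set version label") or True]
usr_actions = [lambda o: log.append("user: notify owner") or True]

print(run_entry_actions(sys_actions, usr_actions, {}))  # True
print(log)  # the system action is listed first
```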
Actions on entry include:
• System-defined actions
A set of pre-defined actions on entry are available for use. When you create or modify a lifecycle
using Lifecycle Editor, you can choose one or more of these actions.
• Java programs
A Java program used as an action on entry program must implement the interface
IDfLifecycleUserAction.
• Docbasic programs
Docbasic actions on entry programs are stored in the repository as dm_procedure objects. The
object IDs of the procedure objects are recorded in the user_action_id property. This property is
set internally when you identify the programs while creating or modifying a lifecycle using
Lifecycle Editor.
if the user sign-off does not succeed, the entry criteria or action does not complete successfully and
the object is not moved to the state.
means that if those programs are written in Java, the custom validation program must be in Java also.
If the programs are written in Docbasic, the validation program must be in Docbasic also.
Note: Docbasic is a deprecated language.
After you write the program, use Documentum Composer to add the custom validation program
to the lifecycle definition. You must own the lifecycle definition (the policy object) or have at least
Version permission on it to add a custom validation program to the lifecycle.
• The Documentum Composer documentation contains instructions for creating a custom validation
program and adding it to a lifecycle definition.
State extensions
State extensions are used to provide additional information to applications for use when an object is
in a particular state. For example, an application may require a list of users who have permission
to sign off a document when the document is in the Approval state. You can provide such a list by
adding a state extension to the Approval state.
Note: Documentum Server does not use information stored in state extensions. Extensions are
solely for use by client applications.
You can add a state extension to any state in a lifecycle. State extensions are stored in the repository
as objects. The objects are subtypes of the dm_state_extension type. The dm_state_extension type is
a subtype of the dm_relation type. Adding state extension objects to a lifecycle creates a relationship
between the extension objects and the lifecycle.
If you want to use state extensions with a lifecycle, determine what information is needed by the
application for each state requiring an extension. When you create the state extensions, you will
define a dm_state_extension subtype that includes the properties that store the information required
by the application for the states. For example, suppose you have an application called EngrApp that
will handle documents attached to LifecycleA. This lifecycle has two states, Review and Approval,
that require a list of users and a deadline date. The state extension subtype for this lifecycle will have
two defined properties: user_list and deadline_date. Or perhaps the application needs a list of users
for one state and a list of possible formats for another. In that case, the properties defined for the state
extension subtypes will be user_list and format_list.
State extension objects are associated with particular states through the state_no property, inherited
from the dm_state_extension supertype.
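For the EngrApp example above, the extension data might be modeled like this. This is an illustrative sketch only; real state extensions are repository objects of a dm_state_extension subtype, and the user names and dates here are invented.

```python
# Hypothetical data model for the EngrApp example; real state extensions
# are repository objects of a dm_state_extension subtype.

extensions = [
    {"state_no": 1, "user_list": ["kendall", "henryp"],
     "deadline_date": "2024-06-30"},  # Review state
    {"state_no": 2, "user_list": ["henryp"],
     "deadline_date": "2024-07-15"},  # Approval state
]

def extension_for_state(extensions, state_no):
    """Return the extension associated with a state via state_no, if any."""
    return next((e for e in extensions if e["state_no"] == state_no), None)

print(extension_for_state(extensions, 2)["user_list"])  # ['henryp']
```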
State extensions must be created manually. The Lifecycle Editor does not support creating state
extensions.
State types
A state type is a name assigned to a lifecycle state that can be used by applications to control behavior
of the application. Using state types makes it possible for a client application to handle objects in
various lifecycles in a consistent manner. The application bases its behavior on the type of the state,
regardless of the state's name or the lifecycle that contains it.
Documentum Document Control Management (DCM) and Documentum Web Content Management
(WCM) expect the states in a lifecycle to have certain state types. The behavior of either Documentum
client when handling an object in a lifecycle is dependent on the state type of the object's current
state. When you create a lifecycle for use with objects that will be handled using DCM or WCM, the
lifecycle states must have state types that correspond to the state types expected by the client. (Refer
to the DCM and WCM documentation for the state type names recognized by each.)
Custom applications can also use state types. Applications that handle and process documents can
examine the state_type property to determine the type of the object's current state and then use the
type name to determine the application behavior.
In addition to the repeating property that defines the state types in the policy object, state types
may also be recorded in the repository using dm_state_type objects. State type objects have two
properties: state_type_name and application_code. The state_type_name identifies the state type and
application_code identifies the application that recognizes and uses that state type. You can create
these objects for use by custom applications. For example, installing DCM creates state type objects
for the state types recognized by DCM. DCM uses the objects to populate pick lists displayed to
users when users are creating lifecycles.
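A custom application's use of state types can be sketched as follows. The type names and behaviors here are hypothetical examples, not names recognized by DCM or WCM.

```python
# Hypothetical example of branching application behavior on the state_type
# of an object's current lifecycle state; names are invented for illustration.

BEHAVIOR = {
    "draft_state": "allow-edit",
    "approved_state": "read-only",
}

def behavior_for(state_type_name):
    """Choose application behavior from the current state's type name."""
    return BEHAVIOR.get(state_type_name, "default")

print(behavior_for("approved_state"))  # read-only
```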
Use the Lifecycle Editor to assign state types to states and to create state type objects. If you have
subtyped the state type object type, you must use the API or DQL to create instances of the subtype.
Chapter 11
Aliases
This chapter describes how aliases are implemented and used. Aliases support Documentum
Server's process management services.
Overview
Aliases are placeholders for user names, group names, or folder paths. You can use an alias in the
following places:
• In SysObjects or SysObject subtypes, in the owner_name, acl_name, and acl_domain properties
• In ACL template objects, in the r_accessor_name property
Note: Aliases are not allowed as the r_accessor_name for ACL entries of type RequiredGroup or
RequiredGroupSet.
• In workflow activity definitions (dm_activity objects), in the performer_name property
• In a link or unlink method, in the folder path argument
You can write applications or procedures that can be used and reused in many situations because
important information such as the owner of a document, a workflow activity performer, or the user
permissions in a document ACL is no longer hard coded into the application. Instead, aliases are
placeholders for these values. The aliases are resolved to real user names, group names, or folder
paths when the application executes.
For example, suppose you write an application that creates a document, links it to a folder, and then
saves the document. If you use an alias for the document owner_name and an alias for the folder
path argument in the link method, you can reuse this application in any context. The resulting
document will have an owner that is appropriate for the application context and be linked into
the appropriate folder also.
The application becomes even more flexible if you assign a template ACL to the document. Template
ACLs typically contain one or more aliases in place of accessor names. When the template is assigned
to an object, the server creates a copy of the ACL, resolves the aliases in the copy to real user or group
names, and assigns the copy to the document.
Aliases are implemented as objects of type dm_alias_set. An alias set object defines paired values
of aliases and their corresponding real values. The values are stored in the repeating properties
alias_name and alias_value. The values at each index position represent one alias and the
corresponding real user name, group name, or folder path.
For example, given the pair alias_name[0]=engr_vp and alias_value[0]=henryp, engr_vp is the alias
and henryp is the corresponding real user name.
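The paired repeating properties can be modeled as parallel lists. This is an illustrative sketch, not the actual object model, and the engr_folder entry is a hypothetical addition.

```python
# Illustrative model of an alias set's paired repeating properties:
# values at the same index in alias_name and alias_value belong together.

alias_set = {
    "object_name": "engr_aliases",
    "alias_name": ["engr_vp", "engr_folder"],
    "alias_value": ["henryp", "/Engineering/Approved"],
}

def lookup(alias_set, alias):
    """Resolve an alias via matching index positions in the paired lists."""
    for name, value in zip(alias_set["alias_name"], alias_set["alias_value"]):
        if name == alias:
            return value
    return None

print(lookup(alias_set, "engr_vp"))  # henryp
```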
Defining aliases
When you define an alias in place of a user name, group name, or folder path, use the following
format for the alias specification:
%[alias_set_name.]alias_name
alias_set_name identifies the alias set object that contains the specified alias name. This value is the
object_name of the alias set object. Including alias_set_name is optional.
alias_name specifies one of the values in the alias_name property of the alias set object.
To put an alias in a SysObject or activity definition, use a set method. To put an alias in a template
ACL, use a grant method. To include an alias in a link or unlink method, substitute the alias
specification for the folder path argument.
For example, suppose you have an alias set named engr_aliases that contains an alias_name called
engr_vp, which is mapped to the user name henryp. If you set the owner_name property to
%engr_aliases.engr_vp, when the document is saved to the repository, the server finds the alias set
object named engr_aliases and resolves the alias to the user name henryp.
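A minimal parser for the alias specification format might look like this (illustrative only; Documentum Server performs this parsing internally):

```python
# Illustrative parser for the %[alias_set_name.]alias_name format.

def parse_alias_spec(spec):
    """Split an alias specification into (alias_set_name, alias_name).

    alias_set_name is None when the optional set name is omitted.
    """
    if not spec.startswith("%"):
        raise ValueError("an alias specification begins with %")
    body = spec[1:]
    set_name, sep, alias = body.partition(".")
    return (set_name, alias) if sep else (None, body)

print(parse_alias_spec("%engr_aliases.engr_vp"))  # ('engr_aliases', 'engr_vp')
print(parse_alias_spec("%engr_vp"))               # (None, 'engr_vp')
```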
It is also valid to specify an alias name without including the alias set name. In such cases, the server
uses a predefined algorithm to search one or more alias scopes to resolve the alias name.
Alias scopes
The alias scopes define the boundaries of the search when the server resolves an alias specification.
If the alias specification includes an alias set name, the alias scope is the alias set named in the alias
specification. The server searches that alias set object for the specified alias and its corresponding
value.
If the alias specification does not include an alias set name, the server resolves the alias by searching a
predetermined, ordered series of scopes for an alias name matching the alias name in the specification.
The scopes that are searched depend on where the alias is found.
If an alias set name is not defined in the alias specification, the server resolves the alias name in
the following manner:
• If the object to which the template is applied has an associated lifecycle, the server resolves the
alias using the alias set defined in the r_alias_set_id property of the object. This alias set is the
object lifecycle scope. If no match is found, the server returns an error.
• If the object to which the template is applied does not have an attached lifecycle, the server
resolves the alias using the alias set defined for the session scope. This is the alias set identified in
the alias_set property of the session config object. If a session scope alias set is defined, but no
match is found, the server returns an error.
• If the object has no attached lifecycle and there is no alias defined for the session scope, the server
resolves the alias using the alias set defined for the user scope. This is the alias set identified in the
alias_set_id property of the dm_user object for the current user. If a user scope alias set is defined
but no match is found, the server returns an error.
• If the object has no attached lifecycle and there is no alias defined for the session or user scope, the
server resolves the alias using the alias set defined for the user default group. If a group alias set
is defined but no match is found, the system returns an error.
• If the object has no attached lifecycle and there is no alias defined for the session, user, or group
scope, the server resolves the alias using the alias set defined for the system scope. If a system
scope alias set is defined but no match is found, the system returns an error.
If no alias set is defined at any level, Documentum Server returns an error stating that an alias set
was not found for the current user.
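The ordered search described above can be sketched as follows (illustrative Python; the scope contents are hypothetical). Note that a scope with no alias set is skipped, while a scope whose alias set lacks the alias produces an error rather than a fall-through to the next scope.

```python
# Illustrative sketch of the ordered alias scope search; not Documentum API.

def resolve(alias, scopes):
    """scopes is an ordered list of (scope_name, alias_set_or_None) pairs."""
    for scope_name, alias_set in scopes:
        if alias_set is None:
            continue          # scope has no alias set; try the next scope
        if alias in alias_set:
            return alias_set[alias]
        raise LookupError("no match for %r in %s scope" % (alias, scope_name))
    raise LookupError("no alias set defined for any scope")

scopes = [
    ("lifecycle", None),              # object has no attached lifecycle
    ("session", None),                # no session scope alias set
    ("user", {"engr_vp": "henryp"}),  # resolved in the user scope
    ("group", {"engr_vp": "someone_else"}),
    ("system", {}),
]
print(resolve("engr_vp", scopes))  # henryp
```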
The server uses the default resolution algorithm when the activity resolve_type property is set to
0. The server searches the following scopes, in the order listed:
• Workflow
• Session
• User performer of the previous work item
• The default group of the previous work item performer
• Server configuration
The server examines the alias set defined in each scope until a match for the alias name is found.
The server uses the package resolution algorithm if the activity's resolve_type property is set to 1. The
algorithm searches only the package or packages associated with the activity incoming ports. Which
packages are searched depends on the setting of the activity resolve_pkg_name property.
If the resolve_pkg_name property is set to the name of a package, the server searches the alias sets
of the package components. The search is conducted in the order in which the components are
stored in the package.
If the resolve_pkg_name property is not set, the search begins with the package defined in
r_package_name[0]. The components of that package are searched. If a match is not found, the search
continues with the components in the package identified in r_package_name[1]. The search continues
through the listed packages until a match is found.
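The package search order can be sketched as follows (illustrative; the package names and component alias sets are hypothetical):

```python
# Illustrative sketch of the package resolution algorithm (resolve_type = 1).

def resolve_from_packages(alias, packages, resolve_pkg_name=None):
    """Search component alias sets, in package order, for the alias."""
    if resolve_pkg_name:
        packages = [p for p in packages if p["name"] == resolve_pkg_name]
    for package in packages:                  # r_package_name[0], [1], ...
        for alias_set in package["component_alias_sets"]:
            if alias in alias_set:
                return alias_set[alias]
    return None

packages = [
    {"name": "ReviewPkg", "component_alias_sets": [{}]},
    {"name": "ApprovePkg", "component_alias_sets": [{"approver": "henryp"}]},
]
print(resolve_from_packages("approver", packages))               # henryp
print(resolve_from_packages("approver", packages, "ReviewPkg"))  # None
```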
The server uses the user resolution algorithm if the activity resolve_type property is set to 2. In such
cases, the search is conducted in the following scopes:
• The alias set defined for the user performer of the previous work item
• The alias set defined for the default group of the user performer of the previous work item
The server first searches the alias set defined for the user. If a match isn't found, the server searches
the alias set defined for the user default group.
When the server finds a match in an alias set for an alias in an activity, the server checks the
alias_category value of the match. The alias_category value must be one of:
• 1 (user)
• 2 (group)
• 3 (user or group)
If the alias_category is appropriate, the server next determines whether the alias value is a user
or group, depending on the setting in the activity performer_type property. For example, if
performer_type indicates that the designated performer is a user, the server will validate that the
alias value represents a user, not a group. If the alias value matches the specified performer_type, the
work item is created for the activity.
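The category check can be modeled as follows (illustrative sketch, not Documentum API code):

```python
# Illustrative model of the alias_category check.
# alias_category values: 1 = user, 2 = group, 3 = user or group.

ALLOWED = {1: {"user"}, 2: {"group"}, 3: {"user", "group"}}

def category_allows(alias_category, performer_type):
    """performer_type is 'user' or 'group', per the activity definition."""
    return performer_type in ALLOWED.get(alias_category, set())

print(category_allows(1, "user"))   # True
print(category_allows(2, "user"))   # False
```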
Resolution errors
If the server does not find a match for an alias, or finds a match but the associated alias category
value is incorrect, the server:
• Generates a warning
• Posts a notification to the inbox of the workflow supervisor
• Assigns the work item to the supervisor
Chapter 12
Internationalization Summary
Overview
Internationalization refers to the ability of the Documentum Server to handle communications and
data transfer between itself and client applications in a variety of code pages. This ability means that
the Documentum Server does not make assumptions based on a single language or locale. (A locale
represents a specific geographic region or language group.)
Documentum Server runs internally with the UTF-8 encoding of Unicode. The Unicode Standard
provides a unique number to identify every letter, number, symbol, and character in every language.
UTF-8 is a variable-width encoding of Unicode, with each character represented by one to
four bytes.
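The variable width is easy to observe (Python shown for illustration):

```python
# UTF-8 encodes each character in one to four bytes.

for ch in ("A", "é", "日", "😀"):
    print(ch, len(ch.encode("utf-8")))
# A uses 1 byte, é uses 2, 日 uses 3, and 😀 uses 4
```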
Documentum Server handles transcoding of data from national character sets (NCS) to and from
Unicode. A national character set is a character set used in a specific region for a specific language.
For example, the Shift-JIS and EUC-JP character sets are used for representing Japanese characters.
ISO-8859-1 (sometimes called Latin-1) is used for representing English and European languages. Data
can be transcoded from a national character set to Unicode and back without data loss. Only common
data can be transcoded from one NCS to another. Characters that are present in one NCS cannot be
transcoded to an NCS in which they are not available.
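The difference between lossless NCS-to-Unicode transcoding and lossy NCS-to-NCS transcoding can be demonstrated with standard codecs (Python shown for illustration):

```python
# Round-tripping a national character set through Unicode is lossless, but
# characters absent from a target NCS cannot be transcoded to it.

japanese = "日本語"

# Shift-JIS text survives a round trip through Unicode intact.
assert japanese.encode("shift_jis").decode("shift_jis") == japanese

# Latin-1 has no Japanese characters, so transcoding to it fails.
try:
    japanese.encode("iso-8859-1")
except UnicodeEncodeError:
    print("cannot represent Japanese text in ISO-8859-1")
```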
Note: Internationalization and localization are different concepts. Localization is the ability to display
values such as names and dates in the languages and formats specific to a locale. Documentum Server
uses a data dictionary to provide localized values for applications. For information about the data
dictionary and localization, refer to Chapter 4, The Data Model.
Content files
You can store content files created in any code page and in any language in a repository. The files are
transferred to and from a repository as binary files.
Note: It is recommended that all XML content use one code page.
Metadata
The metadata values you can store depend on the code page of the underlying database. The code
page may be a national character set or it may be Unicode.
If the database was configured using a national character set as the code page, you can store only
characters allowed by that code page. For example, if the database uses EUC-KR, you can store only
characters that are in the EUC-KR code page as metadata.
All code pages supported by the Documentum System include ASCII as a subset. Consequently, you
can store ASCII metadata in databases using any supported code page.
If you configured the database using Unicode, you can store metadata using characters from any
language. However, your client applications must be able to read and write the metadata without
corrupting it. For example, a client using the ISO-8859-1 (Latin-1) code page internally cannot read
and write Japanese metadata correctly. Client applications that are Unicode-compliant can read and
write data in multiple languages without corrupting the metadata.
Constraints
A UTF-8 Unicode repository can store metadata from any language. However, if your client
applications are using incompatible code pages in national character sets, they may not be able to
handle metadata values set in a different code page. For example, if an application using Shift-JIS or
EUC-JP (the Japanese code pages) stores objects in the repository and another application using
ISO-8859-1 (Latin-1 code page) retrieves that metadata, the values returned to the ISO-8859-1
application will be corrupted because there are characters in the Japanese code page that are not
found in the Latin-1 code page.
operating system. These parameters are used by the server in managing data, user authentication,
and other functions.
The Documentum system has recommended locales for the server host and recommended code
pages for the server host and database.
The server config object describes a Documentum Server and contains information that the server
uses to define its operations and operating environment.
• locale_name
The locale of the server host, as defined by the host operating system. The value is determined
programmatically and set during server installation. The locale_name determines which data
dictionary locale labels are served to clients that do not specify their locale.
• default_client_codepage
The default code page used by clients connecting to the server. The value is determined
programmatically and set during server installation. It is strongly recommended that you do not
reset the value.
• server_os_codepage
The code page used by the server host. Documentum Server uses this code page when it
transcodes user credentials for authentication and the command-line arguments of server
methods. The value is determined programmatically and set during server installation. It is
strongly recommended that you do not reset the value.
Documentum Server Administration and Configuration Guide contains a table of default values for code
pages by locale.
A client config object records global information for client sessions. It is created when DFC is
initialized. The values reflect the information found in the dfc.properties file used by the DFC
instance. Some of the values are then used in the session config object when a client opens a
repository session.
The following properties for internationalization are present in a client config object:
• dfc.codepage
The dfc.codepage property controls conversion of characters between the native code page and
UTF-8. The value is taken from the dfc.codepage key in the dfc.properties file on the client host.
This code page is the preferred code page for repository sessions started using the DFC instance.
The value of dfc.codepage overrides the value of the default_client_codepage property in the
server config object.
The default value for this key is UTF-8.
• dfc.locale
This is the client preferred locale for repository sessions started by the DFC instance.
A session config object describes the configuration of a repository session. It is created when a client
opens a repository session. The property values are taken from values in the client config object, the
server config object, and the connection config object.
The following properties for internationalization are set in the session config object:
• session_codepage
This property is obtained from the client config object dfc.codepage property. It is the code page
used by a client application connecting to the server from the client host.
If needed, set the session_codepage property in the session config object early in the session and
do not reset it.
• session_locale
The locale of the repository session. The value is obtained from the dfc.locale property of the
client config object. If dfc.locale is not set, the default value is determined programmatically from
the locale of the client host machine.
The values of the dfc.codepage and dfc.locale properties in the client config object determine
the values of session_codepage and session_locale in the session config object. These values are
determined in the following manner:
1. Use the values supplied programmatically by an explicit set on the client config object or session
config object.
2. If the values are not explicitly set, examine the settings of dfc.codepage and dfc.locale keys in
the dfc.properties file.
If not explicitly set, the dfc.codepage and dfc.locale keys are assigned default values. DFC
derives the default values from the Java Virtual Machine (JVM), which gets the defaults from
the operating system.
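The precedence can be sketched as follows (illustrative Python; the function and parameter names are not part of any Documentum API):

```python
# Illustrative sketch of the session_codepage / session_locale precedence:
# explicit set > dfc.properties key > default derived from the JVM/OS.

def effective_setting(explicit_value, dfc_properties, key, jvm_default):
    if explicit_value is not None:
        return explicit_value            # 1. explicit set on the config object
    if key in dfc_properties:
        return dfc_properties[key]       # 2. key set in dfc.properties
    return jvm_default                   # 3. default derived from the JVM/OS

props = {"dfc.codepage": "UTF-8"}
print(effective_setting(None, props, "dfc.codepage", "windows-1252"))  # UTF-8
print(effective_setting(None, props, "dfc.locale", "en"))              # en
```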
Other Requirements
• dm_user.user_db_name
• dm_user.user_address
• dm_group.group_name
The requirements for these properties differ depending on the site configuration. If the repository is
a standalone repository, the values in the properties must be compatible with the code page defined
in the server's server_os_codepage property. (A standalone repository does not participate in object
replication or a federation, and its users never access objects from remote repositories.)
If the repository is in an installation with multiple repositories and all repositories have the same
code page defined in server_os_codepage, the values in these properties must be compatible
with that code page. However, if the repositories have different code pages identified in
server_os_codepage, the values in the properties listed above must consist of only ASCII characters.
Lifecycles
The scripts that you use as actions in lifecycle states must contain only ASCII characters.
Docbasic
Docbasic does not support Unicode. For all Docbasic server methods, the code page in which the
method is written and the code page of the session the method opens must be the same and must
both be the code page of the Documentum Server host (the server_os_codepage).
Docbasic scripts that run on client machines must be in the code page of the client operating system.
Federations
Federations are created to keep global users, groups, and external ACLs synchronized among
member repositories.
A federation can include repositories using different server operating system code pages
(server_os_codepage). In a mixed-code page federation, the following user and group property
values must use only ASCII characters:
• user_name
• user_os_name
• user_address
• group_address
ACLs can use Unicode characters in ACL names.
Object replication
When object replication is used, the databases for the source and target repositories must use the
same code page or the target repository must use Unicode. For example, you can replicate from a
Japanese repository to a French repository if the French repository database uses Unicode. If the
French repository database uses Latin-1, replication fails.
In mixed code page environments, the source and target folder names must contain only ASCII
characters. The folders contained by the source folder are not required to be named with only ASCII
characters.
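The compatibility rule can be expressed as a simple check (illustrative sketch; the code page names are examples):

```python
# Illustrative check of the replication rule: the database code pages must
# match, or the target database must use Unicode.

def can_replicate(source_codepage, target_codepage):
    return target_codepage == source_codepage or target_codepage == "UTF-8"

print(can_replicate("Shift_JIS", "UTF-8"))       # True  (target is Unicode)
print(can_replicate("Shift_JIS", "ISO-8859-1"))  # False (incompatible)
```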