Disclaimer

The material in this document is for informational purposes only. The products it describes are subject to change without prior notice, due to the manufacturer’s continuous development program. Rampiva makes no representations or warranties with respect to this document or with respect to the products described herein. Rampiva shall not be liable for any damages, losses, costs or expenses, direct, indirect or incidental, consequential or special, arising out of, or related to the use of this material or the products described herein.

The Relativity-related modules may only be used by parties with valid licenses for Relativity®, a product of Relativity ODA LLC. Relativity ODA LLC does not test, evaluate, endorse or certify this product.

The Brainspace-related modules may only be used by parties with valid licenses for Brainspace, a product of the Brainspace Corporation. The Brainspace Corporation does not test, evaluate, endorse or certify this product.

© Rampiva Technology Inc. 2023 All Rights Reserved

Introduction

This guide describes the features and options of Rampiva Workflow, a product which is part of the Rampiva Automate suite. This document is intended as a reference - use the table of contents to locate the topic you want to find out about.

The Rampiva software and this documentation may contain bugs, errors, or other limitations. If you encounter any issues with the Rampiva software or with this documentation, please contact support@rampiva.com.

Styles Used in This Guide

Note: This icon indicates that additional clarifications are provided, for example what the valid options are.
Tip: This icon lets you know that some particularly useful tidbit is provided, perhaps a way in which to use the application to achieve a certain behavior.
Warning: This icon highlights information that may help you avoid an undesired behavior.

Emphasized: This style indicates the name of a menu, option or link.

code: This style indicates code that should be used verbatim, and can refer to file paths, parameter names or Nuix search queries.

1. Quick Start

1.1. Designing a native-only workflow

Workflows containing only Rampiva native operations can be designed in the Standalone Workflow Designer. This is a standalone application that is installed under `C:\Program Files\Rampiva\Automate\bin\Rampiva Native Workflow Designer.exe`.

To design a workflow containing both Rampiva native operations as well as Nuix operations, see Designing a full workflow.

1.2. Designing a full workflow

Workflows containing any operations can be created and edited in the Workflow Designer module. To run this module, start Nuix Workbench and open the Scripts → Rampiva → Workflow Designer menu.

1.3. Executing a workflow

After designing a workflow, it can be executed using the Workflow Execution module. This module can be run either in the currently open Nuix Workbench case, or in a new or existing case which is not open. To run this module, use the Scripts → Rampiva → Workflow Execution menu.

1.4. Quick run

To run a single operation without creating a workflow, use the Quick Run modules. To run a Quick Run module, first open a case in Nuix Workbench and then use the Scripts → Rampiva → Quick Run menu to select the operation that needs to be executed.

2. Licensing

When starting either the Workflow Designer or Workflow Execution module, the module will look for and validate the license on the local system. If a license file is required, please contact support@rampiva.com.

The default location of the license file can be specified using the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Rampiva\Workflow for Nuix\LicenceFile.

3. Workflow Design

3.1. Concepts

A workflow is composed of a series of operations which execute in sequence. When running the Workflow Designer, the user is presented with a blank workflow. Operations can be added to the workflow using the Edit → Add Operation menu.

The order of operations in the workflow can be changed using the Edit menu and the Move to Top, Move Up, Move Down and Move To Bottom options.

When designing a workflow, parameters can be used along with static text in every field which accepts user input, such as search queries, file paths, production set names, etc. See the Parameters Guide for more details.

General options can be set using the Edit → Options menu. These options are stored in the user profile and determine whether the system checks for updates whenever Rampiva Workflow is started and whether telemetry is enabled.

After completing the design of a workflow, the workflow can be saved using the File → Save Workflow menu. At this point, the Workflow Designer can be closed, or the user can switch to the Workflow Execution module using File → Workflow Execution.

To disable the execution of certain operations in the workflow without deleting them from the workflow, use the Edit → Enable / Disable / Toggle Enabled menu in the Workflow Designer.

By default, when an operation encounters an error, the workflow execution is stopped. This behavior can be changed to have operations soft fail, which will continue the execution of the workflow even if one operation encounters an error. This can be configured using the Edit → Enable / Disable Soft Fail menu in the Workflow Designer.

Workflow details can be set using the Edit → Workflow Details menu. This includes the workflow execution mode, workflow description, list of prerequisites, and usage description. Workflows can run in one of the following two modes:

  • Automate Native: These are workflows which only use native Rampiva operations, such as Script and Run External Application.

  • Automate Nuix: These are workflows which can use Nuix cases and associated operations.

Legacy workflows which required the user to interact during the execution in Nuix Workstation are migrated to the Automate Nuix mode when opened in Workflow Designer.

The mode in which a workflow is designed is displayed in the bottom status bar.

3.2. Operations

3.2.1. Configure Parameters

This operation lets users define custom parameters which exist for the duration of the workflow execution. Custom parameters can be manually defined or loaded from a CSV or TSV file, along with a value, description and validation regex.

There are two types of parameters that can be defined in this operation: Static parameters and User parameters. Static parameters are parameters that have a fixed value which is defined in the operation configuration. For User parameters, a prompt is presented when queueing the workflow to provide the values.

Display Conditions

Display conditions can be used to determine if a user is prompted to provide a value for a certain parameter, depending on the values of previously filled-out parameters.

For example, if there are two parameters {perform_add_evidence} and {source_data_location}, a display condition could be set up to only display the {source_data_location} parameter if the value of the {perform_add_evidence} parameter is True.

If a parameter does not match the display condition, it will have a blank value.

Display conditions can only reference parameters defined in the same Configure Parameters operation above the current parameter.
Parameter Value Filters

The following parameter value filters can be applied, depending on the parameter type:

  • Text parameter values can be filtered using regular expressions (regex).

  • Number parameter values can be filtered using a minimum and maximum allowed value.

  • Relativity parameter values can be filtered based on other previous Relativity parameters, such as the Relativity client or workspace. These filters require the use of a Relativity Service.

3.2.2. Configure Nuix

This operation is used to define the settings of the Nuix processing engine, from a Nuix Configuration profile and/or a Nuix Processing profile. The use of Processing profiles is recommended over Configuration profiles.

By default, Nuix stores configuration profiles in the user-specific folder %appdata%\Nuix\Profiles. To make a configuration profile available to all users, copy the corresponding .npf file to %programdata%\Nuix\Profiles.
Only a subset of the settings from Configuration profiles is supported in Rampiva Workflow, including Evidence Processing Settings (Date Processing, MIME Type, Parallel Processing) and Legal Export (Export Type - partial, Load File - partial, Parallel Processing).
Configure Workers

The worker settings can either be extracted from the Nuix settings (see above) or can be explicitly provided in the workflow.

For local workers, these settings can be used to specify the number of local workers, the memory per worker and the worker temporary directory.

Nuix does not support running the OCR operation and the Legal Export operation with no local workers. If a value of 0 is specified for the local workers in these operations, Rampiva Workflow will start the operation with 1 local worker and as many remote workers as requested.

For remote workers, the number of remote workers, the worker broker IP address and port must be specified.

The worker broker must be running before starting the workflow, to avoid an unrecoverable error from Nuix that will halt the execution of the workflow.
If the number of remote workers requested is not available immediately when starting the operation that requires workers, Rampiva Workflow will keep trying to assign the required number of workers up until the end of the operation.

Parallel processing settings can also be set using the following parameters:

  • {local_worker_count} - The number of local workers to run;

  • {local_worker_memory} - The memory (in MB) of each local worker;

  • {broker_worker_count} - The number of remote workers to assign;
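
Sample parallel processing parameters (the values below are for illustration only):

  • {local_worker_count} : 4

  • {local_worker_memory} : 4096

  • {broker_worker_count} : 8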

Password settings

Passwords are used during the loading and re-loading of the data in Nuix. This section allows for specifying a password list or a passwords file.

Keystore settings

Keystores are used during the loading and re-loading of the data in Nuix. This section allows for specifying a CSV or TSV file containing the keystore information.

Keystore configuration file

The keystore file expects the following columns:

  • Path: The file path to the keystore

  • Password: The password of the keystore

  • Alias: The alias to use from the keystore

  • AliasPassword: The password for the alias

  • Target: The notes storage format file (NSF)

Sample Lotus Notes ID:

Path	Password	Alias	AliasPassword	Target
C:\Stores\Lotus\user.id	password			example.nsf
C:\Stores\Lotus\rampiva.id	password123			rampiva.nsf
When configuring a Lotus Notes ID store, the target can be the full path or the filename of the notes storage format file (NSF). Additionally, the target can be set to * for the ID file to be applied to any NSF file.

Sample showing PGP, PKCS12 and Lotus Notes ID:

Path	Password	Alias	AliasPassword	Target
C:\Stores\PGP\0xA8B31F11-sec.asc		test@rampiva.com	test_password
C:\Stores\PKCS12\template.keystore	password	ssl_cert
C:\Stores\Lotus\user.id	password			example.nsf
C:\Stores\PKCS12\rampiva.keystore	password123	rampiva-sample
C:\Stores\PGP\0x9386E293-sec.asc		user@rampiva.com	abcd1234
When configuring the keystore file, not all columns will have values. Before adding this file to the workflow, verify that the values are in the correct columns.

A single keystore can be set using the following parameters:

  • {keystore_file_path} - The path to the keystore.

  • {keystore_file_password} - The password of the keystore.

  • {keystore_file_alias} - The alias to use from the keystore.

  • {keystore_file_alias_password} - The password for the alias.

  • {keystore_file_target} - The notes storage format file (NSF).

When using a single keystore, the {keystore_file_path} parameter must contain a valid file path for the keystore to be added.
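
Sample single keystore parameters, reusing the Lotus Notes ID values from the file sample above:

  • {keystore_file_path} : C:\Stores\Lotus\user.id

  • {keystore_file_password} : password

  • {keystore_file_target} : example.nsf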

The keystore file can also be set using the parameter:

  • {keystore_tsv} - The file path to the keystore CSV or TSV file;

Require Nuix Profiles in Execution Profile

When using the workflow in Rampiva Automate, selecting the Require all Nuix profiles to be supplied in the Execution Profile option will require that all Nuix profiles used in the workflow are explicitly supplied in the Execution Profile. If profiles are missing, the Job will not start.

3.2.3. Use Case

This operation opens an existing Nuix case or creates one, depending on the Method option specified.

The case timezone can be overridden by setting the {case_timezone_id} parameter. See Joda Time Zones for a list of valid timezone IDs.

3.2.4. Add to Compound Case

This operation adds existing cases to the currently opened Nuix case.

The current Nuix case must be a compound case, otherwise this operation will fail during execution.

By default the compound case will be closed and reopened after all child cases are added. The option Skip reloading compound case changes this behavior and does not reload the compound case. Some operations might not perform correctly when using this option due to the compound case not being refreshed.

3.2.5. Add Evidence

This operation adds evidence to the Nuix case.

The type of data that is added to the Nuix case is defined using the Scope setting.

The source data timezone is specified in the settings and can be overridden by setting the {data_timezone_id} parameter. See Joda Time Zones for a list of valid timezone IDs.

The source encoding and zip encoding can be specified in the settings.

Deduplication

If this option is selected, data will be deduplicated at ingestion. Unless data will be added to the case in a single batch, the option Track and deduplicate against multiple batchloads needs to be selected.

The mechanism for deduplication at ingestion is designed to be used for the specific scenarios where a large amount of data is loaded and which is expected to have a high level of duplication. Due to the live synchronization required between the Nuix workers during the ingestion, only one ingestion with deduplication can run at a time on a server, and no remote workers can be added.

Handling of duplicate items:

  • Metadata-only processing: Deduplication status is tracked using the metadata field Load original. Top-level original items will have the value true in this field and will have all typical metadata and descendants processed - the descendants will not have this metadata field populated. Top-level duplicate items will have the value false in this field and no other properties, except for the metadata field Load duplicate of GUID, which will indicate the GUID of the original document with the same deduplication key as the duplicate document.

To query all items that were not flagged as duplicates, use query !boolean-properties:"Load original":false.
  • Skip processing entirely will completely skip items identified as duplicates and no reference of these items will exist in the case.

Deduplication method:

  • Top-level MD5: Uses the MD5 hash of the top-level item.

  • Email Message-ID: Uses the email Message-ID property from the first non-blank field: Message-ID, Message-Id, Mapi-Smtp-Message-Id, X-Message-ID, X-Mapi-Smtp-Message-Id, Mapi-X-Message-Id, Mapi-X-Smtp-Message-Id.

  • Email MAPI Search Key: Uses the email MAPI Search Key property from the first non-blank field: Mapi-Search-Key, X-Mapi-Search-Key.

For a deduplication result similar to the post-ingestion Nuix ItemSet deduplication, check only the Top-level MD5 option. For the most comprehensive deduplication result, check all three options.
Emails in the Recoverable Items folder are not considered for deduplication based on Message-ID and MAPI Search Key, due to the fact that data in this folder is typically unreliable.
Date filter

All modes other than No filter specify the period for which data will be loaded. All items that fall outside of the date filter will be skipped entirely and no reference of these items will exist in the case.

Mime type filter

Allows setting a filter that restricts data of certain MIME types to specific item names.

For example, the filter mode Matches, with mime-type application/vnd.ms-outlook-folder and item name Mailbox - John Smith will have the following effect:

  • Items which are in a PST or EDB file must have the first Outlook Folder in their path named Mailbox - John Smith.

  • Items which are not in a PST or EDB file are not affected.

The Mime type filter can be used to select specific folders for loading from an Exchange Database (EDB) file.
Add Evidence from Evidence listing

When selecting the Scope option Evidence listing, the Source path is expected to point to a CSV or TSV file with the following columns:

  • Name: The name of the evidence container

  • Path: The path to the file or folder to load

  • Custodian: Optional, the custodian value to assign

  • Timezone: Optional, the timezone ID to load the data under. See Joda Time Zones for a list of valid timezone IDs.

  • Encoding: Optional, the encoding to load the data under.

  • ZipEncoding: Optional, the encoding to load the zip files under.

If additional columns are specified, these will be set as custom evidence metadata.

If optional settings are not provided, the default settings from the Add Evidence operation will be used.

When selecting the option Omit evidence folder names, the last folder name from the path to each evidence included in the listing will not be included in the path in the Nuix case. Instead, all items from the folder will appear directly under the evidence container.

Sample evidence listing:

Name	Path	Custodian	Encoding	Timezone	Sample Custom Field	Another Sample Field
Evidence1	C:\Data\Folder1	Morrison, Jane	UTF-8	Europe/London	Value A	Value B
Evidence2	C:\Data\Folder2	Schmitt, Paul	Windows-1252	Europe/Berlin	Value C	Value D
Add Evidence from Data Set

When selecting the Scope option Data set, the Data set ID field should point to a data set parameter defined in the Configuration operation.

The Data set scope is only compatible with jobs submitted in Rampiva Scheduler and for Matters that have Data sets associated with them.
Add Evidence from Microsoft Graph

When adding data using the Microsoft Graph, the following configuration parameters must be defined prior to the Add Evidence operation.

  • {ms_graph_tenant_id}: The tenant ID for Azure AD.

  • {ms_graph_client_id}: The client/application ID for the app that has been registered with Azure AD and granted the necessary privileges.

  • {ms_graph_client_secret_protected}: The client secret that has been configured for the client ID provided, for authentication.

  • {ms_graph_certificate_store_path}: The path to a PKCS#12 certificate store, to use instead of the client secret, for authentication.

  • {ms_graph_certificate_store_password}: The password for the PKCS#12 certificate store, if present.

  • {ms_graph_username}: Optionally, the username for a user that is a member of the Teams to be processed; this is only needed for ingesting Team Calendars.

  • {ms_graph_password}: The password for the username, if it is present.

For authentication, one of the {ms_graph_client_secret_protected} or {ms_graph_certificate_store_path} parameters must be set.
  • {ms_graph_start_datetime}: The beginning of the collection date range.

  • {ms_graph_end_datetime}: The end of the collection date range.

For collection of calendars (Users or Teams), the date range cannot exceed 5 years.
  • {ms_graph_retrievals}: A list of the content types to be retrieved, containing one or more of the following values: TEAMS_CHANNELS, TEAMS_CALENDARS, USERS_CHATS, USERS_CONTACTS, USERS_CALENDARS, USERS_EMAILS, ORG_CONTACTS, SHAREPOINT.

  • {ms_graph_mailbox_retrievals}: Optionally, a list of areas to retrieve from, containing one or more of the following values: MAILBOX, ARCHIVE, PURGES, DELETIONS, RECOVERABLE_ITEMS, ARCHIVE_PURGES, ARCHIVE_DELETIONS, ARCHIVE_RECOVERABLE_ITEMS, PUBLIC_FOLDERS. By default, only the MAILBOX area is retrieved.

  • {ms_graph_team_names}: Optionally, a list of team names to filter on.

  • {ms_graph_user_principal_names}: Optionally, a list of user principal names to filter on.

  • {ms_graph_version_retrieval}: Optionally, a boolean indicating whether all versions should be retrieved. Defaults to false.

  • {ms_graph_version_limit}: Optionally, an integer limiting the number of versions retrieved if version retrieval is enabled. Defaults to -1, which retrieves all available versions.

Sample Microsoft Graph collection parameters:

  • {ms_graph_tenant_id} : example.com

  • {ms_graph_client_id} : 6161a8bb-416c-3015-6ba5-01b8ca9819f6

  • {ms_graph_client_secret_protected} : AvjAvbb9akNF<pbpaFvz,mAGjgdsl>vk

  • {ms_graph_start_datetime} : 20180101T000000

  • {ms_graph_end_datetime} : 20201231T235959

  • {ms_graph_user_principal_names} : john.smith@example.com, eve.rosella@example.com

  • {ms_graph_retrievals} : TEAMS_CHANNELS, USERS_CHATS, USERS_EMAILS, SHAREPOINT

  • {ms_graph_mailbox_retrievals} : MAILBOX, ARCHIVE, RECOVERABLE_ITEMS, ARCHIVE_RECOVERABLE_ITEMS

For details on how to configure the Microsoft Graph authentication, see the Nuix documentation on the Microsoft Graph connector at https://download.nuix.com/system/files/Nuix%20Connector%20for%20Microsoft%20Office%20365%20Guide%20v9.0.0.pdf
Add Evidence from SharePoint

When adding data from SharePoint, the following configuration parameters must be defined prior to the Add Evidence operation.

  • {sharepoint_uri}: A URI specifying the site address.

  • {sharepoint_domain}: This optional parameter defines the Windows networking domain of the server account.

  • {sharepoint_username}: The username needed to access the account.

  • {sharepoint_password}: The password needed to access the account.

Add Evidence from Exchange

When adding data from Exchange, the following configuration parameters must be defined prior to the Add Evidence operation.

  • {exchange_uri}: The path to the Exchange Web Service (e.g. https://ex2010/ews/exchange.asmx).

  • {exchange_domain}: This optional parameter defines the Windows networking domain of the server account.

  • {exchange_username}: The username needed to access the account.

  • {exchange_password}: The password needed to access the account.

  • {exchange_mailbox}: The mailbox to ingest if it differs from the username.

  • {exchange_impersonating}: A boolean, defaults to false. This optional setting instructs Exchange to impersonate the mailbox user instead of delegating when the mailbox and username are different.

  • {exchange_mailbox_retrieval}: A list containing one or more of the following values: mailbox, archive, purges, deletions, recoverable_items, archive_purges, archive_deletions, archive_recoverable_items, public_folders.

  • {exchange_from_datetime}: This optional parameter limits the evidence to a date range beginning from the specified date/time. It must be accompanied by the {exchange_to_datetime} parameter.

  • {exchange_to_datetime}: This optional parameter limits the evidence to a date range ending at the specified date/time. It must be accompanied by the {exchange_from_datetime} parameter.

Add Evidence from Enterprise Vault

When adding data from Enterprise Vault, the following configuration parameters must be defined prior to the Add Evidence operation.

  • {ev_computer}: The hostname or IP address of Enterprise Vault.

  • {ev_vault}: A vault store ID. This optional parameter limits the evidence to the specified Enterprise Vault vault.

  • {ev_archive}: An archive ID. This optional parameter limits the evidence to the specified Enterprise Vault archive.

  • {ev_custodian}: A name. This optional parameter limits the evidence to the specified custodian or author.

  • {ev_from_datetime}: This optional parameter limits the evidence to a date range beginning from the specified date/time. It must be accompanied by the {ev_to_datetime} parameter.

  • {ev_to_datetime}: This optional parameter limits the evidence to a date range ending at the specified date/time. It must be accompanied by the {ev_from_datetime} parameter.

  • {ev_keywords}: This optional parameter limits the evidence to results matching Enterprise Vault’s query using the words in this string. Subject and message/document content are searched by Enterprise Vault and it will match any word in the string unless specified differently in the {ev_flag} parameter.

  • {ev_flag}: An optional value from any, all, allnear, phrase, begins, beginany, exact, exactany, ends, endsany.

The {ev_flag} parameter specifies how keywords are combined and treated for keyword-based queries. It must be accompanied by the {ev_keywords} parameter but will default to any if it is omitted.
Add Evidence from S3

When adding data from S3, the following configuration parameters must be defined prior to the Add Evidence operation.

  • {s3_access}: This parameter specifies the access key ID for an Amazon Web Service account.

  • {s3_secret_protected}: This parameter specifies the secret access key for an Amazon Web Service account.

  • {s3_credential_discovery_boolean}: This optional parameter is only valid when access and secret are not specified. A true value allows credential discovery by system property. A false or omitted value will attempt anonymous access to the specified bucket.

  • {s3_bucket}: This optional parameter specifies a bucket and optionally a path to a folder within the bucket that contains the evidence to ingest. For example, mybucketname/top folder/sub folder. Omitting this parameter will cause all buckets to be added to evidence.

  • {s3_endpoint}: This optional parameter specifies a particular Amazon Web Service server endpoint. This can be used to connect to a particular regional server, e.g. https://s3.amazonaws.com.
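
Sample S3 parameters, using well-known placeholder credentials (illustrative values only):

  • {s3_access} : AKIAIOSFODNN7EXAMPLE

  • {s3_secret_protected} : wJalrXkEnvtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

  • {s3_bucket} : mybucketname/top folder/sub folder

  • {s3_endpoint} : https://s3.amazonaws.com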

Add Evidence from Documentum

When adding data from Documentum, the following configuration parameters must be defined prior to the Add Evidence operation.

  • {documentum_domain}: This optional parameter defines the Windows networking domain of the server account.

  • {documentum_username}: The username needed to access the account.

  • {documentum_password}: The password needed to access the account.

  • {documentum_port_number}: The port number to connect on.

  • {documentum_query}: A DQL query. This optional parameter specifies a query used to filter the content.

  • {documentum_server}: This parameter specifies the Documentum server address.

  • {documentum_doc_base}: This parameter specifies the Documentum docbase repository.

  • {documentum_property_file}: This optional parameter specifies the Documentum property file.

Add Evidence from SQL Server

When adding data from SQL Server, the following configuration parameters must be defined prior to the Add Evidence operation.

  • {sql_server_domain}: This optional parameter defines the Windows networking domain of the server account.

  • {sql_server_username}: The username needed to access the account.

  • {sql_server_password}: The password needed to access the account.

  • {sql_server_computer}: The hostname or IP address of the SQL Server.

  • {sql_server_max_rows_per_table_number}: The maximum number of rows to return from each table or query. This parameter is optional. It can save time when processing tables or query results with very many rows. The selection of which rows will be returned should be considered arbitrary.

  • {sql_server_instance}: A SQL Server instance name.

  • {sql_server_query}: A SQL query. This optional parameter specifies a query used to filter the content.

Add Evidence from Oracle

When adding data from Oracle, the following configuration parameters must be defined prior to the Add Evidence operation.

  • {oracle_username}: The username needed to access the account.

  • {oracle_password}: The password needed to access the account.

  • {oracle_max_rows_per_table}: The maximum number of rows to return from each table or query. This parameter is optional. It can save time when processing tables or query results with very many rows. The selection of which rows will be returned should be considered arbitrary.

  • {oracle_driver_type}: The driver type used to connect. Can be thin, oci, or kprb.

  • {oracle_database}: A string representation of the connection params. The possible formats are documented at https://www.oracle.com/database/technologies/faq-jdbc.html#05_04

  • {oracle_role}: The role to login as, such as SYSDBA or SYSOPER. For normal logins, this should be blank.

  • {oracle_query}: A SQL query. This parameter specifies a query used to filter the content.

Add Evidence from Dropbox

When adding data from Dropbox, the following configuration parameters must be defined prior to the Add Evidence operation.

  • {dropbox_auth_code_protected}: A string retrieved via a webpage on Dropbox that enables access to an account.

  • {dropbox_team_boolean}: A boolean that indicates that a Dropbox team will be added to evidence. This optional parameter should be present and set to true for all invocations when adding a Dropbox team to evidence. It can be omitted to add an individual Dropbox account.

  • {dropbox_access_token_protected}: A string retrieved using the authCode that enables access to an account. If the access token to an account is already known, provide it directly using this parameter instead of {dropbox_auth_code_protected}. This code doesn’t expire unless the account owner revokes access.

Add Evidence from SSH

When adding data from SSH, the following configuration parameters must be defined prior to the Add Evidence operation.

  • {ssh_username}: The username needed to access the account.

  • {ssh_password}: The password needed to access the account.

  • {ssh_sudo_password}: The password needed to access protected files when using SSH key based authentication.

  • {ssh_key_folder}: Points to a folder on the local system which holds the SSH authentication key pairs.

  • {ssh_computer}: The hostname or IP address of the SSH server.

  • {ssh_port_number}: The port number to connect on.

  • {ssh_host_fingerprint}: The expected host fingerprint for the host being connected to. If this value is not set, then any host fingerprint will be allowed, leaving the possibility of a man-in-the-middle attack on the connection.

  • {ssh_remote_folder}: A folder on the SSH host to start traversing from. This optional parameter limits the evidence to items underneath this starting folder.

  • {ssh_accessing_remote_disks_boolean}: A boolean. When set to true, remote disks (e.g. /dev/sda1) will be exposed as evidence instead of the remote system's file system structure.

Add Evidence from Historical Twitter

When adding data from Twitter, the following configuration parameters must be defined prior to the Add Evidence operation.

  • {twitter_access_token}: A string retrieved using the authCode that enables access to an account. A new app can be created at https://apps.twitter.com to generate this token.

  • {twitter_consumer_key}: The consumer key (API key) of the Twitter app.

  • {twitter_consumer_secret_protected}: The consumer secret (API secret) of the Twitter app.

  • {twitter_access_token_secret_protected}: The access token secret of the Twitter app.

3.2.6. Add Evidence Repository

This operation adds an evidence repository to the case. The typical Nuix options can be used to customize the evidence repository settings.

This operation does not load data into the case. The Rescan Evidence Repositories operation must be used to add data.

3.2.7. Rescan Evidence Repositories

This operation rescans all evidence repositories and adds new data to the case.

The option No new evidence behavior can be used to show a warning, trigger an error, or finish the execution of the workflow if no new evidence is discovered.

3.2.8. Detect and Assign Custodians

This operation detects custodian names using one of the following options:

  • Set custodians from folder names sets the custodian to the same name as the folder at the specified path depth.

  • Set custodians from folder names with typical custodian names attempts to extract custodian names from the folder names, where the folder names contain popular first names, up to the specified maximum path depth.

  • Set custodians from PST files sent emails sender name attempts to extract custodian names from the name of the sender of emails in the Sent folder.

  • Set custodians from data set metadata sets the custodian names defined in the Custodian field in the data set metadata.

When using Set custodians from folder names option, ensure that the scope query contains all of the folders from the Nuix Case root up to the folder depth defined. For example, the query path-guid:{evidence_guid} is not valid because it only contains the items below the evidence container but not the evidence container itself. On the other hand, the query batch-load-guid:{last_batch_load_guid} is valid because it contains all of the items loaded in that specific batch, including the evidence container and all of the folders on which custodian values will be assigned.

The settings of this operation can also be controlled using the following parameters:

  • {set_custodian_from_folder_name} - Enable or disable the Set custodians from folder names option;

  • {custodian_folder_level} - The folder depth corresponding to the Set custodians from folder names option;

  • {set_custodian_from_typical_folder_name} - Enable or disable the Set custodians from folder names with typical custodian names option;

  • {max_custodian_typical_folder_level} - The max folder depth corresponding to the Set custodians from folder names with typical custodian names option.

  • {set_custodian_from_pst} - Enable or disable the Set custodians from PST files sent emails sender name option;

The parameters for enabling or disabling options can be set to true, yes, or Y to enable the option, and to anything else to disable the option.

3.2.9. Exclude Items

This operation excludes items from the case that match specific search criteria.

Entries can be added to the exclusions list using the + and - buttons, or loaded from a CSV or TSV file.

The exclusions can also be loaded from a file during the workflow execution, using the Exclusions file option.

Parameters can be used in the Exclusions file path, to select an exclusion file dynamically based on the requirements of the workflow.

3.2.10. Include Items

This operation includes items previously excluded.

Excluded items which are outside of the scope query will not be included.

Items belonging to all exclusion categories can be included, or alternatively, exclusion names can be specified using the + and - buttons, or loaded from a text file.

3.2.11. Add to Item Set

This operation adds items to an existing Item Set or creates a new Item Set if one with the specified name does not exist.

If the list of items to add to an Item Set is empty, the first Root Item is temporarily added as a filler item to help create the Item Set Batch.

In addition to the standard Nuix deduplication options, Rampiva Workflow offers two additional deduplication methods:

  • Message ID: Uses the email Message-ID property from the first non-blank field: Message-ID, Message-Id, Mapi-Smtp-Message-Id, X-Message-ID, X-Mapi-Smtp-Message-Id, Mapi-X-Message-Id, Mapi-X-Smtp-Message-Id.

  • Mapi Search Key: Uses the email MAPI Search Key property from the first non-blank field: Mapi-Search-Key, X-Mapi-Search-Key.
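
The first-non-blank field selection described above can be illustrated with a short Python sketch (an assumption for illustration, not the product's implementation):

# Select the deduplication key from the first non-blank Message-ID property,
# checking the fields in the order documented above.
MESSAGE_ID_FIELDS = [
    "Message-ID", "Message-Id", "Mapi-Smtp-Message-Id", "X-Message-ID",
    "X-Mapi-Smtp-Message-Id", "Mapi-X-Message-Id", "Mapi-X-Smtp-Message-Id",
]

def message_id_key(properties):
    # properties: dict mapping item property names to values
    for field in MESSAGE_ID_FIELDS:
        value = properties.get(field)
        if value and value.strip():
            return value.strip()
    return None  # no Message-ID property found; the item cannot be deduplicated by this method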

When performing a deduplication by family based on Message-ID or MAPI Search Key, two batches will be created: one for top-level items (with suffix TL) and another one for non-top-level items (with suffix NonTL). To query for original items in both of these batches, use syntax:
item-set-batch:("{last_item_set_originals_batch} TL" OR "{last_item_set_originals_batch} NonTL")

3.2.12. Remove from Item Set

This operation removes items, if present, from the specified Item Set.

3.2.13. Delete Item Set

This operation deletes the specified Item Set.

3.2.14. Add Items to Digest List

This operation adds items to a digest list with the option to create the digest list if it doesn’t exist.

A digest list can be created in one of the three digest list locations:

  • Case: Case location, equivalent to the following subfolder from the case folder Stores\User Data\Digest Lists

  • User: User profile location, equivalent to %appdata%\Nuix\Digest Lists

  • Local Computer: Computer profile location, equivalent to %programdata%\Nuix\Digest Lists

3.2.15. Remove Items from Digest List

This operation removes items, if present, from the specified digest list.

3.2.16. Manage Digest Lists

This operation performs an operation on the two specified digest lists and then saves the resulting digest list in the specified digest list location.

List of operations:

  • Add: Produces hashes which are present in either digest list A or digest list B;

  • Subtract: Produces hashes present in digest list A but not in digest list B;

  • Intersect: Produces hashes which are present in both digest list A and digest list B.

3.2.17. Delete Digest List

This operation deletes the specified digest list, if it exists, from any of the specified digest list locations.

3.2.18. Digest List Import

This operation imports a text or Nuix hash file into the specified digest list location.

Accepted file formats:

  • Text file (.txt, .csv, .tsv). If the file contains a single column, hashes are expected to be provided one per line. If the file contains multiple columns, a column with the header name MD5 is expected (see the samples below).

  • Nuix hash (.hash) file
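
Sample single-column text file (one MD5 hash per line):

d41d8cd98f00b204e9800998ecf8427e
9e107d9d372bb6826bd81d3542a419d6

Sample multi-column CSV file (the MD5 column is used; the other values shown are illustrative):

DocID,MD5
DOC00001,d41d8cd98f00b204e9800998ecf8427e
DOC00002,9e107d9d372bb6826bd81d3542a419d6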

3.2.19. Digest List Export

This operation exports a Nuix digest list to the specified location as a text file. The resulting text file contains one column with no header and one MD5 hash per line.

3.2.20. Search and Tag

This operation tags items from the case that match specific search criteria.

Options:

  • Identify families: If selected, the operation will search for Family items and Top-Level items of items with hits for each keyword.

  • Identify descendants: If selected, the operation will search for descendants of items with hits for each keyword.

  • Identify exclusive hits ("Unique" hits): If selected, the operation will search for Exclusive hits (items which only hit on one keyword), Exclusive family items (items for which the entire family only hit on one keyword) and Exclusive top-level items (also items for which the entire family only hit on one keyword). See the sketch after this list for an illustration of exclusivity.

  • Compute size: If selected, the operation will compute the audited size for Hits and Family items.

  • Compute totals: If selected, the operation will compute the total counts and size for all keywords.

  • Breakdown by custodian: If selected, the searches and reporting will be performed for each individual custodian, as well as for items with no custodians assigned.

  • Log results: If selected, the search counts will be printed in the Execution Log.
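
A minimal Python sketch of how exclusive hits can be derived from per-keyword hit sets (an illustration under assumed data structures, not the product's implementation):

# Each keyword maps to the set of item GUIDs that matched its query.
hits = {
    "KW 01": {"guid-a", "guid-b"},
    "KW 02": {"guid-b", "guid-c"},
}

# An item is an exclusive hit for a keyword if it matched that keyword
# and no other keyword.
exclusive = {
    keyword: {guid for guid in guids
              if not any(guid in other_guids
                         for other_keyword, other_guids in hits.items()
                         if other_keyword != keyword)}
    for keyword, guids in hits.items()
}

print(exclusive)  # {'KW 01': {'guid-a'}, 'KW 02': {'guid-c'}}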

Tags

If the Assign tags option is selected, items will be tagged under the following tag structure:

  • Tag prefix

    • Hits

      • Keyword tag: Items that matched the search query.

    • Families

      • Keyword tag: Families of items that matched the search query.

    • TopLevel

      • Keyword tag: Top-level items of items that matched the search query.

    • Descendants

      • Keyword tag: Descendants of items that matched the search query.

    • ExclusiveHits

      • Keyword tag: Items that hit exclusively on the keyword.

    • ExclusiveFamilies

      • Keyword tag: Families that hit exclusively on the keyword.

    • ExclusiveTopLevel

      • Keyword tag: Top-level items of families that hit exclusively on the keyword.

If the Remove previous tags with this prefix option is selected, all previous tags starting with the Tag prefix will be removed, regardless of the search scope, according to the Remove previous tags method.

This operation can be used with an empty list of keywords and with the Remove previous tags with this prefix option enabled, in order to remove tags that have been previously applied either by this operation or in another way.

The Remove previous tags with this prefix method renames Tag prefix to Rampiva|SearchAndTagOld|Tag prefix_{datetime}. Although this method is the fastest, after running the Search and Tag operation multiple times it can create a large number of tags which might slow down manual activities in Nuix Workbench.

Reporting

This option generates a search report in an Excel format, based on a template file.

See Processing Report for information on using a custom template.
Keywords

The keywords can either be specified manually in the workflow editor interface, or loaded from a file.

The following file formats are supported:

  • .csv: Comma-separated file, with the first column containing the keyword name or tag and the second column containing the keyword query. If the first row is a header with the exact values tag and query, the line will be read as a header. Otherwise, it will be read as a regular line with a keyword and tag name.

  • .tsv, .txt : Tab-separated file, with the first column containing the keyword name or tag and the second column containing the keyword query.

  • .json: JSON file, either exported from the Nuix Search and Tag window, or containing a list of searches, with each search containing a tag and a query.
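
Sample CSV keywords file (the header row is optional; the queries shown are illustrative):

tag,query
KW 01,Plan*
KW 02,Confidential OR Privilege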

Sample JSON file:

{
  "searches": [
    {
      "tag": "KW 01",
      "query": "Plan*"
    },
    {
      "tag": "KW 02",
      "query": "\"Confidential Data\" OR Privilege"
    }
  ]
}

Alternatively, the path to a keywords file can be supplied which will be loaded when the workflow executes.

3.2.21. Search and Assign Custodians

This operation assigns custodians to items from the case that match specific search criteria.

Entries can be added to the custodian/query list using the + and - buttons, or loaded from a CSV or TSV file.

3.2.22. Tag Items

This operation searches for items in the scope query.

Then it matches the items to process either as the items in scope, or duplicates of the items in scope, as individuals or by family.

The tag name is applied to either the items matched (Matches), their families (All families), their descendants (All Descendants), the items matched and their descendants (Matches and Descendants) or their families' top-level items (Top-level).

3.2.23. Untag Items

This operation removes tags for the items in the scope query.

Optionally, if the tags are empty after the items in scope are untagged, the remove method can be set to delete the tags.

When the option to remove tags starting with a prefix is specified, tags with the name of the prefix and their subtags are removed. For example, if the prefix is set to Report, tags Report and Report|DataA will be removed but not Reports.

3.2.24. Match Items

This operation reads a list of MD5 and/or GUID values from the specified text file. Items in scope with matching MD5 and/or GUID values are tagged with the value supplied in the Tag field.

3.2.25. Date Range Filter

This operation filters items in the scope query to items within the specified date range using either the item date, top-level item date or a list of date-properties.

Then it applies a tag or exclusion similar to Tag Items.

Use \* as a date property to specify all date properties.
The dates for this range can be specified using the parameters {filter_before_date} and {filter_after_date}.

3.2.26. Find Items with Words

This operation analyzes the text of the items in scope and determines that an item is responsive if its number of words respects the minimum and maximum count criteria.

The words are extracted by splitting the text of each item using the supplied regex.

Sample regex to extract words containing only letters and numbers:

[^a-zA-Z0-9]+

Sample regex to extract words containing only letters:

[^a-zA-Z]+

Sample regex to extract words containing any character, separated by a whitespace character (i.e. a space, a tab, a line break, or a form feed):

\s+
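
A short Python sketch of this logic (a simplified illustration, not the product's implementation), splitting the item text with the supplied regex and checking the word count against the configured minimum and maximum:

import re

def is_responsive(text, split_regex, min_words, max_words):
    # Split on the separator regex and drop empty strings produced by
    # leading or trailing separators.
    words = [word for word in re.split(split_regex, text) if word]
    return min_words <= len(words) <= max_words

# Example: words made of letters and numbers only, between 2 and 100 words.
print(is_responsive("Plan for Q3: confidential data.", r"[^a-zA-Z0-9]+", 2, 100))  # True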

3.2.27. Filter Emails

This operation performs advanced searches for emails, based on recipient names, email addresses and domain names.

The Wizard feature prepopulates the filtering logic based on one of the following scenarios:

  • Tag internal-only emails

  • Tag communications between two individuals only

  • Tag communications within a group

3.2.28. Add Items to Cluster Run

This operation adds items to an existing Cluster Run or creates a new Cluster Run if one with the specified name does not exist.

When running this operation, the progress will only show 0.01%, and will be updated when the operation finishes.

3.2.29. Detect Attachment-Implied Emails

This operation must be used in conjunction with the Cluster Run operation from Nuix. First, generate a Cluster Run using Nuix Workstation and then run the Detect Attachment-Implied Emails operation to complement the identification of inclusive and non-inclusive emails.

If no cluster run name is specified, the operation will process all existing cluster runs.

Items will be tagged according to the following tag structure:

  • Threading

    • Cluster run name

      • Items

        • Inclusive

          • Attachment-Inferred

          • Singular

          • Ignored

          • Endpoint

        • Non Inclusive

      • All Families

        • Inclusive

          • Attachment-Inferred

          • Singular

          • Ignored

          • Endpoint

        • Non Inclusive

To select all data except for the non-inclusive emails, use query
tag:"Threading|Cluster run name|All Families|Inclusive|*"
This operation should be used on cluster runs that contain top-level emails only, clustered using email threads. Otherwise, the operation will produce inconsistent results.

3.2.30. Reload Items

This operation reloads from source the items matching the scope query.

This operation can be used to decrypt password protected files when preceded by a Configuration operation which defines passwords and if the Delete encrypted inaccessible option is used.
If the scope query results in 0 items, the Nuix case database does not get closed, which causes issues when attempting to add more data in the future. As a workaround, use a preceding Script operation to skip the Reload Items operation if the scope query results in 0 items. See the example Python script below:
# Set scope_query to the scope query of the Reload Items operation
items_count = current_case.count(scope_query)
print("Reload Items operation scope count: %s" % items_count)

if items_count == 0:
    # Skip next operation
    current_operation_id = workflow_execution.getCurrentOperationId()
    workflow_execution.goToOperation(current_operation_id + 2)
When decrypting a document, the Nuix Engine maintains the originally encrypted item in place and creates a descendant with the decrypted content. In this situation, when using the Exclude encrypted documents decrypted successfully option, the originally encrypted item is excluded and only the decrypted version remains. Note that this only impacts encrypted documents (such as Word or PDF) and does not impact encrypted zip archives.

3.2.31. Replace Items

This operation replaces case items with files which are named with the MD5 or GUID values of the source items.

3.2.32. Delete Items

This operation deletes items in the scope query and their descendants.

This is irreversible. Deleted items are removed from the case and will no longer appear in searches. All associated annotations will also be removed.

3.2.33. Replace Text

This operation replaces the text stored for items matching the scope query, if an alternative text is provided in a file which is named based on the items' MD5 or GUID values.

This operation can be used after an interrupted Nuix OCR operation, to apply the partial results of the OCR operation, by copying all of the text files from the OCR cache to a specific folder and pointing the Replace Text operation at that folder.
This operation searches for files at the root of the specified folder only and ignores files from subfolders.

3.2.34. Remove Text

This operation removes the text stored for items matching the scope query.

This operation can be used to remove text from items for which Nuix stripped the text during loading but where no meaningful text was extracted.

3.2.35. Redact Text

This operation runs regex searches against the text of the items in scope, and redacts all matches.

The Redaction definition file can be a text file with a list of regular expressions, or a tab-separated file with columns Name and Regex.
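
Sample tab-separated redaction definition file (the patterns shown are illustrative):

Name	Regex
SSN	\d{3}-\d{2}-\d{4}
Email	[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+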

3.2.36. OCR Items

This operation runs OCR using the Nuix OCR on the items identified by the scope query, using standard Nuix options.

Starting with Nuix version 8, the OCR settings cannot be supplied manually and instead an OCR profile must be used.

The option Differentiate profile will apply when using an OCR profile with a custom cache directory. In this case, the short Job ID will be added as a sub-directory to the custom cache directory to avoid conflicts when running multiple jobs at the same time.

3.2.37. Configure Native OCR

This operation sets the configuration of the Rampiva OCR.

The Rampiva OCR uses the Tesseract/Leptonica binaries built by Mannheim University Library. Prior to running a Native OCR operation, the Rampiva OCR or another distribution of the Tesseract OCR must be installed.

The operation has the following settings:

  • Workers allocation:

    • Predetermined: Use the specified number of workers

    • Per CPU core: Use a number of workers as a ratio of the number of CPU cores. For example, on a server with 16 cores, a ratio of 0.8 corresponds to 12 workers (i.e. 80% of 16 cores, rounded down).

  • OCR engine binaries folder: Optional, the folder where the Rampiva OCR or the Tesseract OCR is installed.

  • User words file: Optional, the path to the Tesseract words file.

  • User patterns file: Optional, the path to the Tesseract patterns file.

  • Image resolution: Optional, the resolution (in DPI) of the source images, if known.

  • Rasterize PDF resolution: Optional, the resolution to use when rasterizing PDF files before OCR.

  • OCR engine log level: Optional, the logging level for the Tesseract OCR Engine.

  • Languages: Optional, the language(s) in which the text is written, if known. When configuring multiple languages, separate them by the plus sign, for example eng+deu+fra.

  • Page segmentation mode: Optional, the method used to segment the page.

For a list of page segmentation modes supported by Tesseract, see https://tesseract-ocr.github.io/tessdoc/ImproveQuality.html#page-segmentation-method
  • Deskew: If set, the preprocessor will attempt to deskew the image before running the OCR.

The Deskew option is only available for common image formats and PDF files. It is not available for source text files containing a listing of images.
The Deskew option will only correct small angle rotations and will not rotate the image by 90, 180, or 270 degrees.
  • Rotate: If set, the preprocessor will rotate the image before running the OCR. When using the Auto Detect option, the OCR engine will first run in 0 - Orientation and Script Detection (OSD) only mode to detect the orientation, and then will run a second time on the rotated images in the user-configured mode.

When using the Auto Detect rotation mode, it’s generally optimal in most cases to either not select a specific Page segmentation mode, or to select a mode without OSD because the image will already be correctly orientated.
  • OCR engine mode: Optional, the mode in which the OCR engine should run. This option should only be used when using a custom Tesseract build.

  • OCR engine config file: Optional, the Tesseract configuration file to use with configuration variables.

  • Timeout per file: Optional, the maximum duration of time that the OCR engine is allowed to run on a single file, possibly containing multiple pages.

  • OCR temp folder: Optional, the folder in which the temporary files used during the OCR operations are created. If not set, a temp folder will be created in the destination folder where the OCR text is exported, or inside the Nuix case folder.

  • Don’t clear OCR temp folder on completion: If set, the OCR temp folder is not deleted on OCR completion. This option can be used to troubleshoot the OCR process by inspecting the intermediary temporary files.

3.2.38. Native OCR Items

This operation runs OCR using the Rampiva OCR Engine on Nuix case items. The operation is designed to perform best when the Nuix items have binary data stored.

When running the Native OCR Items operation on Nuix items which do not have binary data stored, the OCR will take significantly longer. Before running this operation, either store item binaries during the Add Evidence operation, or use the Populate Binary Store operation to populate binaries of the items that need to be OCRed.

Items in PDF or image formats supported by the Rampiva OCR Engine are extracted as native files from the Nuix items and OCRed. For all other items, printed images are generated inside Nuix which are then OCRed.

The settings for the OCR Engine are defined in the Configure Native OCR operation.

A CSV summary report is produced, listing all source items, the OCR success status and the other details of the OCR process.

The operation has the following settings:

  • Scope query: The Nuix query to select the items to OCR.

  • Text modifications

    • Append: Append the extracted text at the end of the existing document text.

    • Overwrite: Replace the document text with the extracted text.

  • Create searchable PDF: If set, generate PDF files with the extracted text overlaid and set as the printed images for the items.

The Tag failed items as options have the same behavior as in the Legal Export operation.

3.2.39. Native OCR Images Files

This operation runs OCR using the Rampiva OCR Engine on image files.

For a list of supported image file formats, please see https://github.com/tesseract-ocr/tessdoc/blob/main/InputFormats.md. In addition to these file formats, Rampiva supports source PDF files (these are rasterized to images) and text files containing a list of image files.

The settings for the OCR Engine are defined in the Configure Native OCR operation.

For each source image file, a corresponding text file is written in the Output text files folder. A CSV report named summary_report.csv is produced, listing all source files, the OCR success status, the path and size of the resulting text file, as well as the output of the OCR engine.

The operation has the following settings:

  • Source image files folder: The folder containing the image files to be OCRed.

  • Scan folder recursively: If set, the source folder will be scanned recursively, and the output files will be created using the same folder structure.

  • Skip images with existing non-empty text files: If set, images will be skipped if a text file with the expected name and a size greater than 0 exists in the destination folder.

  • Assemble pages regex: The regular expression used to detect documents with multiple pages, which were exported with one image file per page. The regex must have at least one matching group, which is used to select the document base name (see the sample regex after this list).

  • Output text files folder: The folder in which the text files will be created.

  • Keep incomplete files: If set, empty files and incomplete text files from the OCR Engine are not deleted.

  • Create searchable PDF: If set, the source images are converted to PDF files in the Output text files folder with the extracted text overlaid.

  • Output PDF files folder: The folder in which the PDF files will be created. If this field is blank, it will default to the output text files folder.
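
Sample assemble pages regex (an illustrative pattern, assuming page files named like DOC00001_0001.tif, DOC00001_0002.tif, and so on):

(.+)_\d+\.tif

In this example, the matching group (.+) captures the document base name DOC00001, so all pages of that document are assembled into a single text output.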

3.2.40. Generate Duplicate Custodians Field

This operation will generate a CSV file with the list of duplicate custodians in the case. See Generate Duplicate Fields for a description of the available options.

Running without the DocIDs selected in the Original fields will significantly improve execution time.
This operation is less memory-intensive than the Generate Duplicate Fields operation.

3.2.41. Generate Domain Fields

This operation will extract email domains from items in the Scope.

The resulting extracted domain fields can be saved to a CSV file and/or can be assigned as custom metadata to the items in Scope.

3.2.42. Generate Duplicate Fields

This operation will identify all items that match the Update items scope query and that have duplicates in the larger Search scope query.

The operation supports two evaluation methods:

  • Memory-Intensive: This method uses a large amount of memory on large cases but requires reduced computation.

  • Compute-Intensive: This method performs a large number of computations on large cases but requires a reduced amount of memory.

The duplicate items are identified based on the following levels of duplication:

  • As individuals: Items that are duplicates at the item level.

  • By family: Items that are duplicates at the family level.

  • By top-level item: Only the top-level items of items in scope that are duplicates are identified.

When using the deduplication option By top-level item, ensure that the families provided are complete in the search and update scope.

When an item in the Update items scope with duplicates is identified, this operation will generate duplicate fields capturing the properties of the duplicate items. The following duplicate fields are supported:

  • Custodians

  • Item Names

  • Item Dates

  • Paths

  • Tags

  • Sub Tags

  • GUIDs

  • Parent GUIDs

  • Top-Level Parent GUIDs

  • DocIDs

  • Lowest Family DocID

  • Metadata Profile

When selecting the Metadata Profile option, all of the fields found in the specified Metadata Profile will be computed.

The Results inclusiveness option determines whether the value from the current original item should be added to the duplicate fields. For example, if the original document has custodian Smith and there are two duplicate items with custodians Jones and Taylor, the Alternate Custodians field will contain values Jones; Taylor whereas the All Custodians field will contain values Jones; Taylor; Smith.

The resulting duplicate fields can be saved to a CSV file and/or can be assigned as custom metadata to the items in the Update items scope.

See Joda Pattern-based Formatting for a guide to pattern-based date formatting.

3.2.43. Generate Printed Images

This operation generates images for the items in scope using the specified Imaging profile.

The Tag failed items as options have the same behavior as in the Legal Export operation.

3.2.44. Populate Binary Store

This operation populates the binary store with the binaries of the items in scope.

3.2.45. Assign Custom Metadata

This operation adds custom metadata to the items in scope. A CSV or TSV file is required.

The file header must start with either GUID, ItemName, DocID, or Key, followed by the names of the metadata fields to be assigned.

When using ItemName, the metadata will be assigned to all items in the Nuix case which have that Item Name. This might involve assigning the same metadata information to multiple items, if they have the same name.
When using Key, the matching of the items will be attempted with either the GUID, ItemName, or DocID, in this order.

Each subsequent line corresponds to an item that needs to be updated, with the first column containing the GUID, ItemName, or DocID of the item and the remaining columns containing the custom metadata.

Example simple CSV metadata file:

DocID,HasSpecialTerms,NumberOfSpecialTerms
DOC00001,Yes,5
DOC00002,Yes,1
DOC00003,No,0
DOC00004,Yes,7

To assign custom metadata of a specific type, add a second header line with the following format:

  • The first column: Type, indicating that this line is a header specifying field types

  • For each subsequent column, the type of the data, from the following options:

    • Text

    • Date

    • Boolean

    • Integer

    • Float

Example CSV metadata file with types:

ItemName,DateRecorded,SampleThreshold
Type,Date,Float
file1.txt,2020-01-01,0.5
file2.txt,2021-01-01,1.5
Email.eml,2022-01-01,-7
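
As an illustration only, the following minimal script generates a typed metadata file in the format above; the file name and values are hypothetical:

import csv

# A minimal sketch: write a typed custom-metadata CSV with a field header row,
# a Type row, and one data row per item, matching the format described above.
rows = [
    ["ItemName", "DateRecorded", "SampleThreshold"],
    ["Type", "Date", "Float"],
    ["file1.txt", "2020-01-01", "0.5"],
    ["Email.eml", "2022-01-01", "-7"],
]
with open("custom_metadata.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)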

3.2.46. Assign Data Set Metadata

This operation assigns the fields defined in the data set, either as custom metadata or as tags.

3.2.47. Associate Google Vault Metadata

This operation parses the XML files and CSV files exported from Google Vault, extracts the metadata records available (see https://support.google.com/vault/answer/6099459?hl=en#mailxml) and associates these as custom metadata to the matching items in the Nuix case.

The matching between Google Vault metadata records and the items in the Nuix case is performed in the following way:

  • Google Mail

    • When parsing XML metadata files, matching is performed using the metadata field MBOX From Line

    • When parsing CSV metadata files, matching is performed using the metadata fields Mapi-Smtp-Message-Id and Message-ID.

  • Google Documents

    • When parsing XML metadata files, matching is performed using File name

3.2.48. Remove Custom Metadata

This operation removes the custom metadata specified from the items in scope.

3.2.49. Add Items to Production Set

This operation adds items matching the scope query to a production set.

When adding items to a production set, the following sort orders can be applied:

  • No sorting: Items are not sorted.

  • Top-level item date (ascending): Items are sorted according to the date of the top-level item in each family, in ascending order.

  • Top-level item date (descending): Items are sorted according to the date of the top-level item in each family, in descending order.

  • Evidence order (ascending): Items are sorted by effective path name (similar to the Windows Explorer sorting), in ascending order.

  • Custom: Items are sorted by a combination of fields in ascending or descending order.

To achieve a sort order equivalent to the Nuix Default sort order, select the Rampiva Custom sort method with the field Position in Ascending order.

The item numbering can be performed either at the Document ID level, or at the Family Document ID level. In the latter case, the top-level item in each family will be assigned a Document ID according to the defined prefix and numbering digits. All descendants from the family will be assigned a Document ID which is the same as the one of the top-level item, and a suffix indicating the position of the descendant in the family.

The document ID start number, the number of digits and the number of family digits can be specified using custom parameters:

  • {docid_start_numbering_at} - Select the option Start numbering at in the configuration of the Add Items to Production Set operation for this parameter to have an effect;

  • {docid_digits}

  • {docid_family_digits} - Select the numbering scheme Family Document ID in the configuration of the Add Items to Production Set operation for this parameter to have an effect;

When using a page-level numbering scheme, the parameter {group_family_items} can be used to control the grouping of documents from the same family, and the parameter {group_document_pages} can be used to control the grouping of pages from the same document. These parameters can be set to true or false.

3.2.50. Delete Production Set

This operation deletes All or Specific production sets.

3.2.51. Legal Export

This operation performs a legal export, using standard Nuix options.

Use the Imaging profile and Production profile options to control the parameters of images exported during a legal export.

The Split export at option will split the entire export (including loadfile and export components) into multiple parts of the maximum size specified, and will include family items.

The Convert mail, contacts, calendars to option will export the native emails to the selected format.

The Export scheme option can be used to control if attachments are separated from emails or not.

When selecting the Export type Relativity, the loadfile will be uploaded to Relativity during the legal export operation. If the export is split into multiple parts, each part will be uploaded as soon as it is available and previous parts have finished uploading.

The following settings are required:

  • Fields mapping file: Path to JSON file mapping the Nuix Metadata profile to the Relativity workspace fields. If a mapping file is not provided, the fields in the loadfile will be mapped to fields with the same names in the Relativity workspace.

See more information on how to create a mapping file in the Relativity Loadfile Upload operation.
This operation only loads native files, text and metadata to Relativity. To load images, in addition to this operation, use the Relativity Images Overlay operation.

3.2.52. Set Nuix Discover Case

This operation connects to the Nuix Discover environment and retrieves the specified case ID, using the following settings:

  • Discover Connect API URL: The URL of the Nuix Discover API, for example https://ringtail.us.nuix.com/ringtail-svc-portal/api/query

  • API token: The API token of the username to connect with. This token can be obtained from the Nuix Discover User Administration page → Users → username → API Access.

  • Case identifier:

    • ID: The Nuix Discover case ID.

    • Name: The Nuix Discover case name.

    • Name (Regex): A regular expression to match the Nuix Discover case name by.

  • File repository: The type of repository to use for uploading native files. For local Nuix Discover deployments set to the Windows File Share location corresponding to the imports folder of the Nuix Discover case. For SaaS deployments, use the Amazon S3 repository.

The File repository location can typically be derived from the name of the Nuix Discover case, for example using a path similar to \\DISCOVER.local\Repository\Import\{discover_case_name}. However, in certain situations, the name of the import folder can be different from the name of the Nuix Discover case, for example if the case name has spaces or non-alphanumeric characters such as punctuation, or if two cases with the same name exist. In these scenarios, a script can be used to normalize the Nuix Discover case name and derive the expected import folder (see the sketch at the end of this list).
  • Existing case: The action to take if the case does not exist:

    • Clone case if it does not already exist creates a new case by cloning the source case.

    • Only use existing case triggers an error if the case does not exist.

  • Wait for case to be active: Waits for the specified time for the case to become active.

Use the Wait for case to be active option in a dedicated operation before promoting documents to Nuix Discover, to ensure that the documents can be uploaded.
  • Clone settings: The settings to use when cloning a case.
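
A minimal sketch of such a normalization script is shown below; the case name, repository path, and stripping rule are assumptions and should be adjusted to the convention used in your environment:

import re

# A minimal sketch (not the Rampiva API): derive the expected Discover import
# folder by stripping spaces and punctuation from the case name.
discover_case_name = "Acme v. Globex (2023)"  # example value
normalized = re.sub(r"[^A-Za-z0-9]", "", discover_case_name)
import_folder = r"\\DISCOVER.local\Repository\Import" + "\\" + normalized
print(import_folder)  # \\DISCOVER.local\Repository\Import\AcmevGlobex2023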

3.2.53. Promote to Nuix Discover

This operation exports a production set from the Nuix case and uploads the items to Nuix Discover.

The following settings can be configured:

  • Production set name: The name of the production set to promote to Nuix Discover.

  • Export standard metadata: Export items standard metadata to Nuix Discover. If checked, a copy of the metadata profile will be saved in the export folder.

  • Export custom metadata from profile: Optional, the metadata profile to use for additional metadata to export to Nuix Discover. To use this option, ensure that the Nuix Discover case is configured with the fields that are defined in the custom metadata profile.

  • Run indexing in Discover: Triggers an indexing in Nuix Discover after the documents are uploaded.

Enable the Run indexing in Discover option to have the content parsed and available for searching in Nuix Discover.
  • Run deduplication in Discover: Triggers a deduplication in Nuix Discover after the documents are uploaded.

  • Document ID strategy: Assign new Sequential document numbers from the Nuix Discover case, or use the Nuix Production set numbering.

  • Level: The Nuix Discover level to import documents to.

  • Documents per level: The maximum number of documents per level.

  • Filetypes: Upload the Native files and/or the Text extraction from the Nuix case to the Nuix Discover case.

When uploading documents to Nuix Discover using the Amazon S3 transfer mode, the files are uploaded as document pages. These are extracted to the formatted and unformatted content views when running an indexing job.
  • Temporary export folder: The folder in which the temporary legal export is created. After the upload is complete, the native and text files are deleted from the temporary folder.

  • Split export at: Break down the export and uploads into multiple parts of the maximum number of items specified.

  • Wait for Discover job to finish: Waits until the items have been loaded into Nuix Discover before moving to the next upload part or before finishing the operation.

The Convert mail, contacts, calendars to, Export scheme, and Tag failed items as options have the same behavior as in the Legal Export operation.

3.2.54. Configure ElasticSearch Connection

This operation sets the configuration used to connect to the ElasticSearch environment:

  • Host: The ElasticSearch host name, for example es.example.com, or 127.0.0.1.

  • Port: The port on which the ElasticSearch REST API is deployed, by default 9200.

  • Username: The username to authenticate with.

  • Password: The password for the username above.

  • Certificate fingerprint: Optional, the SHA-256 fingerprint of the ElasticSearch certificate that should be trusted even if the certificate is self-signed.

  • Bulk operations: The number of operations to submit in bulk to ElasticSearch. Using a higher value can increase throughput but requires more memory.

3.2.55. Export Items to ElasticSearch

This operation will export the metadata of items matching the scope query to ElasticSearch.

  • Scope query: The Nuix query to select the items to export to ElasticSearch.

  • Metadata profile: The Nuix metadata profile used during the export.

  • Index name: The ElasticSearch index name.

  • Export items text: If selected, the operation will export the item text in addition to the metadata. The text is exported in ElasticSearch under the item property _doc_text.

  • Trim item text at: The maximum number of characters to export from the item text. If the item text is trimmed, the ElasticSearch property _doc_text_trimmed is set on the item.
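
As an illustration only (this check is not part of the operation), the exported data can later be queried directly in ElasticSearch. The sketch below assumes ElasticSearch 7+, the Python requests library, a hypothetical index named nuix-items, and the connection settings from the Configure ElasticSearch Connection operation:

import requests

# A minimal sketch: count exported items whose text was trimmed during export,
# by checking for the _doc_text_trimmed property set by the operation.
# Host, credentials, and index name are assumptions; adjust as needed.
query = {"query": {"exists": {"field": "_doc_text_trimmed"}}}
response = requests.post(
    "https://es.example.com:9200/nuix-items/_count",
    json=query,
    auth=("elastic", "changeme"),
    verify=False,  # only for self-signed certificates; prefer a CA bundle
)
print(response.json()["count"])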

3.2.56. Configure Relativity Connection

This operation sets the configuration used to connect to the Relativity environment.

Optionally, a Relativity Service can be used by pointing this operation to a parameter of type Relativity Service. During the submission of the workflow in Scheduler, the user will be prompted to select the Relativity Service and authenticate to the service if required.

When not using a Relativity Service, the following options are explicitly defined in the operation:

  • Host name: The Relativity host name, for example relativity.example.com.

  • Service endpoint: The Relativity Service Endpoint, for example /relativitywebapi.

  • Endpoint type: The Relativity Endpoint Type, for example HTTPS.

  • User name: The user name used to perform the import into Relativity.

  • Password: The password for the username above.

The value entered in this field will be stored in clear text in the workflow file - a password SHOULD NOT be entered in this field. Instead, set this field to a protected parameter name, for example {relativity_password} and see section Protected Parameters for instructions on how to set protected parameter values.
  • Import threads: The number of parallel threads to use for Relativity uploads, such as Legal Export, Relativity Loadfile Upload, Relativity Images Overlay, Relativity Metadata Overlay, Relativity CSV Overlay.

  • Import thread timeout: The number of seconds to allow a Relativity upload thread to be idle. If no progress is reported for longer than the allowed timeout, the import thread will be aborted.

  • Import thread retries: The number of times to retry running an import thread, in situations where import encountered a fatal error or timed out.

  • Metadata threads: The number of parallel threads to use for Relativity metadata operations, such as Create Relativity Folders.

  • Patch invalid entries: If selected, this option will automatically patch entries that fail uploading due to the following issues:

    • Field value too long - the uploaded field value is trimmed to the maximum allowed length in Relativity;

    • Field value invalid, for example due to incorrectly formatted date - the field value is removed from the item uploaded to Relativity;

    • Missing native or text file - the native or text component is removed from the item uploaded to Relativity;

  • Client version: When unchecked, Rampiva will use the Relativity client version which is the closest match to the Relativity server version. When checked, Rampiva will use the specified Relativity client version, if available.

  • REST version: The version of the REST services to use when querying Relativity objects, such as workspaces and folders. For Relativity One, use REST (v1 Latest).

The REST (Server 2021) version requires the Relativity Server Patch (Q3 2021) or later.
The Import threads value is independent of the number of Nuix workers. When using more than 1 import thread, the loadfile or the overlay file will be split and data will be uploaded to Relativity in parallel. Because multiple threads load the data in parallel, this method will impact the order in which documents appear in Relativity when no sort order is specified.

3.2.57. Set Relativity Client

This operation selects a client in the Relativity environment, using the following settings:

  • Client identifier: The Name or Artifact ID of the Relativity client.

  • Existing client: The action to take if the client does not exist:

    • Create client if it does not exist creates a new client.

    • Only use existing client triggers an error if the client does not exist.

The following settings are applicable when creating a new client:

  • Client number: The client number to set on the client.

  • Status identifier: Optional, the Name or Artifact ID of the status to set on the client.

  • Keywords: Optional, the keywords to set on the client.

  • Notes: Optional, the notes to set on the client.

3.2.58. Set Relativity Matter

This operation selects a matter in the Relativity environment, using the following settings:

  • Matter identifier: The Name or Artifact ID of the Relativity matter.

The matter is selected in Relativity irrespective of the client that it belongs to, even if the Set Relativity Client operation was previously used.
  • Existing matter: The action to take if the matter does not exist:

    • Create matter if it does not exist creates a new matter.

    • Only use existing matter triggers an error if the matter does not exist.

The following settings are applicable when creating a new matter:

  • Matter number: The matter number to set on the matter.

  • Status identifier: Optional, the Name or Artifact ID of the status to set on the matter.

  • Keywords: Optional, the keywords to set on the matter.

  • Notes: Optional, the notes to set on the matter.

When a new matter is created, it is created under the client selected using the previous Set Relativity Client operation.

3.2.59. Set Relativity Workspace

This operation selects a workspace in the Relativity environment using the following settings:

  • Workspace identifier: The Name or Artifact ID of the Relativity workspace.

The workspace is selected in Relativity irrespective of the client and matter that it belongs to, even if the Set Relativity Client or Set Relativity Matter operations were previously used.
  • Folder path: The path inside the workspace. If blank, this will retrieve the folder corresponding to the root of the workspace.

  • Create folder path if it does not exist: If checked, the specified folder path will be created in the workspace if it does not exist.

  • Existing workspace: The action to take if the Workspace does not exist:

    • Clone workspace if it does not already exist creates a new Workspace by cloning the source Workspace.

    • Only use existing workspace triggers an error if the Workspace does not exist.

  • Clone settings: The settings to use when cloning a Workspace.

    • Workspace name: The name to give the newly created Workspace.

    • Matter: The Matter to use when cloning the Workspace.

    • Workspace template: The Workspace template to use when cloning the Workspace.

    • Resource pool: The Resource pool to use when cloning the Workspace. If this setting is not defined, the first available Resource pool from the Relativity environment will be selected.

    • Database location: The Database location to use when cloning the Workspace. If this setting is not defined, the first available Database location from the Relativity environment will be selected.

    • Default file repository: The Default file repository to use when cloning the Workspace. If this setting is not defined, the first available Default file repository from the Relativity environment will be selected.

    • Default cache location: The Default cache location to use when cloning the Workspace. If this setting is not defined, the first available Default cache location from the Relativity environment will be selected.

    • Status: The Status to use when cloning the Workspace. If this setting is not defined, the first available Status from the Relativity environment will be selected.

When a workspace is cloned, it is created under the matter selected using the previous Set Relativity Matter operation.

3.2.60. Delete Relativity Workspace

This operation deletes the specified workspace, if it exists.

3.2.61. Create Relativity Group

This operation creates one or more groups in Relativity under the client selected using the previous Set Relativity Client operation, using the following settings:

  • Group Name: The name of the group to be created.

  • Keywords: Optional, the keywords to assign to the group created.

  • Notes: Optional, the notes to assign to the group created.

If a group with the specified name exists under the client, the group will not be created and instead the group name and artifact ID will be logged.

In addition to providing the values for the group settings manually, the user can also load from a CSV or TSV file, for example:

Group Name   Keywords    Notes
Reviewer    reviewer    Simple group for reviewer
Admin   admin   Group for admins

3.2.62. Manage Relativity Workspace Groups

This operation adds or removes groups in Relativity under the workspace selected using the previous Set Relativity Workspace operation, using the following settings:

  • Group identifier type: The identifier type used for the workspace groups, Name or Artifact ID.

  • Group action: The action to be performed on the groups, Add or Remove.

  • Group Settings Table

    • Group identifier: The Name or Artifact ID of the group, defined by the Group identifier type field.

In addition to providing the values for the workspace groups manually, the user can also load from a CSV or TSV file, for example:

Group Identifier
Domain Users
Level 1
Level 2

The workspace groups can also be loaded from a file during the workflow execution, using the Workspace groups file option.

3.2.63. Create Relativity Users

This operation creates one or more users in Relativity under the client selected using the previous Set Relativity Client operation, using the following settings:

  • User template identifier: The Name, Artifact ID or Email Address of the user to copy properties from.

When choosing the identifier type Name, the full Relativity name must be provided.
  • Send email invitation: Sends an email invitation to each user created.

  • User Settings:

    • Email: The email of the user to be created.

    • First Name: The first name of the user to be created.

    • Last Name: The last name of the user to be created.

    • Keywords: Optional, the keywords to assign to the user created.

    • Notes: Optional, the notes to assign to the user created.

    • Login Method User Identifier: Optional, the subject or account name for the login methods copied from the template user.

In addition to providing the values for the user settings manually, the user can also load from a CSV or TSV file, for example:

Email   First Name    Last Name    Keywords    Notes    Login Method User Identifier
jon.doe@hotmail.com Jon Doe Reviewer    User    created by Rampiva  j.doe
el.mills@gmail.com  Elisa   Mills   Support User    created by Rampiva  e.mills

The user settings can also be loaded from a file during the workflow execution, using the User settings file option.

3.2.64. Manage Relativity Users

This operation deletes one or more users from Relativity, using the following settings:

  • User identifier type: The identifier type used to retrieve users: Name, Artifact ID or Email Address.

When choosing the identifier type Name for the user identifier, the full name must be provided.
  • User action: The action to be performed on the users, Delete.

  • Users:

    • User identifier: The Name, Artifact ID or Email Address of the user.

In addition to providing the values for the users manually, the user can also load from a CSV or TSV file, for example:

User Identifier
jon.doe@hotmail.com
el.mills@gmail.com

The users can also be loaded from a file during the workflow execution, using the Users file option.

3.2.65. Manage Relativity Group Users

This operation adds or removes one or more users from a group, using the following settings:

  • Group identifier: The Name, Artifact ID or Name (Like) of the group to add users to or remove users from.

  • User identifier type: The identifier type used to retrieve users: Name, Artifact ID or Email Address.

When choosing the identifier type Name, the full name must be provided.
  • User group action: The action to be performed on the users of the group, Add or Remove.

  • Group users:

    • User identifier: The Name, Artifact ID or Email Address of the user.

In addition to providing the values for the group users manually, the user can also load from a CSV or TSV file, for example:

User Identifier
jon.doe@hotmail.com
el.mills@gmail.com

The group users can also be loaded from a file during the workflow execution, using the Group users file option.

3.2.66. Query Relativity Workspace Group Permissions

This operation exports the permissions of a Relativity group to the specified location as a JSON file.

3.2.67. Apply Relativity Workspace Group Permissions

This operation applies permissions to a Relativity Group, using the following settings:

  • Group Identifier: The Name, Artifact ID or Name (Like) of the group to apply permissions to.

  • Permissions JSON: Optionally, the content of a permissions file.

  • Permissions file: A permissions file created by the Query Relativity Workspace Group Permissions operation.

3.2.68. Copy Relativity Workspace Group Permissions

This operation copies the permissions assigned to a group in a Relativity workspace to another group or workspace, using the following settings:

Copy permissions from:

  • Source Workspace Identifier: The Name, Artifact ID or Name (Like) of the source workspace.

  • Source Group Identifier: The Name, Artifact ID or Name (Like) of the source group.

To:

  • Destination Workspace Identifier: The Name, Artifact ID or Name (Like) of the destination workspace.

  • Destination Group Identifier: The Name, Artifact ID or Name (Like) of the destination group.

3.2.69. Create Relativity Folders

This operation creates folders in the Relativity workspace from the listing CSV file. The listing file must have a single column and the name of the column must contain the word Folder or Path or Location.

When uploading documents to Relativity with a complex folder structure, it is recommended to use the Create Relativity Folders operation before the upload to prepare the folder structure.
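
For example, a minimal listing file could look like the following (the column name and folder paths are illustrative):

Folder Path
Custodians\Smith
Custodians\Smith\Email
Custodians\Jones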

3.2.70. Relativity Loadfile Upload

This operation loads a Concordance or CSV loadfile to Relativity.

The following settings are required:

  • Loadfile location: Path to the loadfile.

  • Fields mapping file: Path to JSON file mapping the Nuix Metadata profile to the Relativity workspace fields. If a mapping file is not provided, the fields in the loadfile will be mapped to fields with the same names in the Relativity workspace.

  • Detect export in parts: Detects the existence of loadfiles in subfolders in the specified location, and uploads all detected loadfiles sequentially.

This operation sets the Relativity OverwriteMode property to Append when loading the documents into Relativity.
The Legal Export operation can be used to export the loadfile and upload to Relativity, with the added benefit of uploading export parts as soon as they become available.

Sample minimal mapping.json:

{
    "FieldList": [
        {
            "identifier": true,
            "loadfileColumn": "DOCID",
            "workspaceColumn": "Control Number"
        },
        {
            "loadfileColumn": "TEXTPATH",
            "workspaceColumn": "Extracted Text"
        },
        {
            "loadfileColumn": "ITEMPATH",
            "workspaceColumn": "File"
        },
        {
            "loadfileColumn": "BEGINGROUP",
            "workspaceColumn": "Group Identifier"
        }
    ]
}

3.2.71. Relativity Metadata Overlay

This operation exports metadata from the Nuix items in the scope query and overlays it to Relativity.

The following settings are required:

  • Fields mapping file: Path to JSON file mapping the Nuix Metadata profile to the Relativity workspace fields. If a mapping file is not provided, the fields in the loadfile will be mapped to fields with the same names in the Relativity workspace.

See more information on how to create a mapping file in the Relativity Loadfile Upload operation, or use the sample mapping file below.
This operation sets the Relativity OverwriteMode property to Overlay when loading the metadata into Relativity.

To overlay data to Relativity using a non-indexed field, set the identifier property to true in the mapping file and provide the Artifact ID of that field in the fieldId property.

Sample mapping.json for overlaying data based on the GUID in a workspace which contains the field NuixGuid with the Artifact ID 1040313:

{
    "FieldList": [
        {
            "loadfileColumn": "TEXTPATH",
            "workspaceColumn": "Extracted Text"
        },
        {
            "loadfileColumn": "GUID",
            "identifier": true,
            "fieldId": 1040313,
            "workspaceColumn": "NuixGuid"
        }
    ]
}

3.2.72. Relativity Images Overlay

This operation overlays images from an Opticon loadfile to Relativity.

The following settings are required:

  • Identifier field: The Artifact ID of the identifier field, such as Control Number or Document ID.

To get the Artifact ID of the identifier field, open the workspace in Relativity, navigate to the Workspace Admin → Fields, and click on the identifier field, for example Control Number. Then, extract the Artifact ID of this field from the URL (see the sketch at the end of this section). For example, the Artifact ID of the field with the following URL is 1003667: https://relativity.rampiva.lab/Relativity/RelativityInternal.aspx?AppID=1018179&ArtifactTypeID=14&ArtifactID=1003667&Mode=Forms&FormMode=view&LayoutID=null&SelectedTab=null
  • Strip suffix from first page: Strips the suffix from the first page to infer the document ID from the Opticon loadfile, for example _0001.

  • Detect export in parts: Detects the existence of loadfiles in subfolders in the specified location, and uploads all detected loadfiles sequentially.

This operation sets the Relativity OverwriteMode property to Overlay when loading the images into Relativity.
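
As an illustration only, the following sketch extracts the Artifact ID from a field URL like the one above; it is not part of the operation:

import re

# A minimal sketch: pull the ArtifactID value out of a Relativity field URL.
url = ("https://relativity.rampiva.lab/Relativity/RelativityInternal.aspx"
       "?AppID=1018179&ArtifactTypeID=14&ArtifactID=1003667&Mode=Forms")
match = re.search(r"[?&]ArtifactID=(\d+)", url)
print(match.group(1))  # 1003667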

3.2.73. Relativity CSV Overlay

This operation overlays the metadata from the specified overlay file to Relativity.

The following settings are required:

  • Fields mapping file: Path to JSON file mapping the Nuix Metadata profile to the Relativity workspace fields. If a mapping file is not provided, the columns in the CSV file will be mapped to fields with the same names in the Relativity workspace.

See more information on how to create a mapping file in the Relativity Loadfile Upload operation.

3.2.74. Relativity Property Query

This operation queries properties of a Relativity workspace and assigns them as parameters in Workflow.

3.2.75. Load Relativity Dynamic Objects

This operation loads dynamic objects (RDO) to Relativity, using the following settings:

  • Object type identifier: The Name, Artifact ID or Name (Like) of the object type.

  • Load objects in workspace: Determines if the objects will be loaded into a workspace.

If the option Load objects in workspace is selected, then the Set Relativity Workspace operation is required.
  • Objects: Tab separated list of the objects to be loaded.

Sample objects data:

Article Title	Article Type	Article Date	Is Available
Star Wars	Wikipedia Article	2022-11-10T00:00:01	Yes
Globex	Review Article	2022-11-10T00:00:01	No
The field Name is required and the operation will fail if the field is not present.

In addition to providing the values for the Objects manually, the user can also load from a TSV file using the same format as the example above.

When loading objects, the first row represents the fields of the object type, and the subsequent rows are the objects that will be evaluated and loaded into Relativity.

When using a field of type Single Object or Single Choice, use the name of the object or choice in the column. For example, given the field Department with the type Single Object and the field Department Group with the type Single Choice:

Name    Department  Department Group
John Doe    IT  Sales
Jane Doe    Marketing   Sales

When using a field of type Multiple Object or Multiple Choice, use the names of the objects or choices separated by commas (,). For example, given the field Hobbies with the type Multiple Object and the field Groups with the type Multiple Choice:

Name    Hobbies  Groups
John Doe    Hockey,Golfing  Rotary Club,Robotics
Jane Doe    Golfing,Skiing,Reading   Book Club,Crossfit

3.2.76. Create ARM Archive

This operation creates a Relativity ARM archive job, using the following settings:

  • Archive Directory: The path where the archive will be stored, for example \\INSTANCE007\Arhives\TestWorkspaceArchive

  • Use Default Archive Directory: Uses the default path to store your archive

When selecting an archive directory, a valid UNC path must be provided, for example: \\INSTANCE001\Arhives\NewArchive.
  • Priority: The priority of execution for the archive job: Low, Medium, High

  • Wait for Archive to Complete: Waits until the archive job completes.

  • Lock UI Job Actions: Determines if job actions normally available on UI should be visible for the user.

  • Notify Job Creator: Determines if email notifications will be sent to the job creator.

  • Notify Job Executor: Determines if email notifications will be sent to the job executor.

  • Include Database Backup: Include database backup in the archive.

  • Include dtSearch: Include dtSearch indices in the archive.

  • Include Conceptual Analytics: Include conceptual analytics indices in the archive.

  • Include Structured Analytics: Include structured analytics indices in the archive.

  • Include Data Grid: Include data grid application data in the archive.

  • Include Repository Files: Include all files included in workspace repository, including files from file fields in the archive.

  • Include Linked Files: Include all linked files that do not exist in the workspace file repository in the archive.

  • Missing File Behavior: Indicates whether to Skip File or Stop Job when missing files are detected during the archiving process.

Setting the Missing File Behavior to Stop Job will cause the archive job to stop / fail when there is a file missing.
  • Include Processing: Include processing application data in the archive.

  • Include Processing Files: Include all files and containers that have been discovered by processing in the archive.

When the option Include Processing Files is selected, the files will be located in the archive directory under the folder Invariant.
  • Missing Processing File Behavior: Indicates whether to Skip File or Stop Job when missing processing files are detected during the archiving process.

  • Include Extended Workspace Data: Include extended workspace information in the archive.

Extended workspace data includes installed applications, linked relativity scripts and non-application event handlers.
  • Application Error Export Behavior: Indicates whether to Skip Application or Stop Job on applications that encountered errors during export.

This operation requires the Relativity instance to have the ARM application installed.

3.2.77. Create Relativity ARM Restore

This operation creates an ARM restore job, using the following settings:

  • Archive Path: Path of the ARM archive to be restored, for example \\INSTANCE007\Arhives\TestWorkspaceRestore

The Archive Path provided must not be in use by another ARM job.
  • Priority: The priority of execution for the restore job: Low, Medium, High.

  • Lock UI Job Actions: Determines if job actions normally available on UI should be visible for the user.

  • Notify Job Creator: Determines if email notifications will be sent to the job creator.

  • Notify Job Executor: Determines if email notifications will be sent to the job executor.

  • Matter Identifier: The Name, Artifact ID or Name (Like) of the matter to restore to.

If a preceding Set Relativity Matter operation exists in the workflow, the matter from the Set Relativity Matter operation will be used; if there is a value in the Matter Identifier field, the matter set in the Matter Identifier field will be used instead.
  • Resource Pool Identifier: The resource pool to restore the workspace to. If this setting is not defined, the first available resource pool from the Relativity environment will be selected.

  • Database Server Identifier: The database server to restore the workspace to. If this setting is not defined, the first available database server from the Relativity environment will be selected.

  • Cache Location Identifier: The cache location to restore the workspace to. If this setting is not defined, the first available cache location from the Relativity environment will be selected.

  • File Repository Identifier: The file repository to restore the workspace to. If this setting is not defined, the first available file repository from the Relativity environment will be selected.

    • Reference Files as Archive Links: Determines if files should remain in the archive directory and should be referenced from the workspace database as opposed to copying them to the workspace repository.

    • Update Repository File Paths: Determines if repository file locations should be updated to reflect their new location.

    • Update Linked File Paths: Determines if non-repository file locations should be updated to reflect their new location.

    • Auto Map Users: Determines if archive users should be auto mapped by email address.

    • Auto Map Groups: Determines if archive groups should be auto mapped by name.

    • Structured Analytics Server: The Name, Artifact ID or Name (Like) of the structured analytics server. This field is only required when the archive the user is restoring contains structured analytics data.

    • Conceptual Analytics Server: The Name, Artifact ID or Name (Like) of the conceptual analytics server. This field is only required when the archive the user is restoring contains conceptual analytics data.

    • dtSearch Location Identifier: The Name, Artifact ID or Name (Like) of the dtSearch location. This field is only required when the archive the user is restoring contains dtSearch indexes.

    • Existing Target Database: Target database in case the archive does not have a database backup file.

This operation requires the Relativity instance to have the ARM application installed.

3.2.78. List Relativity Documents

This operation lists all documents present in the Relativity Workspace.

The following settings are available:

  • Scope query: Cross references the DocIDs from the Relativity workspaces against the documents in the Nuix case in this scope.

  • Tag matched items as: The tag to assign to documents in scope in the Nuix case which have the same DocIDs as documents from the Relativity workspace.

  • Export DocIDs under: The path and name of the file to which to write the list of DocIDs from the Relativity workspaces. Each line will contain a single DocID.

3.2.79. Add Relativity Script

This operation adds the specified script to the Workspace, using the following settings:

  • Script identifier: The script to add to the Relativity Workspace

  • Application identifier: Optional, the application the script will run under.

In order to add a script to the Relativity Workspace, first define it in the Relativity Script Library. The Relativity Script Library is located on the home page of Relativity under Applications & Scripts → Relativity Script Library.

3.2.80. Relativity Run Script

This operation runs a script in a Relativity workspace, or in the admin workspace.

Optionally, input values can be provided to the script. To determine the required input IDs and the allowed values, run the script without any inputs and inspect the Execution Log.

Once the script has completed, any errors will be stored in the parameter {last_relativity_script_error}.

The output of the script can be exported to a file of the following type:

  • CSV: Use the extension .csv

  • PDF: Use the extension .pdf

  • XLSX: Use the extension .xlsx. The export defaults to this option if no other format is matched.

3.2.81. Delete Relativity Script

This operation deletes the specified script, if it exists.

3.2.82. Manage Relativity dtSearch Index

This operation runs an index build on the dtSearch index, using the following settings:

  • dtSearch Index identifier: The dtSearch index to perform the actions on.

  • Index action: The index build operation to be performed on the index, one of the following:

    • Full build

    • Incremental build

    • Compress index

    • Activate index

    • Deactivate index

  • Wait for action completion: Waits for the build operation to finish before moving to the next operation.

3.2.83. Run Relativity Search Term Report

This operation runs a search term report on the Relativity instance, using the following settings:

  • Search Term Report Identifier: The search term report to run

  • Report Run Type: The report run type to be performed, the report run type is one of the following:

    • Run All Terms

    • Run Pending Terms

  • Report Results Location: Optional, the location to export the CSV results of the report

Once this operation has completed, the results will be stored as a JSON object in the parameter {relativity_search_term_results_json}. The results will be in the following format:

{
    "results": [
        {
            "Name": "apples",
            "Documents with hits": "16",
            "Documents with hits, including group": "0",
            "Unique hits": "",
            "Last run time": "2/10/2023 4:08 AM"
        },
        {
            "Name": "rampiva",
            "Documents with hits": "72",
            "Documents with hits, including group": "0",
            "Unique hits": "",
            "Last run time": "2/10/2023 4:08 AM"
        },
        {
            "Name": "sensitive",
            "Documents with hits": "2",
            "Documents with hits, including group": "0",
            "Unique hits": "",
            "Last run time": "2/10/2023 4:08 AM"
        }
    ]
}

The results of the search term report are stored in the results array, and the properties inside the objects are the fields corresponding to the view of the search term report results.

The parameter {relativity_search_term_results_json} can be used in a script to add logic based on the results of the search term report. For example, the following script only prints results that were seen at least once:

# Example script only showing terms with hits
results_object = parameters.getJsonObject("{relativity_search_term_results_json}")
results_array = results_object["results"]

# Header which indicates how many times it was seen
hits_header = "Documents with hits"

# Only print a result if it was seen at least one time
for result in results_array:
	if int(result[hits_header]) > 0:
		for key in result.keySet():
			print(key + ": " + result[key])

		# Separate results
		print("\n")
Reporting

This option generates a search terms report in an Excel format, based on a template file. The report uses the _REL_RUN_SEARCH_TERMS_ worksheet from the template.

See Processing Report for information on using a custom template.

3.2.84. Export Relativity Saved Searches

This operation converts saved searches to the Rampiva Relativity Query Language format and exports them to a CSV file, using the following settings:

  • Saved search export location: The location to export the CSV results

Once this operation has completed, the CSV file location will be stored in the parameter {relativity_saved_searches_file}.

Reporting

This option generates a saved search report in an Excel format, based on a template file. The report uses the _REL_EXPORT_SAVED_SEARCH_ worksheet from the template.

See Processing Report for information on using a custom template.

3.2.85. Create Relativity Saved Searches

This operation creates saved searches using Rampiva Relativity Query Language, using the following settings:

  • Saved Searches:

    • Folder: The folder path; if the path does not exist, it will be created

    • Name: The name of the query

    • Query: The Rampiva Relativity Query Language string that will be converted into the saved search

    • Scope: The scope of the saved search

    • Fields: The fields of the saved search, separated by commas (,)

    • Sorting: The sorting fields of the saved search. Sorting fields are separated by commas (,) and contain a sorting direction in square brackets ([]). For example, to sort by the Artifact ID ascending, provide Artifact ID [Ascending] in the sorting column. The only two possible values for the sorting direction are Ascending and Descending.

In addition to providing the values for the saved searches manually, the user can also load from a CSV or TSV file, for example:

Folder,Name,Query,Scope,Scope Folders,Fields,Sorting
Admin Searches,Produced Documents,[Bates Beg] is_set,WORKSPACE,,"Edit,File Icon,Control Number,Bates Beg,Bates End",Bates Beg [Ascending]
Admin Searches,Extracted Text Only,[Extracted Text] is_set,FOLDERS,Temp\\Tes,Extracted Text,

The saved searches can also be loaded from a file during the workflow execution, using the Saved searches file option.

Rampiva Relativity Query Language

Rampiva Relativity Query Language is a custom language used to create Relativity saved searches. It expresses the Relativity saved search creation form as a text-based query language, allowing workflows to automate the creation of saved searches.

The language is made up of expressions; each expression contains a document field name, an operator, and a value. Expressions are joined by an and or an or, which acts as a logical operator between the two expressions.

Expressions can also be grouped together to form logic groups, which contain one or more expressions inside parentheses. Expressions inside a logic group are evaluated together, and the result of the logic group is the combined result of the expressions inside. There is no limit to how deeply logic groups can be nested.
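
For illustration, an expression with nested logic groups, using operators described in the table below, might look like:

[Custodian] is_set and ([Email Subject] contains "Confidential" or ([Attachment Count] is_greater_than "0" and [Date Sent] is_after 2022-01-01))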

Document Field Name

The document field name corresponds to Document Fields in Relativity. To declare the document field name in an expression, enclose the field name within square brackets. For example, to use the field name Control Number, declare it in the expression as [Control Number].

When using a Saved Search or an Index Search as a document field in the expression, they are declared as follows: [Saved Search] for saved searches and [Index Search] for index searches.
Operator

The operator of an expression defines how the value is evaluated. There are two kinds of operators: Binary Operators, which expect a value, and Unary Operators, which do not require a value. To declare the operator in an expression, first declare the document field name and then provide one of the operators listed in the table below.

Operator                        Example

is                              [Control Number] is "Example"
is_not                          [Control Number] is_not "Example"
is_set                          [Artifact ID] is_set
is_not_set                      [Artifact ID] is_not_set
is_logged_in                    [Created By] is_logged_in
is_not_logged_in                [Created By] is_not_logged_in
is_like                         [Folder Name] is_like "FolderA"
is_not_like                     [Control Folder Name] is_not_like "FolderA"
is_less_than                    [Attachment Count] is_less_than "3"
is_less_than_or_equal_to        [Attachment Count] is_less_than_or_equal_to "12"
is_greater_than                 [Attachment Count] is_greater_than "9"
is_greater_than_or_equal_to     [Attachment Count] is_greater_than_or_equal_to "5"
starts_with                     [Email Subject] starts_with "Confidential"
does_not_start_with             [Email Subject] does_not_start_with "Redacted"
ends_with                       [Title] ends_with "signing"
does_not_end_with               [Title] does_not_end_with "signing"
contains                        [Email From] contains "@example.com"
does_not_contain                [Email From] does_not_contain "@example.com"
is_any_of_these                 [Custodian] is_any_of_these [1023221, 2254568]
is_none_of_these                [Custodian] is_none_of_these [1023221, 2254568]
is_all_of_these                 [Submitted By] is_all_of_these [1024881]
is_not_all_of_these             [Submitted By] is_not_all_of_these [1024881, 102568]
is_in                           [Folder] is_in [1025681, 1024881, 1032568]
is_not_in                       [Folder] is_not_in [1025681, 1024881, 1032568]
is_before                       [Sort Date] is_before 2022-04-05
is_before_or_on                 [Sort Date] is_before_or_on 2022-04-05
is_after                        [Date Last Modified] is_after 2021-12-09T15:36:00
is_after_or_on                  [Date Last Modified] is_after_or_on 2021-12-09T15:36:00
is_between                      [Date Added] is_between 2019-01-01 - 2023-01-01
is_not_between                  [Date Added] is_not_between 2019-01-01 - 2023-01-01

Value

The value of the expression defines what value the user expects from the document field. To declare a value in an expression, first declare the document field name and operator, then provide either text or a number inside double quotes, for example "Value 8", or a list of integers inside square brackets, for example [102889, 1025568]. The integers inside the square brackets correspond to Artifact IDs of objects in Relativity.

Values can also be declared as dates; date values do not need to be inside double quotes or square brackets. When declaring a date value, only specific operators can be used. The operators that support date values are is, is_not, is_in, is_not_in, is_before, is_before_or_on, is_after, is_after_or_on, is_between, and is_not_between.

The operators is_between and is_not_between can only take two date or date time values separated by a -. For example: 2019-01-01 - 2023-01-01 or 2019-01-01T00:00:00 - 2022-12-31T23:59:59

A date value can be one of the following formats:

  • Date: The format of a date is Year (4 digits) - Month (2 digits) - Day (2 digits). For example 2023-04-13

  • Date Time: The format of a date time is Year (4 digits) - Month (2 digits) - Day (2 digits) T Hour (2 digits) : Minute (2 digits) : Seconds (2 digits). Optionally, milliseconds can be declared by adding . followed by 1 to 9 digits. For example: 2019-05-10T05:00:13 or 2019-05-10T05:00:13.8754

  • Month: The format of month is the name of a month capitalized. For example: March or July

  • This week: The format of this week is the lower case words, for example this week

  • This month: The format of this month is the lower case words, for example this month

  • Next week: The format of next week is the lower case words, for example next week

  • Last week: The format of last week is the lower case words, for example last week

  • Last 7 days: The format of last 7 days is the lower case words, for example last 7 days

  • Last 30 days: The format of last 30 days is the lower case words, for example last 30 days

Example Saved Search Queries

Emails with attachments between two dates:

[Email Subject] is_set and ([Number of Attachments] is_not "0" and [Date Sent] is_between 2021-08-04 - 2023-02-28T23:59:59.997)

Produced documents with production errors, sorted by file size:

[Bates Beg] is_set and [Production Errors] is "true"

Documents without extracted text:

[Extracted Text] is_not_set or [Extracted Text Size] is "0"

3.2.86. Query Relativity Workspace Overwritten Permissions

This operation exports overridden inherited permissions, using the following settings:

  • Permissions output file: The location to export the permissions JSON file

  • Object scope:

    • Object type: The type of the object, for example Folder

    • Object name: Optionally, the name of the object, for example Staging. To query all objects with a specific type, leave the name field blank.

In addition to providing the values for the object types manually, the user can also load from a CSV or TSV file, for example:

Folder  Admin
Folder  Staging
View

The object scope can also be loaded from a file during the workflow execution, using the Object scope file option.

3.2.87. Apply Relativity Workspace Overwritten Permissions

Before running this operation on a production workspace, run this operation on a test workspace created from the same template, or perform a backup of the production workspace, to ensure the desired outcome is achieved.

This operation applies overridden inherited permissions using the following settings:

  • Match objects by:

    • Artifact ID & Name: Objects from the target workspace must have the same name and Artifact ID as the objects from the permissions file to be a match.

    • Name: Objects from the target workspace must have the same name as the objects from the permissions file.

  • New object behavior: The action to take when an object is identified in the target workspace, but the object does not exist in the permissions file:

    • Do not change permissions for objects not present in the permissions file

    • Reset permissions for objects not present in the permissions file

  • Skip objects: (Optional) Skip applying permissions on objects defined in the table

    • Object type: The object type name, for example View

    • Object name: The name of the object

  • Overwritten permissions file: A permissions file created by the Query Relativity Workspace Overwritten Permissions operation.

  • Overwritten permissions JSON: Optionally, the content of a permissions file.

Reporting

This option generates an overwritten permissions report in an Excel format, based on a template file. The report uses the _REL_OVERWRITTEN_PERMISSIONS_ worksheet from the template.

See Processing Report for information on using a custom template.

3.2.88. Call Relativity API

This operation will make an API call to Relativity using the current configuration from the Configure Relativity Connection operation, using the following settings:

  • Verb: The HTTP verb, such as GET or POST.

  • Endpoint: The endpoint on the Relativity API.

  • Parameters: Optional, URL parameters.

  • Body: The JSON request.

Once the API call has completed, the following parameters will be populated:

  • {relativity_call_api_response_code}: The HTTP response code.

  • {relativity_call_api_response_headers}: The response headers, JSON encoded.

  • {relativity_call_api_response_body}: The response body.

3.2.89. Delete Relativity Saved Searches

This operation deletes a specified saved search or all saved searches from a workspace.

3.2.90. Run Relativity Imaging Set

This operation runs the specified imaging set, using the following settings:

  • Imaging set identifier: The Name, Artifact ID or Name (Like) of the imaging set.

  • Hide images for QC review: When enabled, it prevents users from viewing images until the QC review process is complete.

  • Wait for completion: Waits until the imaging set has finished running.

3.2.91. Delete Relativity Index

This operation deletes the specified index, if it exists.

3.2.92. Create Relativity Analytic Index

This operation creates an analytic index, using the following settings:

  • Name: The name of the analytic index

  • Index type: The type of index, Conceptual or Classification

  • Saved search identifier: The Name, Artifact ID or Name (Like) of the saved search

  • Analytics server identifier: The Name, Artifact ID or Name (Like) of the analytics server

  • Order: The order of the index as seen in dropdowns inside Relativity. For example, setting the value to 1 causes the index to appear first in all dropdowns.

  • Email notification recipients file: (Optional) The list of email recipients notified during index population and build, for example:

Email Notification Recipient
usera@example.com
userb@example.com
userc@example.com

In addition to the settings above, conceptual analytic indexes have the following advanced options:

  • Advanced Options

    • Concept stop words file: (Optional) A file containing words to suppress from the index

    • Continue index steps to completion: (Optional) Indicates whether to automatically complete all steps necessary to activate an analytics index after starting a step

    • Dimensions: (Optional) The number of dimensions of the concept space in which documents are mapped when the index is built

    • Enable email header filter: (Optional) Removes common header fields (such as To, From, and Date) and reply-indicator lines

    • Optimize training set: (Optional) Indicates whether to select only conceptually relevant documents from the training set saved search

    • Remove documents that errored during population: (Optional) Removes documents from being populated when they have errored in a previous population

    • Remove English signatures and footers: (Optional) Indicates whether to remove signatures and footers in English language emails

    • Repeated content filters file: (Optional) A file containing repeated content filters associated with the index

    • Training set: (Optional) The Name, Artifact ID or Name (Like) of the saved search for training

Example Concept stop words file:

Stop Words
and
a
can

Example Repeated content filters file (filters are identified by name):

Content Filters
Credit Card Regex Filter
Email Address Filter

3.2.93. Run Relativity Saved Searches

This operation runs saved searches on the Relativity instance and returns the item count, using the following settings:

  • Run options: How the saved searches to run are selected:

    • All saved searches in workspace: Runs all saved searches in the workspace

    • All saved searches under search container: Runs all saved searches under the specified search container

    • A single saved search: Runs the specified saved search

  • Saved search identifier: The Name, Artifact ID or Name (Like) of the saved search

  • Search container identifier: The Name, Artifact ID or Name (Like) of the search container

Once this operation has completed, the results will be stored as a JSON object in the parameter {relativity_run_saved_search_results_json}. The results will be in the following format:

{
    "results": [
        {
            "Name": "All Documents",
            "Query": "[Artifact ID] is_set",
            "Hits": 163,
            "Folder": "Admin Searches\\Tests"
        },
        {
            "Name": "Extracted Text Only",
            "Query": "[Extracted Text] is_set",
            "Hits": 113,
            "Folder": ""
        },
        {
            "Name": "Produced Documents",
            "Query": "[Control Number] is_set and [Document] is \"true\"",
            "Hits": 65,
            "Folder": "Admin Searches"
        }
    ]
}

The results of the saved searches that ran are stored in the results array. The properties inside each result object are:

  • Name: The name of the saved search

  • Query: The query of the saved search

  • Artifact ID: The artifact ID of the saved search

  • Folder: The path of the search container under which the saved search is located, or blank if the saved search is not in a container

  • Hits: The number of documents returned when running the saved search

The parameter {relativity_run_saved_search_results_json} can be used in a script to apply logic to the saved search results; for example, the following script only prints results that have at least one hit:

# Example script only showing saved searches with at least one document
results_object = parameters.getJsonObject("{relativity_run_saved_search_results_json}")
results_array = results_object["results"]

# Only print a result if it has at least one document
for result in results_array:
	if int(result["Hits"]) > 0:
		print("Folder: " + result["Folder"])
		print("Name: " + result["Name"])
		print("Query: " + result["Query"])
		print("Hits: " + str(result["Hits"]))

	# Separate results
	print("\n")
Reporting

This option generates a saved search report in an Excel format, based on a template file. The report uses the _REL_RUN_SAVED_SEARCH_ worksheet from the template.

See Processing Report for information on using a custom template.

3.2.94. Manage Relativity Analytic Index

This operation runs an index action on the specified analytic index, using the following settings:

  • Analytic index identifier: The Name, Artifact ID or Name (Regex) of the analytic index

  • Analytic index type: The analytic index type, Conceptual or Classification

  • Existing analytic job action: The behavior when an existing analytic index job is found

    • Skip running an analytic index job if another one is in progress for the same index

    • Stop the currently running analytics job, and start a new job

  • Index action: The action to perform on the analytics index

    • Full population: Runs full index population

    • Incremental population: Runs an incremental population

    • Build index: Runs a full index build

    • Retry errors: Retries errors that occurred during population

    • Remove documents in error: Removes documents that errored

    • Activate: Activates the index for querying

    • Deactivate: Disables queries on the index

  • Wait for completion: Waits for the index job to complete

When using the index action Build index on an analytic index, the analytic index must be deactivated.

3.2.95. Run Relativity OCR Set

This operation runs the specified OCR set, using the following settings:

  • OCR set identifier: The Name, Artifact ID or Name (Like) of the OCR set.

  • Existing OCR set job action: Action to take when an existing OCR set job is currently running

    • Stop: Stop the currently running OCR set job, and start a new job

    • Skip: Skip running an OCR set job if another one is in progress for the same set

  • Wait for completion: Waits until the OCR set has finished running.

3.2.96. Export Relativity Metadata

This operation exports the specified metadata type, using the following settings:

The output of the view can be exported to a file of the following type:

  • CSV: Use the extension .csv

  • PDF: Use the extension .pdf

  • XLSX: Use the extension .xlsx. The export defaults to this format if no other format is matched.

3.2.97. Create Relativity Production Set

This operation creates a Production set, using the following settings:

  • Name: The name of the production set

  • Production data sources: The data sources for the production

    • Data source name: The name of the data source

    • Data source type: The type of data to produce, one of the following: Images, Natives, or Images and Natives

    • Saved search identifier: The Name, Artifact ID or Name (Like) of the saved search

    • Image placeholder: The action to perform when using image placeholders, either Never use image placeholder, Always use image placeholder, or When no image exists

    • Placeholder identifier: The Name, Artifact ID or Name (Like) of the placeholder

    • Markup set identifier: The Name, Artifact ID or Name (Like) of the markup set

    • Burn redactions: Whether to burn redactions when producing image type productions

  • Create production set from template: Create a new production set using the settings of an existing production set

    • Production set template identifier: The Name, Artifact ID or Name (Like) of the production set template

    • Production set exists in another workspace: Enabling this option lets the user copy production set template settings from any workspace

    • Workspace identifier: The Name, Artifact ID or Name (Like) of the template production set workspace

  • Create production set from settings: Create a new production set using the settings in the operation

    • Numbering type: The document numbering type

    • Prefix: The string shown before the Bates number

    • Suffix: (Optional) The string shown after the Bates number

    • Start Number: The initial starting Bates number

    • Number of numbering digits: The number of digits used for document-level numbering, range 1-7

    • Branding font: The type of font used for branding

    • Branding font size: The size of font to use for branding

    • Scale branding font: Causes the branding font to scale

    • Wrap branding font: Causes branding text to wrap when it overlaps with adjacent headers or footers

3.2.98. Run Relativity Production Set

This operation runs a Production set, using the following settings:

  • Production set identifier: The Name, Artifact ID or Name (Like) of the production set

  • Production set action: The action to perform on the production set

    • Stage: Stages the production set to prepare for producing documents

    • Run: Starts a job on the production set and produces staged documents

    • Stage and Run: Stages the production set to prepare for producing documents and then immediately starts a job on the production set

  • Wait for completion: Waits for the production set to finish before moving on to the next operation

3.2.99. Set Brainspace Dataset

This operation connects to the Brainspace environment and retrieves the specified dataset ID, using the following settings:

  • Brainspace API URL: The URL of the Brainspace environment, for example https://app.brainspace.local

  • Certificate fingerprint: Optional, the SHA-256 fingerprint of the Brainspace app server certificate that should be trusted even if the certificate is self-signed.

  • API key: The API key. This value can be obtained from the Brainspace Administration page → Connectors → API Authentication.

  • Dataset identifier:

    • ID: The Brainspace dataset ID.

    • Name: The Brainspace dataset name.

    • Name (Regex): A regular expression to match the Brainspace dataset name by.

  • Existing dataset: The action to take if the dataset does not exist:

    • Clone dataset if it does not already exist creates a new dataset by cloning the source dataset.

    • Only use existing dataset triggers an error if the dataset does not exist.

  • Clone settings: The settings to use when cloning a dataset.

    • Copy groups: Copy the groups of the source dataset to the newly created dataset.

    • Add new dataset to group: Adds the newly created dataset to the specified group.

3.2.100. Load Items to Brainspace

This operation exports the text and metadata of items from the Nuix case and loads it to Brainspace.

The following settings can be configured:

  • Scope query: The Nuix query to select the items to load into Brainspace.

  • Export standard metadata: Export the items' standard metadata to Brainspace.

  • Export custom metadata from profile: Optional, the metadata profile to use for additional metadata to export to Brainspace. When using this option, a Custom field mapping file must be provided.

  • Custom fields mapping file: The JSON mapping file defining the mapping of the custom metadata profile to Brainspace.

  • Export DocIDs from production set: If checked, the name of the production set to export the DocID numbers from.

  • Trim body text at: If checked, the size in characters after which the body text of items is trimmed before loading to Brainspace.

When the body text of items is trimmed, the field Text Trimmed is set to true in Brainspace on the items in question.
The Tag failed items as option has the same behavior as in the Legal Export operation.

Sample Custom fields mapping file mapping 2 custom Nuix fields named Custom Field 1 and Custom Field 2:

{
  "name": "Custom Mapping",
  "fields": [
    {
      "name": "Custom Field 1",
      "mapTo": "STRING"
    },
    {
      "name": "Custom Field 2",
      "mapTo": "ENUMERATION",
      "faceted": true
    }
  ]
}

3.2.101. Manage Brainspace Build

This operation manages the builds on the Brainspace dataset.

The following settings can be configured:

  • Wait for previous build to complete: Waits for a build that was running at the operation start to complete.

  • Build dataset: Triggers a build of the dataset.

The Build dataset option should be used after the Load Items to Brainspace operation to make the loaded items available for review.
  • Wait for build to complete: Waits for the build triggered by this operation to complete.

If a Wait option is selected and the build does not complete in the allotted time, the operation will fail.
The percentage progress of this operation reflects the elapsed timeout and is not an indication of the build progress.

3.2.102. Propagate Tags to Brainspace

This operation propagates tag values from Nuix items to the corresponding Brainspace documents as tag choices.

The following settings can be configured:

  • Scope query: The query to retrieve the Nuix items for which to propagate tags.

  • Nuix root tag: The name of the Nuix root tag.

When using this operation, it is expected that in Nuix, a root tag is created, for example Relevancy. Then, the Nuix items should be assigned subtag values under the root tag, for example Relevancy|Relevant and Relevancy|Not Relevant. The root Nuix tag will be mapped to a Brainspace tag (Relevancy in this example), and the Nuix subtag values will be mapped to Brainspace choices (Relevant and Not Relevant in this example.)
The Nuix items should only have one subtag value, because in Brainspace these are mapped to single-choice tags.
Nested subtags values such as Relevancy|Not Relevant|Personal are not supported.
This operation updates previous tag choices, but does not update items for which in Nuix no subtag exists. As a workaround, to indicate that a document should not have any of the previous tag choices, assign it to a new dedicated choice, for example Relevancy|Unassigned.

3.2.103. Retrieve Metadata from Brainspace

This operation reads metadata from items in Brainspace and applies it to the Nuix items.

The following settings can be configured:

  • Nuix scope query: The Nuix query to select the items to update.

  • Brainspace scope:

    • All Items: Retrieve metadata from all Brainspace items in the dataset.

    • Notebook: Only retrieve metadata from the Brainspace items in the specified Notebook.

  • Tag matching items: The tag to apply to the Nuix items that were matched to Brainspace items.

  • Retrieve Brainspace tags: Select whether to retrieve the tags assigned to items in Brainspace, and what prefix to use when applying the tags to the matching Nuix items.

  • Retrieve Brainspace classifier scores: Select whether to retrieve the values of Brainspace fields corresponding to classifiers. These fields are identified as having a numeric type and the word score in their name.

  • Retrieve Brainspace fields: Select whether to retrieve metadata fields from Brainspace to be assigned as custom metadata to the Nuix items, and which Brainspace fields to retrieve.

3.2.104. Azure Container Download

This operation downloads the contents of an Azure Container.

The following settings can be configured:

3.2.105. Convert Purview Export

This operation converts an export from Microsoft Purview into a Nuix Logical Image (NLI) format, preserving the metadata from Purview as well as the family relationships.

The following settings can be configured:

  • Purview export folder: The folder where the Purview data was exported to.

  • Resulting NLI location: The location of the resulting NLI.

3.2.106. Case Subset Export

This operation will export the items in scope in a case subset under the specified parameters.

3.2.107. Export Items

This operation exports items to the specified Export folder.

The Path options option will export items into a Single directory or Recreate the directory structure of the original data.

The Convert emails to option will export the native emails to the selected format.

By default, only the items that are exported are tracked in the utilization database. When selecting the option Track material descendants of exported items in utilization data, in addition to tracking items that are exported, the material descendants of these items are also tracked.

3.2.108. Logical Image Export

This operation will export the items in scope in a Nuix Logical Image (NLI) container.

3.2.109. Metadata Export

This operation will export the metadata of items matching the scope query, using the selected metadata profile.

The following sort orders can be applied:

  • No sorting: Items are not sorted.

  • Top-level item date (ascending): Items are sorted according to the date of the top-level item in each family, in ascending order.

  • Top-level item date (descending): Items are sorted according to the date of the top-level item in each family, in descending order.

  • Evidence order (ascending): Items are sorted in the same way in which they appear in the evidence tree, in ascending order.

The Max Path Depth option does not offer any performance advantages - all items matching the scope query are processed, and items exceeding the max path depth are not written to the resulting file.

3.2.110. Word-List Export

This operation exports a list of words from the items matching the scope query.

The words are extracted by splitting the text of each item using the supplied regex.

Sample regex to extract words containing only letters and numbers:

[^a-zA-Z0-9]+

Sample regex to extract words containing any character, separated by a whitespace character (i.e. a space, a tab, a line break, or a form feed)

\s+

Words which are shorter than the min or longer than the max length supplied are ignored.
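
As an illustration of how the splitting regex behaves (outside of the operation itself), a minimal Python sketch splitting a sample text with the whitespace regex shown above:

import re

sample_text = "From: User A\tSubject: Quarterly report"
# Split on runs of whitespace characters and discard empty strings
words = [w for w in re.split(r"\s+", sample_text) if w]
print words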

3.2.111. SQL Command

This operation connects to a SQL database and runs SQL commands using the following options:

  • SQL platform: The SQL platform that commands will run on, either Microsoft SQL (using the JTDS or Native driver) or PostgreSQL.

  • SQL server name: The SQL host name, for example localhost.

  • Port: The SQL host port, for example 1433 for Microsoft SQL, 5432 for PostgreSQL.

  • Encryption: The requirement for encrypted JTDS connections:

    • Disabled: Does not use encryption.

    • Requested: Attempts to use an encrypted connection if supported by the server

    • Required: Requires the use of an encrypted connection.

    • Signed: Requires the use of an encrypted connection, signed with a certificate in the Java Trust Store.

  • Instance: The Microsoft SQL instance, for example SQLEXPRESS, or blank for the default instance.

  • Domain: The Windows domain for the Microsoft SQL authentication, or blank for Integrated Authentication.

  • Username: The username used to connect to the database, or blank for Integrated Authentication.

  • Password: The password used to connect to the database, or blank for Integrated Authentication.

  • Database: The SQL database to run SQL commands on.

When no database is specified using the SQL platform PostgreSQL, the operation will try to connect to the postgres database. Additionally, when creating a database with PostgreSQL, the database cannot be altered with the same query. To alter the database created, another SQL Command operation is required.
  • SQL query: The SQL query to run.

This operation can be used to create the database required to run other SQL operations.

Sample SQL query to create a database:

CREATE DATABASE rampiva;

3.2.112. Metadata To SQL

This operation exports the metadata of items matching the scope query to Microsoft SQL (using the JTDS or Native driver) or PostgreSQL.

When the table specified does not exist, this operation will try to determine each column type from the metadata fields in the selected metadata profile and create a SQL table with the detected column types.

When creating a SQL table, the type NVARCHAR(MAX) will be used in Microsoft SQL and the type TEXT will be used in PostgreSQL when unable to determine the metadata field type.

3.2.113. Query From SQL

This operation queries data from a SQL database and adds custom metadata to the items in the scope, as well as exports the queried data to a CSV file.

The first table column name needs to be either GUID or DocID. The subsequent columns correspond to the metadata fields to be assigned.

Column aliases can be used in lieu of columns with the names GUID or DocID.

Sample Microsoft SQL query with column aliases:

SELECT [Header One] as 'GUID'
      ,[Header Two] as 'File Type'
      ,[Header Two] as 'File Path'
  FROM [TEST TABLE]

Sample PostgreSQL query with column aliases:

SELECT "Header One" as "GUID"
      ,"Header Two" as "File Type"
      ,"Header Two" as "File Path"
  FROM test_table

3.2.114. Notify

This operation sends an email notification with a customized message.

If the Email Notification option is selected, an email will be sent to the specified email address. To obtain information about the SMTP email server and port used in the environment, contact the network administrator.

The value entered in the Password field will be stored in clear text in the workflow file - a password SHOULD NOT be entered in this field. Instead, set this field to a protected parameter name, for example {smtp_password}, and see the section Protected Parameters for instructions on how to set protected parameter values.

The following additional options can be configured:

  • Attach workflow execution log as text: Select this option to attach a file named WorkflowLog.txt to the email, containing the current Execution Log.

  • Attach last generated report, if available: Select this option to attach the last generated report file.

  • Additional attachments: Specify additional files that should be attached to the email.

To attach multiple reports to a notification email, define and store the paths to those files using parameters, and then use those parameters in the Additional attachments section.
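
For example, a Script operation placed before the Notify operation could store the report locations in parameters; the paths and parameter names below are hypothetical:

# Hypothetical report locations stored in parameters, which can then be referenced
# in the Additional attachments field of the Notify operation
parameters.put("{processing_report_path}", "C:\\Reports\\ProcessingReport.xlsx")
parameters.put("{tree_report_path}", "C:\\Reports\\TreeSizeCountReport.xlsx")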

3.2.115. Processing Report

This operation generates a processing report in an Excel format, based on a template file.

If a custom template is not specified, the operation will use the default Rampiva template. To create a custom template, first run the Processing Report operation with default settings. Then, make a copy of the latest template file from %userprofile%\.rampiva\Workflow For Nuix\Templates. When finished, modify the workflow to point to the newly created custom template file.
Processing Stages

A processing stage consists of a subset of items from the case, identified by a Nuix query and with an associated method to compute the size. The following size methods are available:

  • Audited size: The Nuix audited size.

  • File size: The Nuix file size.

  • Text size: The size of the text.

  • Audited + Text size: The audited size plus the size of the text.

  • Audited (attachments 2x): The audited size, with the attachments size included twice. This can be an estimate of the size of a legal export with the option to leave attachments on emails.

  • Audited (attachments 2x) + Text size: The audited size, with the attachments size included twice, plus the size of the text.

  • Digest size: The digest size. If the item does not have a digest, fallback to the file size. If the item is not a file, fallback to the audited size.

The default options from this operation generate a report with a predefined number of stages:

  • Source data

  • Extracted

  • Material

  • Post exclusions

  • Post deduplication

  • Export

Views

Views are used to define how the data is displayed in a report sheet, including the vertical and horizontal columns, the processing stage for which the view applies, the option to calculate the count and/or size of items, and the size unit.

The default options include several predefined views, with each view corresponding to a sheet in the Excel report:

  • Processing overview

  • Material items by custodian

  • Export items by custodian

  • Material items by year

  • Export items by year

  • Material items by type

  • Export items by type

  • Material items by extension

  • Export items by extension

  • Material images by dimensions

  • Export images by dimensions

  • Irregular items

  • Exclusions by type

By default, sizes are reported in Gibibytes (GiB). 1 GiB = 1024 x 1024 x 1024 bytes = 1,073,741,824 bytes. The size unit can be changed in the view options pane.

Each stage and view can be customized or removed, and new stages and views can be added.

If the parameter {report_password} is set, the resulting Excel file will be encrypted with the password provided.
Generate Processing Report from multiple cases

The Additional cases option can be used to generate a single report from multiple cases, by specifying the location of the additional cases that need to be considered. Items are evaluated from the main workflow case first, and then from the additional cases, in the order provided. If an item exists in multiple cases with the same GUID, only the first instance of the item is reported on.

When using the Additional cases option to report on a case subset as well as the original case, run the report from the case subset and add the original case in the Additional cases list. This will have the effect of reporting on the case subset items first, and ignoring the identical copies of these items from the original case.

3.2.116. Scan Case Statistics

This operation scans the case for evidence containers, custodians, languages, tags, date ranges (by month), item sets, production sets, and exclusions, and for each of these tracks the count of all items, the count and size of audited items, and the count and size of physical items.

The resulting JSON file is stored in the case folder Stores\Statistics and sent to Rampiva Scheduler for centralized reporting.

The following additional options can be configured:

  • Case History: Enables the scanning of the case history to extract sessions, operations and volumes.

  • Compute Size: The methods used to compute the size of items.

  • Max scan duration (seconds): Stop scanning further case details after this time is reached.

  • Native Export: Include non-exported material children: If selected, when a Native Export event is detected in the case history, the material children of the exported items are also included in the export scope.

  • Force scan previously scanned case: Re-scan a case even if it was previously scanned and no new events were detected.

  • Don’t skip Rampiva Engine sessions: By default, sessions run by the Rampiva Engine are skipped during the case history scan. If enabled, this option will also scan sessions run by the Rampiva Engine. Use this option when rebuilding the Scheduler Utilization database.

3.2.117. Tree Size Count Report

This operation will generate a tree report including the size and count of items in the scope.

If the first elements from the path of items should not be included in the report, such as the Evidence Container name and Logical Evidence File name, increase the value of the Omit path prefixes option.

The Max path depth option limits the number of nested items for which the report will be generated.

See Processing Report for information on using a custom template and size units.

3.2.118. Call API

This operation will make an API call.

The following options can be configured:

  • Verb: The HTTP verb, such as GET or POST.

  • URL: The URL.

  • Certificate fingerprint: Optional, the certificate SHA-256 fingerprint that should be trusted even if the certificate is self-signed.

  • Authentication type: The type of authentication that the API requires.

    • No Auth: No authentication.

    • API Key: Provide the API key name and API key value that will be set as headers.

    • Bearer Token: Provide the Token value.

    • Basic Auth: Provide the Username and Password.

  • Parameters: Optional, URL parameters.

  • Headers: Optional, custom HTTP headers.

  • Body type: The type of body data to submit.

    • None: No data to submit.

    • Form Data: Provide the form field Names and Values.

    • Raw: Provide the Body type and data.

    • Binary: Provide the File location containing the binary data.

Once the API call has completed, the following parameters will be populated:

  • {call_api_response_code}: The HTTP response code.

  • {call_api_response_headers}: The response headers, JSON encoded.

  • {call_api_response_body}: The response body.
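
For illustration, a minimal Python sketch (hypothetical logic) that could be used in a subsequent Script operation to inspect the response, using the parameters helper object described in the Script operation; it assumes the response headers deserialize to a simple name-to-value map and that a Content-Type header is present:

# Print the response code, a header of interest, and the raw response body
response_code = parameters.get("{call_api_response_code}")
headers = parameters.getJsonObject("{call_api_response_headers}", None)
body = parameters.get("{call_api_response_body}")

print "Response code: "+str(response_code)
if headers is not None:
    print "Content-Type: "+str(headers.get("Content-Type"))
print body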

3.2.119. Run External Application

This operation will run an executable file with specified arguments and wait for it to finish.

Example for copying a folder using robocopy:

  • Application location: C:\Windows\System32\Robocopy.exe

  • Arguments: "C:\Program Files\Rampiva" "C:\Temp\Rampiva" /E

Example for listing a folder using cmd.exe, and redirecting the output to a text file in the C:\Temp folder:

  • Application location: C:\Windows\System32\cmd.exe

  • Arguments: /c dir "C:\Program Files" > "listing_{date_time}.txt"

  • Working Directory: C:\Temp

3.2.120. Script

This operation will run either the Script code supplied or the code from a Script file in the context of the Nuix case.

This operation can be used to integrate existing in-house scripts in a workflow.
Static parameters

All case parameters are evaluated before the script is started, and can be accessed as attributes in the script execution context without the curly brackets. For example, to print the contents of the case folder, the following Python script can be used:

import os

print "Contents of case folder: "+case_folder
for f in os.listdir(case_folder):
	print f
Dynamic parameters

The parameters helper object can be used to get and set the value of dynamic parameters:

  • get(String name) - Get the value of the parameter with the name supplied as a String. If the parameter is not defined, return the parameter name.

  • get(String name, Object defaultValue) - Get the value of the parameter with the name supplied as a String. If the parameter is not defined, the default value is returned.

  • put(String name, String value) - Set the value of the parameter with the name supplied. If the name supplied is not a valid parameter name, it will be normalized.

  • getAllParameterNames() - Returns a list of all parameter names, including system parameters, user-defined parameters, and parameters supplied in the Execution Profile

Example of setting and retrieving parameters:

# Setting parameter {param1}
parameters.put("{param1}","Test Value from Script1")
print "Parameter {param1} has value: "+parameters.get("{param1}")

# Attempting to get undefined parameter {param2}
parameterValue = parameters.get("{param2}",None)
print "Parameter {param2} has value: "+str(parameterValue)

Output:

Parameter {param1} has value: Test Value from Script1
Parameter {param2} has value: None

Additionally, to get the values of parameters converted to specific types, use the methods below:

  • getLong(String name) - Get the value of the parameter with the name supplied as a Long number. If the parameter is not defined or can’t be converted, an exception is thrown.

  • getLong(String name, long defaultValue) - Get the value of the parameter with the name supplied as a Long number. If the parameter is not defined or can’t be converted, the default value is returned.

  • putLong(String name, long value) - Convert the Long number value and store in the parameter.

  • getBoolean(String name) - Get the value of the parameter with the name supplied as a Boolean. If the parameter is not defined or can’t be converted, an exception is thrown.

  • getBoolean(String name, boolean defaultValue) - Get the value of the parameter with the name supplied as a Boolean. If the parameter is not defined or can’t be converted, the default value is returned.

  • putBoolean(String name, boolean value) - Convert the Boolean value and store in the parameter.

  • getDouble(String name) - Get the value of the parameter with the name supplied as a Double number. If the parameter is not defined or can’t be converted, an exception is thrown.

  • getDouble(String name, double defaultValue) - Get the value of the parameter with the name supplied as a Double number. If the parameter is not defined or can’t be converted, the default value is returned.

  • putDouble(String name, double value) - Convert the Double number value and store in the parameter.

  • getJsonObject(String name) - Get the value of the parameter with the name supplied as a deserialized JSON object. If the parameter is not defined or can’t be deserialized as a JSON object, an exception is thrown.

  • getJsonObject(String name, Object defaultValue) - Get the value of the parameter with the name supplied as a deserialized JSON object. If the parameter is not defined or can’t be deserialized as a JSON object, the default value is returned.

  • putJsonObject(String name, Object value) - Serialize the value as a JSON string and store in the parameter.

When converting the parameter values to a JSON object, the resulting object type is inferred during deserialization and might be different than the original type.

Example of getting and setting typed parameters:

# Defining a Python dictionary
dictionary={}
dictionary["number"]=5
dictionary["color"]="Orange"
print "Original dictionary:"
print type(dictionary)
print dictionary

# Storing the dictionary as a parameter
parameters.putJsonObject("{sample_dictionary}",dictionary)

# Getting the parameter as an object
retrievedDictionary = parameters.getJsonObject("{sample_dictionary}")
print "Deserialized dictionary:"
print type(retrievedDictionary)
print retrievedDictionary

Output:

Original dictionary:
<type 'dict'>
{'color': 'Orange', 'number': 5}

Deserialized dictionary:
<type 'com.google.gson.internal.LinkedTreeMap'>
{u'color': u'Orange', u'number': 5.0}
See section Parameters for a list of built-in parameters.
For assistance with creating custom scripts or for integrating existing scripts into Rampiva Workflow, please contact us at info@rampiva.com.
Workflow Execution

The workflow execution can be manipulated live from the Script operation using the following methods from the workflowExecution helper object:

  • stop() - Stops the workflow execution

  • pause() - Pauses the workflow execution

  • log(String message) - Adds the message to the workflow execution log

  • logInfo(String message) - Adds the message to the workflow info list

  • logWarning(String message) - Adds the message to the workflow warnings

  • triggerError(String message) - Triggers an error with the specified message

  • appendWorkflow(String pathToWorkflowFile) - Appends the operations from the workflow file pathToWorkflowFile to the end of the current workflow.

  • appendWorkflowXml(String workflowXml) - Appends the operations from workflow XML workflowXml to the end of the current workflow. The workflowXml should contain the entire content of the workflow file.

  • insertWorkflow(String pathToWorkflowFile) - Inserts the operations from the workflow file pathToWorkflowFile after the current Script operation.

  • insertWorkflowXml(String workflowXml) - Inserts the operations from workflow XML workflowXml after the current Script operation. The workflowXml should contain the entire content of the workflow file.

  • goToOperation(int id) - Jumps to operation with specified id after the Script operation completes. To jump to the first operation, specify an id value of 1.

  • goToNthOperationOfType(int n, String type) - Jumps to nth operation of the specified type from the workflow after the Script operation completes.

  • goToOperationWithNoteExact(String text) - Jumps to the first operation in the workflow for which the note equals the specified text.

  • goToOperationWithNoteContaining(String text) - Jumps to the first operation in the workflow for which the note contains the specified text.

  • goToOperationWithNoteStartingWith(String text) - Jumps to the first operation in the workflow for which the note starts with the specified text.

  • getOperations() - Returns all operations.

  • getOperationsWithWarnings() - Returns all operations with warnings.

  • getOperationsWithErrors() - Returns all operations with errors.

  • getOperationsWithExecutionState(ExecutionState executionState) - Returns all operations for which the execution state equals the specified execution state.

  • getOperation(int id) - Returns the operation with the specified id.

  • getOperationWithNoteExact(String text) - Returns the first operation in the workflow for which the note equals the specified text.

  • getOperationWithNoteContaining(String text) - Returns the first operation in the workflow for which the note contains the specified text.

  • getOperationWithNoteStartingWith(String text) - Returns the first operation in the workflow for which the note starts with the specified text.

  • getCurrentOperationId() - Returns the id of the current Script operation.

  • getOperationsCount() - Returns the id of the last operation in the workflow.

  • clearStickyParameters() - Remove all sticky parameters set in the user profile.

  • setProgress(double percentageComplete) - Set the operation progress. This is displayed in the user interface and used for the ETA calculation. Specify values between 0.0 and 1.0.

  • setTaskName(String taskName) - Sets the name of the task that the script is working on. This is displayed in the user interface.

Example of a script that restarts execution twice and then jumps to the last operation in the workflow:

count = parameters.getLong("{execution_count}",0)
count=count+1
parameters.putLong("{execution_count}",count)

if (count<3):
        workflowExecution.goToOperation(1)
else:
        workflowExecution.goToOperation(workflowExecution.getOperationsCount())
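
As a further illustration, a minimal sketch (hypothetical task names) of a long-running script reporting its progress with setTaskName and setProgress:

# Hypothetical list of entries processed by the script
entries = ["Custodian A", "Custodian B", "Custodian C"]
total = len(entries)

for index, entry in enumerate(entries):
    workflowExecution.setTaskName("Processing "+entry)
    # ... long-running work for this entry would go here ...
    workflowExecution.setProgress(float(index+1)/total)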
Operations

Information about an operation can be obtained from the Script operation using the following methods from the operation helper object:

  • getId() - Returns the operation id.

  • getExecutionState() - Returns the operation execution state.

  • getName() - Returns the operation name.

  • getNotes() - Returns the operation notes.

  • getErrorMessage() - Returns the operation error message. If the operation does not have an error this value will be null or blank.

  • getWarningMessages() - Returns the list of warnings for the operation. If the operation does not have any warnings this will be an empty list.

  • getStartDateTime() - Returns the start date of the operation as a Joda DateTime.

  • getFinishedDateTime() - Returns the finished date of the operation as a Joda DateTime.

  • getSkippable(Boolean skippable) - Returns true if the operation is skippable.

  • getDisabled() - Returns true if the operation is disabled.

  • setDisabled(Boolean disabled) - Sets the disabled state of the operation.

  • getSoftFail() - Returns true if the operation is set to soft fail on error.

  • setSoftFail(Boolean softFail) - Set the soft fail state of the operation.

  • getEta() - Returns the operation ETA as a Joda DateTime.

  • getPercentageComplete() - Returns the operation progress as a percentage.

Example script that prints the details of the last operation with an error:

operations_with_errors = workflowExecution.getOperationsWithErrors()

if operations_with_errors.size() >= 1:
	last_error_operation = operations_with_errors[-1]
	print "Last operation with error #{0} {1}: {2}".format(last_error_operation.getId(), last_error_operation.getName(), last_error_operation.getErrorMessage())
else:
	print "No operations encountered errors"
Data Sets Metadata

Information about the data sets selected when submitting the Job is stored in the dataSetsMetadata helper object. This object is a dictionary, with the key being the Data Set ID, and the values being a dictionary with the properties of the Data Set.
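
A minimal sketch that prints the properties of each selected Data Set, assuming the helper can be iterated like a Python dictionary; the property names depend on the Data Set definition:

# Print the ID and properties of each Data Set selected when the Job was submitted
for data_set_id in dataSetsMetadata:
    print "Data Set ID: "+str(data_set_id)
    properties = dataSetsMetadata[data_set_id]
    for property_name in properties:
        print "  "+str(property_name)+": "+str(properties[property_name])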

Call APIs

The Script operation exposes several helper objects that can be used to make calls to Rampiva and third-party APIs. These helper objects are:

  • restRampiva - Make calls to the Rampiva Automate API.

  • restDiscover - Make calls to the Nuix Discover API.

  • restRelativity - Make calls to the Relativity REST API.

  • rest - Make calls to generic REST APIs.

The response from API calls has the following methods and fields:

  • status_code - An integer representing the status code

  • text - The text response

  • json() - An object after parsing the response as JSON

  • raise_for_status() - Raise an exception if the status code is 4xx or 5xx

  • headers - A dictionary with the response headers

When making calls to an API over HTTPS, the call will fail if the HTTPS certificate is not trusted by the Java keystore. To explicitly allow connections to servers with a specific SHA-256 certificate fingerprint, use the following method:

  • setFingerprint(String fingerprint)

Call Rampiva API

To make calls to the Rampiva Automate API from the Script operation, use the restRampiva helper object.

The base URL of the Rampiva Automate instance and the authentication API key are set automatically from the Job under which the Script operation is running. However, these settings can be overwritten with the following methods:

  • setBaseUrl(String baseUrl)

  • setBearerToken(String bearerToken)

The following methods can be used to call an API endpoint:

  • get(String endpoint)

  • delete(String endpoint)

  • post(String endpoint, Object data)

  • put(String endpoint, Object data)

Example Python script that creates a new client:

body = {
  "name": "Sample Client Name",
  "description": "This client was created from the API",
  "enabled": False
}

response = restRampiva.post("/api/v1/scheduler/client", body);

print response.json();
Call Discover API

To make calls to the Nuix Discover API from the Script operation, use the restDiscover helper object.

The base URL of the Nuix Discover API and the authentication API key are set automatically from the Use Discover Case operation. However, these settings can be overwritten with the following methods:

  • setBaseUrl(String baseUrl)

  • setBearerToken(String bearerToken)

The following methods can be used to call an API endpoint:

  • call(String query)

  • call(String query, Map<String,Object> variables)

Example Python script that runs a GraphQL query for the users with the first name John:

body = '''
query MyQuery ($fn: String){
  users(firstName: $fn) {
    id,
    fullName
  }
}
'''
variables = {"fn":"John"}

response = restDiscover.call(body,variables);
print response.json();
Call Relativity API

To make calls to the Relativity REST API from the Script operation, use the restRelativity helper object.

The URL of the Relativity server and the authentication headers are set automatically from the Configure Relativity Connection operation. However, these settings can be overwritten with the following methods:

  • setBaseUrl(String baseUrl)

  • setBearerToken(String bearerToken)

  • setBasicAuth(String username, String password)

The following methods can be used to call an API endpoint:

  • get(String endpoint)

  • delete(String endpoint)

  • post(String endpoint, Object data)

  • put(String endpoint, Object data)

Example Python script that queries the Relativity Object Manager for workspaces with a specific name and prints the Artifact ID:

workspaceName = "Relativity Starter Template"

body = {
    "request": {
        "Condition": "'Name' == '"+workspaceName+"'",
        "ObjectType": {
            "ArtifactTypeID": 8
        },
        "Fields": [{
                "Name": "Name"
            }
        ]
    },
    "start":0,
    "length":1000
}

response = restRelativity.post("/Relativity.Rest/api/Relativity.ObjectManager/v1/workspace/-1/object/query",body)
response.raise_for_status()

print("Response count: "+str(int(response.json()["TotalCount"])))
for responseObject in response.json()["Objects"]:
    print "ArtifactID: "+str(int(responseObject["ArtifactID"]))
    for fieldValue in responseObject["FieldValues"]:
        print(fieldValue["Field"]["Name"]+": "+fieldValue["Value"])
Call Generic API

To make calls to a generic API from the Script operation, use the rest helper object.

The base URL can be optionally set using the following method:

  • setBaseUrl(String baseUrl)

The authentication can be optionally set using the following methods:

  • setBearerToken(String bearerToken)

  • setBasicAuth(String username, String password)

Custom headers can be optionally set using the following method:

  • setCustomHeader(String name, String value)

The following methods can be used to call an API endpoint:

  • get(String endpoint)

  • delete(String endpoint)

  • post(String endpoint, Object data)

  • put(String endpoint, Object data)

Example Python script that queries a REST API:

response = rest.get("https://dummy.restapiexample.com/api/v1/employees");
print response.json();

3.2.121. PowerShell

This operation will run the specified PowerShell script.

Getting Parameters Values

When running a PowerShell script from the specified code, the Rampiva parameters used in the code will be evaluated before running the code. The evaluation of Rampiva parameters is not performed when running a PowerShell script file.

For example, the following PowerShell script code:

Write-Host "The time is: {date_time}"

will produce the following output:

Running PowerShell code
The time is: 20221006-132923
PowerShell exited with code 0
Setting Parameters Values

To set Rampiva parameter values from a PowerShell script, the value of the parameter has to be written to a file in a specific location. This mechanism is required because the PowerShell script does not run in the same context as the Rampiva workflow.

To set a parameter with the name {sample_parameter_name}, the PowerShell script should write the value of the parameter to a file named sample_parameter_name with no extension, in the folder {powershell_parameters}, for example:

Set-Content -NoNewline -Path {powershell_parameters}\sample_parameter_name -Value $SampleValue
The parameter {powershell_parameters} will be automatically assigned to a temporary path when running the PowerShell operation, and does not need to be defined elsewhere. To use this mechanism in a PowerShell script, pass the value of this parameter as an argument to the script.

For example, to get the current date and time in PowerShell and set to a Rampiva parameter, use the following PowerShell code:

$CurrentDate = Get-Date
Set-Content -NoNewline -Path {powershell_parameters}\date_from_powershell -Value $CurrentDate

3.2.122. Switch License

This operation releases the license used by the Nuix Engine when running a job in Rampiva Scheduler, and optionally acquires a different license depending on the license source option:

  • None: Does not acquire a Nuix license and runs the remaining operations in the workflow without access to the Nuix case.

  • NMS: Acquires a Nuix license from the NMS server specified.

  • CLS: Acquires a Nuix license from the Nuix Cloud License server.

  • Dongle: Acquires a Nuix license from a USB Dongle connected to the Engine Server.

  • Engine Default: Acquires a Nuix license from the default source from which the Engine acquired the original Nuix license when the job was started.

When specifying a Filter, the text provided will be compared against the available Nuix license name and description.

When specifying a Workers count of -1, the default number of workers that the Engine originally used will be selected.

This operation is not supported for workflows executed in Rampiva Workflow.

3.2.123. Placeholder

This operation does not perform any action. It can be used to separate a group of operations or to facilitate jumping to a specific location in the workflow.

3.2.124. Close Case

This operation closes the current Nuix Case used in Rampiva Workflow. If the Nuix Workstation case is being used, it will be closed.

If the Close Execution Log option is selected, the Execution Log stored in the case folder Stores\Workflow will be closed and no further updates will be made to the log file unless the case is re-opened.

3.2.125. Close Nuix

This operation closes the Nuix Workstation window, releasing the licence acquired by this instance.

If the Close Rampiva Workflow option is selected, the Rampiva Workflow window showing the workflow progress and execution log is also closed.

To release the memory acquired by the Nuix Workstation instance, the Rampiva Workflow window needs to be closed.

4. Workflow Execution

Workflow Execution can be initiated either from Nuix using the Scripts → Rampiva → Workflow Execution menu or from the Workflow Designer, using the File → Workflow Execution menu. Jobs can also be queued from Rampiva Scheduler.

After starting the execution of a workflow, a log of the execution is presented in the Execution Log pane.

The workflow execution can be interrupted with the Pause and Stop buttons. Although some operations support pausing and stopping mid-execution, it should be expected that the workflow will pause or stop after the current operation completes.

The workflow execution can be aborted with the Edit → Abort Execution menu item.

Abort Execution will terminate the currently running operation and might corrupt the Nuix case.

If the currently running operation was marked as skippable during the workflow design, then the operation execution can be skipped using the Edit → Skip Operation menu item.

A copy of the workflow file and the execution log will be stored in the case folder, under Stores\Workflow.

4.1. Email Notifications

Rampiva Workflow can send automatic email notifications when a workflow is executing, even if no Notify Operations are explicitly added to the workflow. This feature can be configured from the Options menu.

The following notification frequencies are available:

  • Disabled: No automatic workflow execution email notifications will be sent.

  • On workflow start/complete: A notification will be sent for workflow start, pause, stop, finish, and error events.

  • On operation complete: A notification will be sent for workflow events and when each operation completes.

To avoid sending multiple emails within a short time period, use the Buffer emails for option. This will have the effect of waiting the predefined time before sending a notification email, unless the workflow execution has finished, in which case the email is sent right away.