
Disclaimer

The material in this document is for informational purposes only. The products it describes are subject to change without prior notice, due to the manufacturer’s continuous development program. Rampiva makes no representations or warranties with respect to this document or with respect to the products described herein. Rampiva shall not be liable for any damages, losses, costs or expenses, direct, indirect or incidental, consequential or special, arising out of, or related to the use of this material or the products described herein.

© Rampiva Technology Inc. 2021 All Rights Reserved

Introduction

This guide describes the features and options of Rampiva Scheduler, a product which is part of the Rampiva Automate suite. This document is organized as a reference; use the table of contents to find the topic of interest.

To report any issues with this guide or with the Rampiva software, please contact support@rampiva.com.

Styles Used in This Guide

Note: This icon indicates that additional clarifications are provided, for example what the valid options are.
Tip: This icon lets you know that a useful tidbit is provided, such as how to achieve a certain behavior.
Warning: This icon highlights information that may help you avoid an undesired behavior.

Emphasized: This style indicates the name of a menu, option or link.

code: This style indicates code that should be used verbatim, and can refer to file paths, parameter names or Nuix search queries.

User Interface Patterns

In addition to standard Web user interface patterns, Automate makes use of the following patterns:

Optional Value

The field border is grey when the field is empty.


Invalid Value

The field border is red.


Perform Action

When viewing the details of an item, such as a Processing Job or a Client, a list of available actions is displayed by clicking on the dropdown button at the right of the item name.

Add to Selection

A list of available options is displayed in the left pane. Items can be highlighted and added to the selection using the right arrow > button. Items can be removed from the selection with the left arrow < button.

To search through a list of items in a dropdown, press any printable character key to activate the search bar. Clearing the search text, closing the dropdown, or selecting an item will cancel the search.

1. Logging In

Scheduler can be configured to allow logging in with a Nuix account, by providing a username and password, or with a Microsoft account, by using the Sign in with Microsoft button.

If the Sign in with Microsoft button is not visible, contact your administrator to have this option enabled.

After a period of inactivity (by default 15 minutes), a warning will be displayed and the user will be logged out if no action is performed.

2. Processing

The Processing view is used to monitor the Jobs queue, manage Schedules and view Archived jobs.

2.1. Jobs

The Processing Jobs view is the default screen displayed after login. It can be accessed using the Processing Jobs menu in the top navigation bar, as well as by clicking on the Rampiva Automate logo.

2.1.1. Submitting a Job

To submit a Processing Job, click on the Add Job + button at the top-left of the Processing Jobs Queue view. A submission has 4 steps:

  1. Select the Client and the Matter for which the Processing Job is submitted, or select Unassigned if the Processing Job is not submitted for a specific project;

  2. Select the Library and the Workflow to run for this Processing Job or select Workflow File to run a workflow from a file;

Note: The Unassigned Client/Matter option and the Workflow File Library/Workflow option are only visible if the user has the appropriate permissions (see Security Policies).

  3. Fill in the Processing Job settings:

    • Select an Execution Profile from the dropdown;

    • Select a Resource Pool from the dropdown;

    • Adjust the Processing Job Priority as needed;

    • Adjust the Processing Job Name as needed;

    • Fill in the Processing Job Notes. This section can be used for documentation purposes and to inform other users about the Processing Job settings.

    • Fill in the Processing Job Parameters or load their values from a tab-separated value (TSV) file using the … button (see the sample TSV format after this list).

Note: To set the priority value Highest, the user must have the modify permission on the Resource Pool to which the Processing Job is assigned.

  4. Review and confirm the details of the submission.
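
As an illustration, a Parameters TSV file might look as follows (an assumed layout with one parameter per line, the parameter name and its value separated by a tab; the parameter names shown are hypothetical):

export_path	C:\Exports\Case001
case_name	Case 001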

2.1.2. Data Sets

A Processing Job can process data from a Data Set if the selected Workflow uses Data Set Parameters. These are special Parameters whose names end in _dataset, for example, {source_dataset}.

When submitting a Processing Job with a Data Set Parameter, the user will be prompted to select a Data Set from the list of Data Sets belonging to the Matter for which the Processing Job is queued. At this stage, only Data Sets in the Finalized state are presented to the user.

2.1.3. Execution Order

There are several factors which come into play when determining the order in which Processing Jobs will run.

If two Processing Jobs are assigned to the same Resource Pool and there are no active locks, the Processing Job with the highest Priority will start first. If the Processing Jobs have the same priority, the one that was added to the Backlog first will run first.

If two Processing Jobs are assigned to different Resource Pools, the Processing Job from the Resource Pool which has available Engines and which can acquire a Nuix license will run first.
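
As a minimal sketch of the ordering rule above (a Python illustration under assumed names, not Automate's actual implementation):

from dataclasses import dataclass
from datetime import datetime

@dataclass
class QueuedJob:
    name: str
    priority: int        # higher value = higher priority
    submitted: datetime  # time the job was added to the Backlog

def start_order(backlog):
    # Highest Priority first; ties are broken by earliest submission.
    return sorted(backlog, key=lambda job: (-job.priority, job.submitted))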

2.1.4. Job Locks

By default, Processing Jobs in Automate can run in parallel. If it’s required to have certain Processing Jobs run sequentially, locks can be used. These can be set by using the Synchronized Processing Jobs option or by using Lock Parameters.

Synchronized Jobs

When the Synchronized Processing Jobs option in the Matter settings is checked, Processing Jobs assigned to that matter will run one at a time.

Tip: If multiple Processing Jobs are assigned to a Matter with the Synchronized Processing Jobs option checked and the order in which the jobs run is important, assign them to the same Resource Pool. Otherwise, the order in which the Processing Jobs start is not guaranteed and depends on the Nuix licenses and Engines available under the respective Resource Pools.

Lock Parameters

Lock Parameters are special Parameters which can be defined in the Workflow to ensure that two Processing Jobs don’t run at the same time, regardless of the Matters to which the Processing Jobs are assigned. The names of Lock Parameters end with _lock, for example, {project_lock}.

When using Lock Parameters, Processing Jobs are guaranteed to run sequentially only if they have a Lock Parameter with the same name and the same value.
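
For example, two Processing Jobs that both define {project_lock} with the value ProjectA will never run at the same time, even if they are assigned to different Matters, while a third Processing Job with {project_lock} set to ProjectB can run in parallel with either of them.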

2.1.5. Job Execution States

Processing Jobs can be in one of the following states:

  • Not Started: The Processing Job was submitted - Backlog lane;

  • Running: The Processing Job is currently running - Running lane;

  • Pausing: The Processing Job will pause after the current operation completes - Running lane;

  • Paused: The Processing Job ran and was paused - Backlog lane;

  • Stopping: The Processing Job will stop during the current operation or after the current operation completes - Running lane;

  • Stopped: The Processing Job ran and was stopped - Finished lane;

  • Finished: The Processing Job ran and completed successfully - Finished lane;

  • Finished, with warnings: The Processing Job ran and completed with warnings - Finished lane;

  • Error: The Processing Job ran and encountered an error - Finished lane;

  • Cancelled: The Processing Job was cancelled before running - Finished lane.

2.1.6. Job Lanes

In the Processing Jobs view, queued, running and finished jobs are displayed under different lanes:

  • Backlog: These are jobs that have been queued for execution and will run when resources are available and there are no warnings preventing the Processing Job from running.

  • Running: These jobs are currently running.

  • Finished: These jobs finished running or were cancelled.

Jobs that have been archived are displayed in the Processing Jobs Archive view (see Processing Jobs Archive).

The order in which jobs are displayed in the lanes can be changed from the User Settings (see Processing Job Sort Order).

2.1.7. Job Card

For each job, a Processing Job Card is displayed in the corresponding Processing Job Lane. The information displayed in the Processing Job Cards can be customized from the User Settings (see Processing Job Card).

2.1.8. Job Panel

To see the details of a job, click on the Processing Job Card to open the Processing Job Panel.

The Processing Job Panel contains the following sections:

  • Header: The left side contains the job name and the job action dropdown. The right side contains the job status, the job completion percentage and the job status icon;

  • Processing Job Settings: A table view of the job settings;

  • Notes: The notes supplied by the user when the job was submitted;

  • Parameters: The parameters along with the values supplied by the user when the job was submitted;

  • Workflow: The list of operations that are part of the workflow selected when the job was submitted;

  • Execution Log: The log generated by the job execution (this section is not visible for jobs that have not started);

  • Change Log: The job audit log indicating the job submission, execution, and change events, as well as the time when these events occurred, who performed them and, where applicable, additional details, such as the changes that were made to the job settings.

2.1.9. Job Actions

To perform an action on a job, open the Processing Job Panel by clicking on the corresponding Processing Job Card, and then click on the dropdown button at the right of the Processing Job name.

The following actions can be performed on jobs, depending on the lane the Processing Job is in and the user permissions:

  • Duplicate: Initiates the submission of a job with the same settings as the selected job;

  • Download Logs: Download a zipped copy of the job logs. To download logs of a job, centralized logging must be enabled and you need the permissions to download job logs (see Download Logs of a Processing Job). The zipped copy of job logs contains the following files:

    • Engine Logs

    • Worker Logs

    • Workflow File

    • Processing Job Changelog

    • Execution Log

    • Workflow Parameters

  • Print: Print the job panel, for example, to a PDF file;

  • Cancel Execution: Cancel and move the job to the Finished lane with an error status;

  • Skip Operation: Stop the execution of the current operation and continue the Processing Job. This option is only available if the currently running operation was configured as skippable during the workflow design.

  • Pause: Puts the job in a pausing state and unassigns the Resource Pool. After the currently running operation finishes, the job will be placed in the paused state and will be moved to the Backlog lane. Once paused, the Nuix case is closed and the Nuix license is released. The job will not resume execution unless it is re-assigned to a Resource Pool.

  • Stop: Sends a stop command to the current operation and puts the job in a stopping state. If the operation supports stopping, execution is stopped mid-way. Otherwise, the execution is stopped after the operation completes. Once stopped, the Nuix case is closed and the Nuix license is released.

  • Abort: Attempts to first stop the job gracefully for 5 seconds, and if not possible, aborts the job execution by forcibly closing the running processes.

  • Archive: Archives the job and moves it to the Archive lane.

Warning: Aborting a job leaves the Nuix case in a corrupted state and should only be used as a last resort if a job is non-responsive.
Table 1. Actions available in each Processing Job Lane

Action                      Backlog   Running   Finished
Duplicate                   X         X         X
Print                       X         X         X
Download Logs               X         X         X
Cancel Execution            X
Pause                                 X
Stop                                  X
Abort                                 X
Archive                                         X
Exclude / Include Metrics             X         X

2.2. Schedules

The Processing Jobs Schedule view can be accessed using the Processing Jobs > Schedule menu in the top navigation bar. It can be used to manage the Schedules which automatically add Processing Jobs for execution, either at specified time intervals or when a specific event occurs for another Processing Job.

Note: The Processing Jobs Schedule feature requires an Enterprise-class license.

2.2.1. Create a Schedule

To create a Schedule, click on the Create Schedule + button at the top-left of the Processing Jobs Schedules view and provide the following information:

  1. Schedule settings:

    • Name: A user-defined name to assign to the Schedule. The Processing Jobs submitted by the Schedule will have the same name.

    • Active: The state of the Schedule. An inactive Schedule will not queue any new Processing Jobs.

    • Description: A user-defined description (optional). The Processing Jobs submitted by the Schedule will have the same description.

    • Conditions: Additional optional conditions that must be met for the Schedule to submit new Processing Jobs:

      • Commence after: Schedule will only add Processing Jobs after this date.

      • Expire after: Schedule will not queue any new Processing Jobs after this date.

      • Skip if X Processing Jobs from this schedule are running: Schedule will not queue new Processing Jobs if there already are X Processing Jobs running which were submitted by this Schedule. After the number of running Processing Jobs drops below X, the Schedule becomes once again eligible to add Processing Jobs.

      • Skip if X Processing Jobs from this schedule are queued: Schedule will not queue new Processing Jobs if there already are X Processing Jobs queued which were submitted by this Schedule. After the number of queued Processing Jobs drops below X, the Schedule becomes once again eligible to add Processing Jobs.

  2. Triggers

    • On a timer: Processing Jobs will be queued at the predefined time interval.

    • On an event: A Processing Job will be queued when any of the specified Processing Job Events occurs and when all of the specified conditions are met for the Processing Job Event in question.

For example, to automatically retry failed jobs that were submitted with a High or Highest priority, the Schedule Processing Job Events would contain the event Processing Job Error, and the Event Conditions would have the Submission Mechanism set to Regular Processing Job, and the Priorities set to Highest and High.
Warning: When using a Schedule that triggers on an event, it’s recommended to set the Submission Mechanism condition to Regular Processing Job only. Otherwise, it’s possible to create a loop of events, where the Processing Job queued by the Schedule will in turn trigger the Schedule again.

  3. Client / Matter

    • The Client and the Matter for which the Schedule will submit the Processing Job.

Tip: When using the trigger On an event, it’s possible to select the Client and Matter Same as Triggering Processing Job. This will have the effect of queueing the new Processing Job for the same Matter as the original Processing Job which triggered the Schedule.

  4. Library / Workflow

    • The Library and the Workflow that the scheduled Processing Job will run.

Tip: When using the trigger On an event, it’s possible to select the Library / Workflow Same as Triggering Processing Job. This will have the effect of queueing a new Processing Job with the same Workflow as the original Processing Job which triggered the Schedule. In this case, the Processing Job parameters will also be copied from the Triggering Processing Job and cannot be set explicitly in the Schedule.

  5. Processing Job Settings

    • Execution Profile: The Execution Profile of the queued Processing Job, or Unassigned;

    • Resource Pool: The Resource Pool of the queued Processing Job, or Unassigned;

    • Priority: The Priority of the queued Processing Job;

    • Parameters: The parameters of the queued Processing Job;

Note: When using the Library / Workflow Same as Triggering Processing Job, the Parameters cannot be explicitly set and instead will take the same values as the Triggering Processing Job. The Execution Profile, the Resource Pool and the Priority can either be explicitly defined, or can be set to Same as Triggering Processing Job.

  6. Review and confirm the details of the submission.

To edit, delete, deactivate or activate a Schedule, select the Schedule and then click on the dropdown button at the right of the Schedule name at the top of the Schedule panel.

2.3. Archive

The Processing Jobs Archive view can be accessed using the Processing Jobs > Archive menu in the top navigation bar. It displays jobs that have been archived, either manually with the Archive action, or automatically when the archive conditions are met.

By default, a Processing Job is automatically archived 2 weeks after it finished, or when there are more than 100 Processing Jobs in the Finished lane. These settings can be changed by modifying the Scheduler configuration file (see the Automate Installation Guide for details).

3. Legal Hold

The Legal Hold view is used to access the Overview of outstanding Notices, manage Legal Hold Matters and search for Notices.

Note: The Legal Hold feature requires a Corporate-edition license or higher.

The Legal Holds Overview view can be accessed using the Legal Holds > Overview menu in the top navigation bar. The page displays a summary of the number of matters that the user is subject to or is managing as an administrator, as well as the cards for the notices which need to be actioned.

The Legal Holds Matters view can be accessed using the Legal Holds > Matters menu in the top navigation bar. It can be used to add, modify and delete Legal Holds.

To create a Legal Hold, click on the Add + Legal Hold button at the top-left of the Legal Hold Matters view. Creating a Legal Hold has 7 steps:

  1. Select the Client and the Matter.

  2. Configure the Hold and Release notices that will be used when issuing holds and releases to custodians.

  3. Optionally, configure Survey notices that are sent to the custodians when issuing holds, or when the Survey is added, if the Legal Hold is already active.

    • Optionally, provide a Respond by date using either a fixed date or a number of days after the sent date;

    • Optionally, enable Reminders with an interval in days and a Reminder Notice Template;

    • Optionally, enable Escalations with an Escalation Notice Template;

    • Optionally, disable Comments.

Note: Reminders and Escalations require a Respond by date.

  4. Submit the Legal Hold Settings:

    • Fill in the Name;

    • Optionally, fill in the Description. This section can be used for documentation purposes and to inform custodians about the Legal Hold;

    • If a Notice was configured with the option for Custodians to upload data, the Data Repository dropdown will be presented and a Data Repository will need to be selected (see Data Repositories);

    • Select an SMTP Server from the dropdown (see SMTP Servers);

    • Adjust the Scheduler URL if needed. This URL is used when sending notification emails to Custodians;

    • Fill in the Parameters or load their values from a tab-separated value (TSV) file using the … button.

  5. Select the Administrators of the Legal Hold.

  6. Select the Custodians of the Legal Hold.

Tip: To import a list of custodian emails, click the import button between the Available and Selected columns and select the file containing the emails.
Note: Only custodians in the Available column can be imported into the Selected column.

  7. Review and confirm the details.

To see the details of a legal hold, click on the Legal Hold Row to open the Legal Hold Panel.

The Legal Hold Panel contains the following sections:

  • Header: The left side contains the legal hold name and action dropdown. The right side contains the legal hold state and icon;

  • Settings: A table view of the settings;

  • Description: The description;

  • Parameters: The parameters along with the supplied values;

  • Notice Configurations: A table view of the notice configurations when in the Draft state;

  • Notices: A table view of all the notices for the legal hold;

  • Administrators: A table view of all the legal hold administrators;

  • Custodians: A table view of all the legal hold custodians. The following actions can be performed on the custodians:

    • To import and optionally issue holds to a list of custodians, click the import button at the top-left of the table view and select the file containing the email addresses;

    • To issue or re-issue a hold, select the custodians in the table view as needed and click the issue hold button at the top-right of the table view;

    • To release a custodian, select the custodians in the table view as needed and click the release hold button at the top-right of the table view.

Note: To issue holds or releases, the legal hold must be in the Active state.

  • Change Log: The legal hold audit log indicating change events, the time when these events occurred, who performed them, and, where applicable, additional details.

Legal Holds can be in one of the following states:

  • Draft: The Legal Hold is a draft. Administrators can log onto Scheduler and modify the Legal Hold;

  • Active: The Legal Hold is active. Notices are actively issued and custodians can log in to Automate and respond to issued notices;

  • Released: The Legal Hold is released. Custodians are released and can log in to Automate to view the responses provided in the notices;

  • Archived: The Legal Hold is archived. Custodians cannot log in to Automate anymore.

  • Deleted: The Legal Hold information is deleted.

To perform an action on a legal hold, open the Legal Hold Panel by clicking on the corresponding Legal Hold Row, and then click on the dropdown button at the right of the Legal Hold name.

The following actions can be performed on legal holds:

  • Edit: Modify the legal hold;

  • Export: Export selected legal hold notices;

  • Duplicate: Initiates the creation of a legal hold with the same settings as the selected legal hold;

  • Delete: Delete the legal hold;

  • Activate: Activate the legal hold and issue hold and survey notices to all custodians;

  • Release: Release the legal hold and issue release notices to all custodians;

  • Archive: Archive the legal hold.

The Legal Holds Notices view can be accessed using the Legal Holds > Notices menu in the top navigation bar. It displays a filtered list of user notices.

4. Collections

The Collections view can be accessed using the Collections link in the top navigation bar. It can be used to create, delete and view the status of Collections.

Collections integrate with Active Directory Domains to find users and computers, and with Nuix Enterprise Collection Center to collect data from the computers.

Note: The Collection feature requires a Corporate-edition license.

4.1. Submitting a Collection

To submit a Collection, click on the Create Collection + button at the top-left of the Collections view. A submission has 4 steps:

  1. Select the Client and the Matter for which the Collection is submitted;

  2. Collection settings:

    • Name: A user-defined name to assign to the Collection;

    • Description: A user-defined description (optional);

    • Collection Type: The type of Collection, for example ECC Collection.

    • Data Repository: The location where data sets and logs will be created.

Warning: When selecting the Data Repository, make sure that the computers that will be collected have access to this folder.

  3. ECC Collection Settings:

    • ECC Profile: The Enterprise Collection Center profile used for the collection (see ECC Profiles).

    • ECC Configuration: A configuration from the ECC deployment, which defines the collection settings such as the collection tasks, the file types and source location;

    • Log Strategy: The strategy used to determine where logs will go after a Collection finishes;

      • Delete once collection is complete will delete logs generated by the collection and logs stored by Scheduler;

      • Include logs near dataset folder will move the logs generated by the collection into the Log folder inside the data set directory;

      • Include logs in dataset folder will move the logs inside the data set Final folder and create file info for each log file.

    • ECC Collection Target Type: The type of target the collection will be performed on, Computers or Users;

    • Computers: The computers that will be a part of the collection.

Note: In order to submit a Collection, at least one configuration must be defined in Nuix Enterprise Collection Center.

  4. Review and confirm the details of the submission.

Collections can be in one of the following states:

  • Awaiting Resource: The Collection is waiting for another computer to finish;

  • Cancelled: The Collection was cancelled;

  • Cancelling: The Collection is cancelling;

  • Failed: The Collection ran and encountered an error;

  • Finished: The Collection ran and completed successfully;

  • Pausing: The Collection is pausing;

  • Paused: The Collection ran and was paused;

  • Picked Up: The Collection was acknowledged by ECC;

  • Ready: The Collection is ready to begin;

  • Resumed: The Collection resumed;

  • Resuming: The Collection is resuming;

  • Running: The Collection is running;

  • Running with Warnings: The Collection is running with warnings;

  • Suspending: The Collection will stop collecting files;

  • Suspended: The Collection has stopped collecting files;

  • Waiting: The Collection is waiting for ECC;

  • Pending: The Collection is preparing to start.

4.2. Collection Panel

To see the details of a collection, click on the Collection row to open the Collection Panel.

The Collection Panel contains the following sections:

  • Header: The left side contains the collection name and the collection action dropdown. The right side contains the collection status, the collection completion percentage and the collection status icon;

  • Collection Settings: A table view of the collection settings;

  • ECC Settings: A table view of the ECC collection settings;

  • Collection Status: The list of computers that are part of the collection, as selected when the collection was submitted;

  • Log: The log generated by a collection;

  • Change Log: The collection audit log indicating the collection submission, execution, and change events, as well as the time these events occurred, who performed them and, where applicable, additional details, such as the changes that were made to the collection settings.

4.3. Collection Actions

To perform an action on a collection, open the Collection Panel by clicking on the corresponding Collection row, and then click on the dropdown button at the right of the Collection name.

The following actions can be performed on collections, depending on the Collection execution state and the user permissions:

  • Duplicate: Prepares a collection with the same settings as the selected collection;

  • Download Logs: Download a zipped copy of the collection logs. The zipped copy of collection logs contains the following files:

    • Computer Logs

    • Collection Log

    • CSV of computers that ran in the collection

  • Stop: Sends a stop command to all computers and cancels the collection;

  • Delete: Deletes the collection and informs all computers to cancel the current tasks;

  • Archive All Datasets: Archives all datasets created by computers that have finished collecting;

  • Delete All Datasets: Deletes all datasets created by computers that have finished collecting;

Warning: Stopping or deleting a collection will only affect the tracking and actions performed by Automate, such as issuing new collection instructions or deploying agents. The collection tasks submitted to ECC will continue running.

5. Clients

The Clients view can be accessed using the Clients link in the top navigation bar. It can be used to create, modify and delete Clients and their Matters.

5.1. Clients

Clients are used to organize and track Processing Jobs, and can correspond to external clients, internal clients, departments, or teams.

A Client has a name, a description, and optionally a default Execution Profile and a default Resource Pool.

If a Client is assigned a default Execution Profile or a default Resource Pool value, these values will be automatically selected when submitting a Processing Job for the Client in question. The user still has the option to change these values during Processing Job submission.

When a Client is inactive it will not be visible in the Processing Job submission steps.

To add a new Client, use the Add Client + button at the top-left of the Clients view.

To edit, delete, deactivate or activate a Client, select the Client and then click on the dropdown button at the right of the Client name at the top of the Client panel.

5.2. Matters

Matters are created under Clients, and have a name, a description, and optionally a default Execution Profile and a default Resource Pool. Additionally, Matters can be configured with the Synchronized Processing Jobs option (see Synchronized Processing Jobs).

If a Matter is assigned a default Execution Profile or a default Resource Pool value, these values will be automatically selected when submitting a Processing Job for the Matter in question. The user still has the option to change these values during Processing Job submission.

To deactivate or activate a Matter, switch the toggle at the left of the Matter name in the Client panel.

When a Matter is inactive it will not be visible in the Processing Job submission steps.

To add a new Matter, use the Add + button at the top of the Client panel.

Additionally, to edit, delete, deactivate or activate a Matter, select the Matter and then click on the dropdown button at the right of the Matter name.

5.2.1. Data Sets

Data Sets are created under Matters, and are used to store data that is then used by Processing Jobs.

The location where the data is stored, as well as quotas and file extension restrictions, are defined by administrators in the Data Repositories.

To create a Data Set, select a Matter and click the Add + Data Set button in the Matter Pane. After a Data Set is created, its name, description and Data Repository cannot be changed.

To upload data, click on the upload button at the top-left of the files table, select the files to upload, and start the upload by clicking on the Upload button at the bottom-right of the pane.

Uploads can be paused, resumed and cancelled. If an upload is interrupted, for example because the browser was closed or crashed, re-uploading the files that did not complete will automatically resume from the offset that was last transmitted, provided the resume information is still available on the server.

A Data Set can be in one of the following states:

  • Draft: Files and metadata can be uploaded and modified. This is the default state a Data Set is in after creation.

  • Finalized: The contents of the Data Set are frozen. The Data Set can be used when queueing Processing Jobs.

  • Hidden: Hidden from the user when queueing new Processing Jobs.

  • Archived: Prevents new Processing Jobs from using the Data Set.

  • Expired: The Data Set files are deleted.

The Data Repository under which the Data Set is created can be configured to automatically transition the Data Set to the Hidden state after a Processing Job is submitted (to prevent accidentally using the Data Set more than once), to archive the Data Set after a Processing Job completes, and to trigger the later expiration of the Data Set after a predefined time.

Warning: When a Data Set expires, all of its files are deleted. This action cannot be reverted.

Data Sets Metadata

Each file in a Data Set can be associated with metadata values, such as custodian information and other labels.

To edit the file metadata, use the metadata edit button.

To upload file metadata in bulk, first download the existing file list and metadata using the metadata download button, modify the metadata file as needed, and then upload the file using the metadata upload button.

Required Metadata Headers

The Required Metadata Headers can be used to enforce the metadata values that the user must supply before a Data Set can be finalized. Required Metadata Header names, along with an optional regular expression that the values must satisfy, can be defined at the Client Pool, Client, and Matter level.

The resulting set of Required Metadata Headers is the combination of all of the requirements from the Matter, Client, and Client Pool that a Data Set is associated with. If a specific header is required in more than one place, the supplied value must satisfy all of the regular expressions provided.
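
As a minimal sketch of this combination rule (assuming full-match semantics for the regular expressions; the patterns and values below are hypothetical):

import re

# Patterns required for the same header at different levels,
# e.g. one from the Client and one from the Matter.
patterns = [r"[A-Z][a-z]+ [A-Z][a-z]+", r".{5,}"]

def satisfies_all(value, patterns):
    # The supplied value must satisfy every regular expression provided.
    return all(re.fullmatch(p, value) for p in patterns)

print(satisfies_all("Jane Smith", patterns))  # True
print(satisfies_all("Jane", patterns))        # False, fails both patterns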

Built-in Metadata Headers

By default, the system automatically populates the Name, Uploaded By, Size (bytes), Size (display) and Hash (MD5) metadata header values. These values cannot be overwritten by the user.

5.3. Client Pools

Clients can be further grouped into Client Pools. A Client can belong to one, multiple or no Client Pools (see Client Pools).

Client Pools can be used to group and assign permissions to the Clients managed by a specific team.

5.4. Allowed Parameter Values

Clients, Matters and Workflows can be configured to restrict the values of parameters that a user can submit when queuing a Processing Job. These can be defined in the Allowed Parameters Values section of Clients, Matters and Workflows.

When queuing a Processing Job, the queue settings page will show a dropdown for the parameters that have Allowed Parameter Values defined. If the Allowed Parameter Values are defined in more than one location (i.e. Clients, Matters and Workflows), the only values allowed are those that satisfy the requirements defined at each location as well as the workflow parameter regex.
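
For example, if a hypothetical parameter {export_type} is restricted to the values PDF and TIFF at the Client level and to PDF and Native at the Matter level, only PDF can be selected when queuing the Processing Job, and only if it also satisfies the workflow parameter regex.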

5.4.1. Scripted Allowed Parameter Values

In addition to manually defining the list of values that a certain parameter can take, a script can be used to return the list of allowed values.

To use a script, set the value to language:path, where:

  • language is one of js, python or ruby;

  • path is the location of the script file, which must be accessible by the Scheduler service.

Example parameter value for using a ruby script: ruby:C:\Scripts\param1.rb
Note: Configuring Clients, Matters or Workflows with Scripted Allowed Parameter Values requires a Security Policy with the Modify permission set on the Built-In scope Scripts.

Scripts have access to the following variables:

  • client_id

  • client_name

  • client_reference

  • matter_name

  • matter_id

  • matter_reference

  • library_name

  • library_id

  • workflow_name

  • workflow_id

  • user_name

  • parameter_name

Sample JavaScript script:

//// DEBUG
//print("Starting script to evaluate "+parameter_name+" - this will be logged to the Scheduler log")

var result = []

// Sample result
result.push("C:\\Temp\\JS")

// Sample result to confirm live execution
var today = new Date();
result.push("C:\\Temp\\"+today)

// Sample result containing client name
result.push("C:\\Client\\"+client_name)

//// DEBUG
//print (result)

// Return the array
result

Sample Python script:

import datetime

## DEBUG
#print("Starting script to evaluate "+parameter_name+" - this will be logged to the Scheduler log")

result=[]

# Sample result
result.append('C:\\Temp\\Python')

# Sample result to confirm live execution
result.append('C:\\Temp\\'+str(datetime.datetime.now()))

# Sample result containing client name
result.append("C:\\Client\\"+client_name)

## DEBUG
#print("returning: "+str(result))

# The resulting list is expected in the variable named "result"

Sample Ruby script:

## DEBUG
#puts("Starting script to evaluate "+parameter_name+" - this will be logged to the Scheduler log")

result = []

# Sample result
result << "C:\\Temp\\Ruby"

# Sample result to confirm live execution
result << "C:\\Temp\\"+Time.now.strftime("%d/%m/%Y %H:%M:%s")

# Sample result containing client name
result << "C:\\Client\\"+client_name

## DEBUG
#puts(result)

# Return the array
result

6. Libraries

The Libraries view can be accessed using the Libraries link in the top navigation bar. It can be used to create, modify and delete Libraries and their Workflow Templates.

6.1. Libraries

Libraries are used to organize Workflow Templates, and can correspond to the types of projects on which Processing Jobs are run.

A Library has a name and a description. When a Library is inactive, that Library will not be visible in the Processing Job submission steps.

To add a new Library, use the Add Library + button at the top-left of the Library view.

To edit, delete, deactivate or activate a Library, select the Library and then click on the dropdown button at the right of the Library name at the top of the Library panel.

6.2. Workflow Templates

Workflow Templates are created under Libraries, and have a name, a description, a list of parameters with default values, and a list of operations.

To deactivate or activate a Workflow Template, switch the toggle at the left of the Workflow Template name in the Libraries panel.

When a Workflow Template is inactive it will not be visible in the Processing Job submission steps.

To add a new Workflow Template, use the Add + button at the top of the Library panel and point to a Workflow file created with the Workflow Designer module.

Additionally, to edit, delete, deactivate or activate a Workflow Template, select the Workflow Template and then click on the dropdown button at the right of the Workflow Template name.

7. Settings

The Settings view can be accessed using the Settings link in the top navigation bar. It can be used to manage system settings, such as licenses, engines, security policies, as well as user settings related to the user interface.

7.1. Rampiva License

The Rampiva License settings tab is used to inspect and update the currently deployed Rampiva License. Rampiva licenses can use either a License ID and Key mechanism, which is validated against the Rampiva License Service, or an offline license file.

7.2. Authentication Services

The Authentication Services settings tab is used to define the services that can be used to authenticate users when logging on to Automate. The services can also be used to populate the list of users and computers used in Legal Hold and Collections.

7.2.1. LDAP Authentication Service

An LDAP Authentication Service is used to authenticate users against an LDAP directory service, such as Active Directory.

To add a new LDAP Authentication Service, use the Add + LDAP Authentication Service button and provide the following information:

  • Name: A user-defined name to assign to the LDAP Authentication Service.

  • Active: Whether to enable the service for use.

  • Description: A user-defined description (optional).

  • Domain DN: The DN to which users that can be authenticated must belong.

  • Host: The host name or IP address of the LDAP directory service.

  • Port: The port of the LDAP directory service, typically 389 for unsecured LDAP and 636 for secure LDAP.

  • Secure LDAP: Whether to use secure LDAP when connecting to the LDAP directory service.

  • Synchronize Objects: Whether to synchronize users and computers from the LDAP directory service (optional).

  • User Base DN: The DN from where to synchronize users.

  • User Search Scope: The LDAP search scope to use when performing the search to synchronize users.

  • Computer Base DN: The DN from where to synchronize computers.

  • Computer Search Scope: The LDAP search scope to use when performing the search to synchronize computers.

  • Synchronization Interval: The interval for periodically synchronizing users and computers with the above settings.

  • Service Account Name: The account used to perform the search to synchronize users and computers.

  • Service Account Password: The password for the account above.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the LDAP directory service certificate that should be trusted even if the certificate is self-signed (optional).

Note: Users that are not under the Domain DN will not be able to authenticate.
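
For example, for a domain named corp.example.com, the Domain DN would typically be DC=corp,DC=example,DC=com, and a User Base DN restricting synchronization to a single organizational unit might be OU=Staff,DC=corp,DC=example,DC=com.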

7.2.2. UMS Authentication Service

A UMS Authentication Service is used to authenticate users against a Nuix UMS.

To add a new UMS Authentication Service, use the Add + UMS Authentication Service button and provide the following information:

  • Name: A user-defined name to assign to the UMS Authentication Service.

  • Active: Whether to enable the service for use.

  • Description: A user-defined description (optional).

  • UMS URL: The URL of the Nuix UMS.

  • Synchronize Objects: Whether to synchronize users from the Nuix UMS (optional).

  • Synchronization Interval: The interval for periodically synchronizing users with the above settings.

  • Service Account Name: The account used to perform the search to synchronize users.

  • Service Account Password: The password for the account above.

7.3. Nuix License Sources

The Nuix License Sources settings tab is used to define the licenses to be used by the Nuix Engines managed by Automate.

Automate supports three types of Nuix License Sources.

7.3.1. Nuix Management Server

A Nuix Management Server (NMS) is the classical way to assign Nuix licenses in an environment with multiple Nuix servers or workstations.

To add a new NMS to the Automate configuration, use the Add + Nuix Management Server button and provide the following information:

  • Name: A user-defined name to assign to the NMS.

  • Description: A user-defined description (optional).

  • Filter: A text filter to select licenses of a certain type from the NMS (optional). If a filter value is provided, a license will be selected from the NMS only if the short name, full name or description of the license contains the text provided in the filter, for example, enterprise-workstation. The filter is case insensitive.

  • Server Name: The host name or the IP address of the NMS.

  • Server Port: The port on which the NMS is configured to listen, by default 27443.

  • Username: The username from the NMS under which Automate will acquire licenses.

  • Password: The password for the Username above.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the NMS certificate that should be trusted even if the certificate is self-signed (optional).

Warning: By default, NMS uses a self-signed certificate. In this situation, Automate is not able to validate the identity of the NMS and a Whitelisted Certificate Fingerprint must be provided; otherwise, Engines will not be able to acquire licenses from this NMS.
Tip: Automate will list the certificate fingerprint if the name listed in the certificate matches the server name. Alternatively, an incorrect certificate fingerprint can be provided temporarily, for example 0000, to have Automate disable the name validation and provide the detected certificate fingerprint value in the error message.
Tip: The following PowerShell code can be used to get the SHA-256 certificate fingerprint of a server, where 127.0.0.1 is the IP address of the NMS:
$ServerName = "127.0.0.1"
$Port = 27443

$Certificate = $null
$TcpClient = New-Object -TypeName System.Net.Sockets.TcpClient
try {

    $TcpClient.Connect($ServerName, $Port)
    $TcpStream = $TcpClient.GetStream()

    $Callback = { param($sender, $cert, $chain, $errors) return $true }

    $SslStream = New-Object -TypeName System.Net.Security.SslStream -ArgumentList @($TcpStream, $true, $Callback)
    try {

        $SslStream.AuthenticateAsClient('')
        $Certificate = $SslStream.RemoteCertificate

    } finally {
        $SslStream.Dispose()
    }

} finally {
    $TcpClient.Dispose()
}

if ($Certificate) {
    if ($Certificate -isnot [System.Security.Cryptography.X509Certificates.X509Certificate2]) {
        $Certificate = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList $Certificate
    }
    Write-Output $Certificate.GetCertHashString("SHA-256")
}
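
Tip: Compare the fingerprint printed by this script against the value entered in the Whitelisted Certificate Fingerprints field.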

7.3.2. Cloud License Server

A Cloud License Server (CLS) is a cloud service managed by Nuix that can be used to acquire licenses.

To add a new CLS to the Automate configuration, use the Add + Cloud License Server button and provide the following information:

  • Name: A user-defined name to assign to the CLS.

  • Description: A user-defined description (optional).

  • Filter: A text filter to select licenses of a certain type from the CLS (optional). If a filter value is provided, a license will be selected from the CLS only if the short name, full name or description of the license contains the text provided in the filter, for example, enterprise-workstation. The filter is case insensitive.

  • Username: The username for the CLS account under which Automate will acquire licenses.

  • Password: The password for the Username above.

7.3.3. Nuix Dongle

A Nuix Dongle is a physical USB device that stores Nuix Licenses and is typically used when using Nuix Workstation or the Nuix Engine on a single server or workstation.

To add a new Nuix Dongle to the Automate configuration, use the Add + Nuix Dongle button and provide the following information:

  • Name: A user-defined name to assign to the dongle.

  • Description: A user-defined description (optional).

  • Filter: A text filter to select licenses of a certain type from the Nuix Dongle (optional).

Note: The Nuix Dongle must be connected to the server hosting the Engine that is using it.

7.4. Engine Servers

The Engine Servers settings tab can be used to define the servers that will host Engines.

Prior to adding an Engine Server to the Automate configuration, the Automate Engine Server component must be deployed and configured on the server in question. See the Automate Installation Guide for details on how to install and configure Engine Servers.

To add a new Engine Server to the Automate configuration, use the Add + Engine Server button and provide the following information:

  • Name: A user-defined name to assign to the Engine Server.

  • URL: The URL that can be used to reach the server, for example https://localhost:444

  • Description: A user-defined description (optional).

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the Engine Server certificate that should be trusted even if the certificate is self-signed (optional).

Warning: By default, the Engine Server uses a self-signed certificate. In this situation, Automate is not able to validate the identity of the Engine Server and a Whitelisted Certificate Fingerprint must be provided.
Tip: Automate will list the certificate fingerprint if the name listed in the certificate matches the server name. Alternatively, an incorrect certificate fingerprint can be provided temporarily, for example 0000, to have Automate disable the name validation and provide the detected certificate fingerprint value in the error message.

7.5. Engines

The Engines settings tab can be used to define the Engine instances that will run Processing Jobs. An Engine can only run one Automate Processing Job at a time. To run multiple Processing Jobs at the same time, create multiple Engines on one or more Engine Servers, based on the available hardware resources.

To add a new Engine to the Automate configuration, use the Add + Engine button and provide the following information:

  • Name: A user-defined name to assign to the Engine.

  • Server: The Engine Server on which this Engine will run.

  • Nuix License Source: The source from which this Engine will acquire licenses.

  • Priority: The priority of this Engine in Resource Pools. When a Processing Job starts, it is assigned to the first available (i.e. not running another Processing Job) Engine with the highest priority from that Resource Pool.

  • Target Workers: The number of Nuix workers to attempt to acquire a license for, if available.

  • Min Workers: The minimum number of Nuix workers to acquire a license for. If the number of available workers in the Nuix License Source is lower than this value, the Engine will not initialize and will remain in an error state until they become available.
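
For example, with Target Workers set to 8 and Min Workers set to 4, the Engine will attempt to license 8 workers, will still initialize if only 4 to 7 are available, and will remain in an error state if fewer than 4 are available.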

7.6. Resource Pools

The Resource Pools settings tab can be used to group Engines. Processing Jobs are assigned to Resource Pools and run on the first available Engine with the highest priority from the Resource Pool.

Automate supports Resource Pools which are local or cloud-based (AWS and Azure).

7.6.1. Local Resource Pool

A Local Resource Pool groups Engines which are manually managed and typically run on local servers.

Additionally, Remote Engines can be configured to join Processing Jobs running in the Resource Pool, allowing for load distribution of a single Processing Job amongst multiple Engines.

Remote Engines only get initialized when a Processing Job is running an operation which requires workers, for example Add Evidence, OCR or Legal Export. After the operation requiring workers is complete, the Remote Engines are spun down, and are available to join other Processing Jobs running in the same Resource Pool.

The Remote Engines feature uses the Nuix Worker Broker and Agent mechanism. A Worker Broker is set up for each main Engine running a Processing Job, using the default IP address of that server. In the event that the Engine Server has multiple network interfaces, the IP address and port range to use for the Worker Brokers can be specified in the Engine Server configuration file (see the Automate Installation Guide for details).

To add a new Local Resource Pool to the Automate configuration, use the Add + Local Resource Pool button and provide the following information:

  • Name: A user-defined name to assign to the Resource Pool.

  • Active: The state of the Resource Pool. An inactive Local Resource Pool will not start any new Processing Jobs.

  • Description: A user-defined description (optional).

  • Engines: The list of Engines which are part of the Resource Pool and which will run Processing Jobs.

  • Remote Workers: The list of Engines which will join Processing Jobs as Remote Workers.

Note: An Engine can be part of multiple Resource Pools.

7.6.2. AWS Resource Pool

An AWS Resource Pool automatically manages and runs Engine Servers and Engines in the Amazon AWS cloud environment.

Prior to adding an AWS Resource Pool to the Automate configuration, the AWS environment needs to be configured with either one or multiple EC2 Instances or an EC2 Launch Template, using the following steps:

  1. Create a new EC2 instance

  2. Deploy and configure the Automate Engine Server according to the Automate Installation Guide, similarly to a local deployment.

  3. Validate the deployment by manually adding an Engine Server with the URL of the cloud instance on port 443 and then adding an Engine on that server.

  4. Resolve any certificate and Nuix License Source issues.

  5. Remove the manually added Engine and Engine server corresponding to the cloud instance.

  6. If you want to run Automate Processing Jobs on this instance only, take note of the Instance ID and skip the remaining steps. Optionally, the EC2 instance can be shut down.

  7. If you want to run Automate Processing Jobs on instances which are dynamically created by EC2, create an EC2 Launch Template from the EC2 instance previously configured. For more information on the Launch Templates, see https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchTemplates.html

When Automate is starting a Processing Job that is assigned to an AWS Resource Pool using a Launch Template, it first scans for idle EC2 instances started with the Launch Template. If an EC2 instance is found, the Processing Job is assigned to it. Otherwise, if the number of active EC2 instances does not exceed the Max Concurrent Instances value, a new EC2 instance is spawned and assigned the Processing Job.
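
The instance selection described above can be summarized with a minimal sketch (a Python illustration of the decision logic only, with hypothetical names, not Automate's actual implementation):

from dataclasses import dataclass

@dataclass
class Ec2Instance:
    instance_id: str
    idle: bool    # started from the Launch Template and not running a job
    active: bool  # counted against Max Concurrent Instances

def assign_ec2_instance(instances, max_concurrent, launch_instance):
    # Prefer an idle EC2 instance already started with the Launch Template.
    idle = [vm for vm in instances if vm.idle]
    if idle:
        return idle[0]
    # Otherwise spawn a new instance, unless the concurrency cap is reached.
    if len([vm for vm in instances if vm.active]) < max_concurrent:
        return launch_instance()
    # No capacity: the Processing Job waits in the Backlog.
    return None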

To add a new AWS Resource Pool to the Automate configuration, use the Add + AWS Resource Pool button and provide the following information:

  • Name: A user-defined name to assign to the Resource Pool.

  • Active: The state of the Resource Pool. An inactive AWS Resource Pool will not start any new Processing Jobs, and will not manage the state of EC2 instances (i.e. it will not shut down or terminate the instance after the running Processing Job finishes, if applicable).

  • Description: A user-defined description (optional).

  • Access Key: The Access Key of the account that will be used to connect to AWS. For details on obtaining an Access Key, see https://aws.amazon.com/premiumsupport/knowledge-center/create-access-key

  • Secret Key: The Secret Key for the Access Key above.

  • Region: The AWS Region in which the EC2 instance or Launch Template was created.

  • Engines: The settings used to manage the instances that run the Processing Job:

    • Nuix License Source: The Nuix License Source from which Engines will acquire licenses.

    • Target Workers: The number of Nuix workers to attempt to acquire a license for, if available.

    • Min Workers: The minimum number of Nuix workers to acquire a license for. If the number of available workers in the Nuix License Source is lower than this value, then the Engine will not initialize and will remain in an error state until the minimum number of Nuix workers becomes available.

    • Instance Idle Action: The action to perform on the EC2 instance when a Processing Job finishes and no other Processing Jobs from the Backlog are assigned to the instance.

    • Force Idle Action: This setting will force an EC2 instance to Stop or Terminate when a Processing Job finishes, even if other jobs from the Backlog are assigned to run on the instance.

    • Virtual Machine Source: The mechanism used to find the EC2 instances that Automate will manage.

    • Launch Template ID: Dynamically spawn EC2 instances.

      • Launch Template ID: The ID of the Launch Template that will be used to spawn new instances.

      • Max Concurrent Instances: The maximum number of EC2 instances running at the same time using the Launch Template.

    • Instance IDs: Find EC2 instances by IDs.

      • Instance IDs: The IDs of the pre-existing and configured EC2 instances to manage.

    • Tags: Find EC2 instances by tags.

      • Tag Name: The name of the tag in EC2.

      • Tag Value: The value of the tag in EC2.

  • Remote Workers: The settings used to manage the instances that run the workers which are joined to the Processing Jobs. These are similar to the Engines settings. Additionally, the following settings are available:

    • Don’t Trigger Idle Action Before First Processing Job: This setting will prevent Automate from stopping or deleting Remote Worker instances before a Processing Job runs on the Resource Pool.

    • Don’t Trigger Idle Action for Non-Worker Operations: This setting will prevent Automate from stopping or deleting Remote Worker instances while a Processing Job is running on the Resource Pool, even if the Processing Job does not currently require remote workers.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the Engine Server certificate that should be trusted even if the certificate is self-signed (optional).

Warning: Selecting the Terminate idle action will permanently delete the EC2 instance.
Note: An instance can be used either as a main Engine, or for Remote Workers, but not for both roles at the same time.

7.6.3. Azure Resource Pool

An Azure Resource Pool automatically manages Engine Servers and Engines in the Microsoft Azure cloud environment.

Prior to adding an Azure Resource Pool to the Automate configuration, the Azure environment needs to be configured with one or multiple virtual machines (VMs), using the following steps:

  1. Create a new VM

  2. Deploy and configure the Automate Engine Server according to the Automate Installation Guide, similarly to a local deployment.

  3. Validate the deployment by manually adding an Engine Server with the URL of the VM on port 443 and then adding an Engine on that server.

  4. Resolve any certificate and Nuix License Source issues.

  5. Remove the manually added Engine and Engine server corresponding to the cloud VM.

  6. Optionally, the VM can be shut down.

  7. Register Automate in the Azure AD using the Azure Command-Line Interface (CLI), by running the following command:

az ad sp create-for-rbac --name RampivaAutomate
  1. Take note of the appId, password, and tenant values returned by the command above.
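The command returns JSON output similar to the following (the exact fields can vary between Azure CLI versions, and the values below are placeholders):

{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "RampivaAutomate",
  "password": "<generated-secret>",
  "tenant": "00000000-0000-0000-0000-000000000000"
}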

To add a new Azure Resource Pool to the Automate configuration, use the Add + Azure Resource Pool button and provide the following information:

  • Name: A user-defined name to assign to the Resource Pool.

  • Active: The state of the Resource Pool. An inactive Azure Resource Pool will not start any new Processing Jobs and will not manage the state of VMs (i.e. it will not shut down or terminate the VM after the running Processing Job finishes, if applicable).

  • Description: A user-defined description (optional).

  • Tenant: The tenant value obtained using the Azure CLI.

  • Key: The password value obtained using the Azure CLI.

  • App ID: The appId value obtained using the Azure CLI.

  • Subscription ID: The Azure Subscription to connect to, if the account provided has access to multiple Azure Subscriptions (optional).

  • Engines: The settings used to manage the instances that run the Processing Job:

    • Nuix License Source: The Nuix License Source from which Engines will acquire licenses.

    • Target Workers: The number of Nuix workers to attempt to acquire a license for, if available.

    • Min Workers: The minimum number of Nuix workers to acquire a license for. If the number of available workers in the Nuix License Source is lower than this value, then the Engine will not initialize and will remain in an error state until the minimum number of Nuix workers becomes available.

    • Instance Idle Action: The action to perform on the VM when a Processing Job finishes and no other Processing Jobs from the Backlog are assigned to the VM.

    • Force Idle Action: This setting will force the VM to Stop or Terminate when a Processing Job finishes, even if other jobs from the Backlog are assigned to run on the instance.

    • Virtual Machine Source: The mechanism used to find the Azure VMs that Automate will manage.

    • VM Names: Find Azure VMs by name.

      • VM Names: The names of the pre-existing and configured Azure VMs to manage.

    • Custom VM Image: Dynamically spawn Azure VMs.

      • Region: The Azure region to spawn the VM in.

      • Resource Group ID: The Azure Resource Group ID/name to spawn the VM in.

      • Network Name: The name of a pre-existing Azure network to associate the VM to.

      • Network Subnet Name: The name of a pre-existing Azure network subnet to associate the VM to, for example default.

      • Custom VM Image ID: The ID/name of the Azure custom image to use for spawning the VM. When creating a custom image, first generalize the original VM from which the image is created.

      • Max Concurrent Instances: The maximum number of Azure VMs running at the same time using the Custom VM Image.

      • Custom VM Username: The admin username to set on the VM.

      • Custom VM Password: The admin password to set on the VM.

      • VM Type: Create the VM as Spot or On-Demand.

      • VM Size: The size characteristics of the VM in Azure.

      • Disk Size: The size of the OS disk in GB.

    • Tags: Find Azure VMs by tags.

      • Tag Name: The name of the tag in Azure.

      • Tag Value: The value of the tag in Azure.

  • Remote Workers: The settings used to manage the instances that run the workers which are joined to the Processing Jobs. These are similar to the Engines settings. Additionally, the following settings are available:

    • Don’t Trigger Idle Action Before First Processing Job: This setting will prevent Automate from stopping or deleting Remote Worker instances before a Processing Job runs on the Resource Pool.

    • Don’t Trigger Idle Action for Non-Worker Operations: This setting will prevent Automate from stopping or deleting Remote Worker instances while a Processing Job is running on the Resource Pool, even if the Processing Job does not currently require remote workers.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the Engine Server certificate that should be trusted even if the certificate is self-signed (optional).

Selecting the Delete idle action will permanently delete the Azure VM, its OS disk and associated network interface.

7.7. Notification Rules

The Notification Rules settings tab can be used to define rules that trigger notifications when certain events occur, for example, a Processing Job being queued. Notification Rules are added to Execution Profiles which in turn are assigned to Processing Jobs.

For notifications to be sent, the Notification Rule must be added to the Execution Profile used by the Processing Job.

Notifications can be sent by Email using an Email Notification Rule or through collaboration platforms, such as Microsoft Teams, Slack or Discord, using a Webhook Notification Rule.

The Test Rule button and the Test dropdown option can be used to test the settings of the Notification Rule and will send out a test message.

7.7.1. Email Notification Rule

An Email Notification Rule will send out an email when the rule is triggered.

To add a new Email Notification Rule, use the Add + Email Notification Rule button and provide the following information:

  • Name: A user-defined name to assign to the rule.

  • Description: A user-defined description (optional).

  • SMTP Server: The IP address or the name of the SMTP server.

  • SMTP Port: The port of the SMTP server, typically 25 for unauthenticated access and 465 or 587 for authenticated access.

  • SMTP Authentication: If checked, Automate will authenticate to the SMTP server with the account provided.

  • SMTP Username: The username to authenticate to the SMTP server with.

  • SMTP Password: The password for the Username above.

  • Transport Layer Security: If checked, access to the SMTP Server will be TLS-encrypted. Otherwise, access to the SMTP server will be in clear-text.

  • HTML Format: If checked, emails will be sent as HTML with formatting. Otherwise, emails will be sent in text format.

  • From: The email address from which to send the email.

  • To: The email address to which to send the email.

  • CC: The email address to CC in the email.

  • Triggers: The events for which to trigger the Notification Rule.

The To and CC fields can contain a single email address or multiple addresses separated by a semi-colon, for example, jsmith@example.com; fmatters@example.com. They can also use the {job_submitted_by} parameter. Other parameters cannot be used.
If the Automate usernames are not full email addresses, for example, jsmith, append the respective email domain name as a suffix to the {job_submitted_by} parameter, for example, {job_submitted_by}@example.com.

7.7.2. Webhook Notification Rule

A Webhook Notification Rule will send out a message to Microsoft Teams, Slack or Discord when the rule is triggered.

To add a new Webhook Rule, use the Add + Webhook Notification Rule button and provide the following information:

  • Name: A user-defined name to assign to the rule.

  • Description: A user-defined description (optional).

  • Platform: The collaboration platform to which to send the notification.

  • Webhook URL: The URL of the webhook configured in the collaboration platform.

  • Triggers: The events for which to trigger the Notification Rule.

7.8. Execution Profiles

The Execution Profiles settings tab can be used to define the Engine system settings, such as memory and credentials, as well as additional parameters to apply to running Processing Jobs.

A Processing Job requires an Execution Profile to run.

To add a new Execution Profile, use the Add + Execution Profile button and provide the following information:

  • Name: A user-defined name to assign to the profile.

  • Description: A user-defined description (optional).

  • Username: The user account to run the Engine under (optional). If a Username is not provided, the Engine will run under the same account as the Engine Server service.

  • Password: The password for the Username above.

The Username and Password feature is only available on Microsoft Windows platforms. The Username can be provided in the format domain\username or username@domain.
When specifying a Username in the Execution Profile, the user in question must have the Log on as service right on each server on which the Execution Profile is used. By default, administrative accounts DO NOT have this right. This can be configured either using a Group Policy (see https://docs.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/log-on-as-a-service), or manually in Services management console by configuring a service to run under the specified account.
  • Command-Line Parameters: The command-line parameters to apply to the Engine (optional). These function like the parameters that can be provided in a batch file when running Nuix Workstation; for example, they can be used to predefine the memory available to the Engine and the Workers and to specify the Workers log folder (see the sample value at the end of this list).

Additionally, the following Rampiva-specific command-line parameters exist:

  • -Drampiva.allowAnyJava=true: Disable the requirement to use AdoptOpenJDK with Nuix Engine 9.0 and greater.

  • -Drampiva.discover.log=C:\Temp\discover.log: Output the full GraphQL log used in the communication with Nuix Discover to the specified log file.

  • Log Folder: The folder to which the Engine logs are redirected.

To redirect all logs related to the execution of a Processing Job, provide the log location in the Log Folder and also as a command-line parameter, for example, -Dnuix.logdir=C:\Temp\logs.
  • Nuix Engine Installation Folder: The folder where a different version of the Nuix Engine is deployed (optional).

The Nuix Engine Installation Folder option can be used to run Processing Jobs in Nuix cases which were created with a different version of the Nuix Engine or Nuix Workstation, without needing to migrate the cases to the latest version of Nuix.
  • Java Installation Folder: The folder where a different version of Java is deployed (optional).

Nuix Engine version 8.x and lower is only supported with Java version 8. Nuix Engine version 9.0 and higher is only supported with Java version 11. When specifying a Nuix Engine in the Execution Profile, also specify the location of a Java installation that is compatible with the version of the Nuix Engine in question.
  • Workflow Parameters: Additional parameters and values to add to those already defined in the Workflow Template (optional).

The Workflow Parameters option can be used, for example, to define the location where a script should write an output file. This location might change depending on the environment in which the Processing Job is run and can be captured using an Execution Profile Workflow Parameter.
  • Notification Rules: The list of rules that apply to the Execution Profile (optional).

  • Timeout Settings: The minimum progress that each operation and the overall Processing Job must make in the allotted time; if the minimum progress is not made, the Processing Job is aborted, or the current operation is skipped if it was configured as skippable.

  • Nuix Profiles: The list of Nuix profiles to add to your Nuix case. The supported Nuix profile types are Configuration Profiles, Production Profiles, Processing Profiles, Metadata Profiles, Imaging Profiles and OCR Profiles.

Nuix Profiles are stored in your case under the corresponding profile type; for example, if a metadata profile was added, it would be found in your case folder under the path \Stores\User Data\Metadata Profiles\.
  • Additional Files: Additional list of parameters mapped to files that will be added to your Nuix case when running a job.

All parameters within the Additional Files will contain the suffix _file. The files created from these parameters can be found in your Nuix case under the path \Stores\Workflow\Files\.
By default, the maximum file size is 10 MB. To change the default value, modify the maxFileSizeUpload parameter in the YAML configuration file. For more information, see the Service settings section in the Installation Guide.
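For illustration, a Command-Line Parameters value combining the options described above might look like the following sketch. The -Xmx flag is a generic JVM option, used here as an assumption for setting the Engine memory, and the paths are placeholders:

-Xmx16g -Dnuix.logdir=C:\Temp\logs -Drampiva.allowAnyJava=true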

7.9. Client Pools

The Client Pools settings tab can be used to group clients into different pools, for example, based on geographical locations or the team that is in charge of each Client.

To add a new Client Pool, use the Add + Client Pool button and provide the following information:

  • Name: A user-defined name to assign to the pool.

  • Description: A user-defined description (optional).

  • Clients: The list of clients that are part of the pool. A Client can belong to one, multiple or no Client Pools.

7.10. SMTP Servers

The SMTP Servers settings tab is used to define SMTP servers that can be used to send emails.

To add a new SMTP Server, use the Add + SMTP Server button and provide the following information:

  • Name: A user-defined name to assign the server;

  • Description: A user-defined description (optional);

  • Host: The host name or IP address of the SMTP server;

  • Port: The port of the SMTP server, typically 25 for unauthenticated access and 465 or 587 for authenticated access;

  • TLS: If checked, access to the SMTP Server will be TLS-encrypted, otherwise, it will be in clear-text;

  • Authentication: If checked, access to the SMTP server will be authenticated with the account provided;

  • Username: The username to authenticate to the SMTP server with;

  • Password: The password for the Username above;

  • From: The email address from which to send emails.

7.11. Data Repositories

The Data Repositories settings tab can be used to define the locations to store Data Sets as well as the restrictions and automatic transitions of Data Sets.

To add a new Data Repository, use the Add + Data Repository button and provide the following information:

  • Name: A user-defined name to assign the repository.

  • Path: The location to store Data Sets. This can be a local path, such as C:\Data, or a file share to which the Scheduler service has access.

  • Description: A user-defined description (optional).

  • Quota: The maximum amount of space that can be used by all Data Sets in the Data Repository (optional).

  • Allowed File Extensions: Only allow files with these extensions to be uploaded (optional).

  • Hide data sets On Processing Job Queue: Automatically transition a Data Set to the Hidden state after a Processing Job is queued using the Data Set (optional).

  • Archive data sets on Processing Job Finish: Automatically transition a Data Set to the Archived state after a Processing Job finishes using the Data Set (optional).

  • Expire archived data sets: Automatically expire archived Data Sets and delete all of their files after the specified time period (optional).

7.12. Notice Templates

The Notice Templates settings tab is used to define notice templates that can be used for legal holds to create notices.

To add a new Notice Template, select the Type using the tabs at the top of the page and then click the Add + Notice Template button. Creating a Notice Template has 4 steps:

  1. Fill in the Notice Template Settings.

    • Name: A user-defined name to assign the template;

    • Active: Whether to enable the template for use.

    • Description: A user-defined description (optional);

    • Parameters: A list of parameters for values provided when creating legal holds;

Note: Built-In Parameters are at the top and cannot be changed.

  2. Fill in the Subject and Message.

  3. Optionally build a Survey Form for a notice response.

  4. Review and confirm the details.

Note: Reminder and Escalation Notice Templates do not have a Survey Form option.

7.13. ECC Profiles

The ECC Profiles settings tab is used to define ECC profiles that can be used for connecting to ECC to create collections.

ECC Profiles will query computers and configurations from the ECC server and cache the results on Automate. Additionally, ECC Profiles can be used to sync with Active Directory and provide deployment settings for deploying and removing ECC Clients from Active Directory domain computers.

Warning: Verify that all configurations on your ECC server contain locations; if no locations exist for a configuration, any Collection run with that configuration will fail.

To add a new ECC Profile, click the Add + Enterprise Collection Center Profile button and provide the following information:

  • Name: A user-defined name to assign the profile;

  • Description: A user-defined description (optional);

  • Server URL: The URL that can be used to reach the ECC server, for example https://10.5.0.2:8080;

  • Username: The username of the account that will be used to connect to ECC. For details on creating this account, see https://download.nuix.com/pdf/nuix-enterprise-collection-center;

  • Password: The password for the Username above;

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the NMS certificate that should be trusted even if the certificate is self-signed (optional).

Warning: Do not run scheduled collections through Automate.

7.13.1. ECC Profiles Active Directory Settings

Prior to setting up Active Directory settings, an Authentication Service synchronizing objects from Active Directory is required; for more information, see Creating Authentication Service.

To set up Active Directory settings for an ECC Profile:

  1. Toggle Sync with AD;

  2. Check Deploy agent before collection to deploy the agent before the collection starts;

  3. Check Uninstall agent after collection to remove the agent after the collection has finished;

  4. Select Collection Scope to choose which computers you want to see when using this profile in a collection;

  5. Check Retry running command if install fails to retry the command if it does not finish within the timeout;

  6. Set Timeout to the time, in minutes, allowed for each command to finish.

7.13.2. Remote Deployment Settings

Prior to setting up deployment settings, the ECC Profiles Active Directory settings must be defined.

The Deployment Settings of an ECC Profile allow for commands to be run on Active Directory computers using a service account.

To set up the remote deployment settings, provide the following information:

  • Remote Method: The method used to deploy and remove the agents;

  • Install Command: The command used to install the ECC agent on an Active Directory computer;

  • Uninstall Command: The command used to uninstall the ECC agent on an Active Directory computer;

When providing the Install Command and Uninstall Command, the parameters {computer_name} and {password} are available to use.
  • Remote Deployment Password: The password of the service account used to run commands on Active Directory computers.

At the bottom left of the Deployment Settings, click the Show Sample button to display an example of your command with the parameters evaluated.
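As a purely hypothetical illustration (the script path and arguments below are invented placeholders, not an actual ECC command), an Install Command using these parameters could look like:

\\fileserver\deploy\install-ecc-agent.bat {computer_name} {password}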

7.14. Security Policies

The Security Policies settings tab can be used to manage the access that users have in the Automate application.

Security Policies are positive and additive, meaning that a user will be allowed to perform a certain action if at least one Security Policy allows the user to perform that action. To prevent a user from performing a certain action, ensure that there are no policies that grant the user that action.

To add a new Security Policy, use the Add + Security Policy button and provide the following information:

  • Name: A user-defined name to assign to the policy.

  • Description: A user-defined description (optional).

  • Active: The state of the policy. An inactive Security Policy will not be evaluated.

  • Principals: The identities to which the policy applies. Principals can be of the following types:

    • Built-In: Authenticated User corresponds to any user account that can log in with the allowed authentication schemes (i.e. Nuix UMS or Microsoft Azure);

    • Azure Username: Explicit Azure user accounts, in the form of an email address;

    • Azure Group ID: Azure user accounts belonging to the Azure Group with the specified ID;

    • UMS Username: Explicit Nuix UMS user accounts, in the form of a username;

    • UMS Group: UMS users belonging to the specified UMS group with the specified name;

    • UMS Privilege: UMS users belonging to a group that has the specified privilege;

    • UMS Role: UMS users which are assigned the specified application role.

  • Permission: The permission granted to the Principals:

    • View: View the details of the objects in scope (and their children);

    • View Non-Recursive: Only applies to Client Pools. View the list of Client Pools along with the IDs of the Clients assigned to the pool, but not the details of the Clients.

    • Modify: Modify the objects in scope (and their children);

    • Modify Children: Modify the children of the object in scope (but not the object itself);

    • Create: Only applies to Client Pools. Create a new Client Pool.

    • Submit Processing Job: Submit a job on the objects in scope (and their children);

    • View Confidential: View the details of the objects in scope (and their children) even if marked as confidential;

    • Download Logs: Download the logs of jobs or system resources;

    • Exclude Metrics: Mark the Processing Job utilization metrics for exclusion;

  • Scope: The scope on which the Permission is granted. Scopes can be of the following types:

    • Built-In: Used to assign permissions on all objects of a certain type (for example All Clients) or to Processing Jobs which do not have a Library or a Client assigned;

    • Client/Matter: A specific or all Matters from a specific Client;

    • Library/Workflow: A specific or all Workflow Templates from a specific Library;

    • Nuix License Source: A specific Nuix License Source;

    • Execution Profile: A specific Execution Profile;

    • Resource Pool: A specific Resource Pool;

    • Client Pool: A specific Client Pool;

    • Notification Rule: A specific Notification Rule.

The Built-In All System Resources scope corresponds to all Nuix License Sources, Engine Servers, Engines, Resource Pools, Notification Rules and Execution Profiles.

7.14.1. Sample Permission Requirements

View a Processing Job:

  • View permissions on: Client and Matter on which the Processing Job was submitted; or Built-In All Clients; or Built-In All Client Pools; or Built-In Unassigned Client.


Submit a Processing Job:

  • View and Submit Processing Job permissions on: Client and Matter; or Built-In All Clients; or Built-In All Client Pools; or Built-In Unassigned Client; and

  • View and Submit Processing Job permissions on: Library and Workflow Template; or Built-In All Libraries; or Built-In Unassigned Library.


Assign Processing Job to a Resource Pool:

  • View and Submit Processing Job permissions on: Resource Pool; or Built-In All Resource Pools; or Built-In All System Resources.


Assign a Processing Job to an Execution Profile:

  • View permissions on: Execution Profile; or Built-In All Execution Profiles; or Built-In All System Resources.


Set Processing Job Priority to Highest:

  • Modify permissions on: Resource Pool; or Built-In All Resource Pools; or Built-In All System Resources.


View Security Policies:

  • View permissions on: Built-In Security.


Manage Engine Servers and Engines:

  • View and Modify permissions on: Built-In All System Resources.


Set Default User Settings:

  • Modify permissions on: User Settings.


Add Matters to a Client, but don’t allow the user to modify the Client:

  • View and Modify Children permissions on: Client.


Download Logs of a Processing Job:

  • View and Download Logs permissions on: Client and Matter on which the Processing Job was submitted; or Built-In All Clients; or Built-In All Client Pools; or Built-In Unassigned Client.


Download System Logs:

  • Download Logs permissions on: Built-In All System Resources.


Download Anonymized Utilization Data:

  • View permissions on: Built-In All System Resources; or Built-In All Clients; or Built-In All Client Pools.


Download Full Utilization Data:

  • View permissions on: Built-In All Clients; or Built-In All Client Pools.


Manage own API keys:

  • View and Modify permissions on: Built-In API Keys.


Manage the API keys of all users:

  • View and Modify permissions on: Built-In All API Keys.


7.15. API Keys

The API Keys settings tab can be used to authenticate to Automate when integrating with other platforms or when making API calls from scripting languages.

When making API requests to Automate using an API key, the request will have the same permissions as the user who created the API key.

To add a new API key to the Automate configuration, use the Add + API Key button and provide the following information:

  • Name: A user-defined name to assign to the key.

  • Validity: The number of days that the key is valid for.

The key secret is only available in the window shown immediately after the key is created. If the secret is not recorded at this time or is lost, the key should be deleted and a new replacement key should be created.

To make an API access with an API key, set the Authorization HTTP header to Bearer id:secret, where id is the key ID and secret is the key secret, for example:

Authorization: Bearer 78882eb7-8fc1-454d-a82c-a268c204fbba:788LvzrPksUKXKTrCyzKtvIMamTjlbsa
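As an illustration, the following minimal Python sketch makes an authenticated API call using this header. The server URL and the endpoint path are hypothetical placeholders; consult the live API Documentation available under User Resources for the actual endpoints:

import requests

# Placeholders (assumptions): replace with your Automate server URL and key.
API_URL = "https://automate.example.com"
KEY_ID = "78882eb7-8fc1-454d-a82c-a268c204fbba"   # key ID from the API Keys tab
KEY_SECRET = "788LvzrPksUKXKTrCyzKtvIMamTjlbsa"   # key secret shown once at creation

# Authenticate using the Bearer id:secret scheme described above.
headers = {"Authorization": f"Bearer {KEY_ID}:{KEY_SECRET}"}

# Hypothetical endpoint path; see the API Documentation for the real paths.
response = requests.get(f"{API_URL}/api/v1/jobs", headers=headers)
response.raise_for_status()
print(response.json())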

7.16. Webhooks

The Webhooks settings tab can be used to integrate Automate with third-party applications, such that when an event occurs in Automate, a webhook call is made to the third-party application.

Webhooks can be created manually, or registered using the API.

To add a new Webhook registration, use the Add + Webhook button and provide the following information:

  • Name: A user-defined name to assign to the Webhook.

  • Active: The state of the Webhook. An inactive Webhook will not get triggered.

  • History Enabled: If the Webhook history is not enabled, the list of Webhook calls will not be displayed in the Webhook panel, and if the Scheduler service is restarted while Webhook calls are pending, those calls are lost.

  • Triggers: The types of events that will trigger Webhook calls.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the third-party application receiving the Webhook calls that should be trusted even if the certificate is self-signed (optional).

The Webhook signature key is only available in the window shown immediately after the Webhook is created. If the signature key is not recorded at this time or is lost, the Webhook registration should be deleted and a new replacement Webhook registration should be created.

When a Webhook event is triggered, an API call is attempted that includes details about the username and the action that triggered the event. If the third-party application receiving the Webhook call is not accessible or does not acknowledge the Webhook call, the call will be retried with an exponential backoff delay, up to a maximum of 18 hours.

The details of the past 20 Webhook events and call statuses can be seen in the Webhook panel.
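As an illustration of the acknowledgement requirement, the following minimal Python sketch receives Webhook calls and returns an HTTP 200 status so that the calls are not retried. The payload structure is an assumption; see the API Documentation for the actual event schema:

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the request body; a JSON payload is an assumption.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print("Received Automate event:", event)
        # A 2xx response acknowledges the call and stops the retries.
        self.send_response(200)
        self.end_headers()

# Listen for Webhook calls on port 8000 (placeholder).
HTTPServer(("0.0.0.0", 8000), WebhookHandler).serve_forever()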

7.17. User Settings

The User Settings tab can be used to customize the behavior of the user interface for the current user and to set the default settings for all users.

For each User Settings category, the Reset to Default button resets the customizations performed by the user to the default values and the Set as Default button sets the current values as the default values for all users.

The Set as Default button only applies to the default values and does not overwrite any existing user customizations.

7.17.1. Language

Change the language of the user interface to one of:

  • Browser Default - the language detected from the browser;

  • Arabic - United Arab Emirates

  • Danish - Denmark

  • German - Germany

  • English - United States

  • Spanish - Latin America

  • French - Canada

  • Hebrew - Israel

  • Japanese - Japan

  • Korean - South Korea

  • Dutch - Netherlands

  • Portuguese - Brazil

  • Simplified Chinese - China

7.17.2. Show Disabled Items

Configure whether inactive Clients and Matters should be displayed in the Clients view and whether inactive Libraries and Workflow Templates should be displayed in the Libraries view.

7.17.3. Processing Job Card

Modify the elements displayed in each Processing Job Card, the location of these elements, and the size and format of the text.

7.17.4. Add Processing Job

Allows moving the Notes section to the bottom of the Add Processing Job screen.

7.17.5. Default Processing Job Settings

Configures the default Execution Profile and Resource Pool to use during the Processing Job submission. This will apply only if the Client or Matter to which the Processing Job is submitted does not have these default values defined.

The user can change the Execution Profile and Resource Pool to which a Processing Job is assigned during the submission process even if default values are configured in this section.

7.17.6. Processing Job Sort Order

Configures the order in which to display jobs in the Backlog, Running and Finished Processing Job Lanes.

Processing Jobs can be sorted by:

  • Submission Date: Processing Jobs are sorted by the date and time when the Processing Job was submitted, or for Paused Processing Jobs, by the date and time when the Processing Job was Paused and moved back to the Backlog lane;

  • Priority and Submission Date: Processing Jobs are first sorted by their Priority and then by the Submission Date;

  • Last Changed Date: Processing Jobs are sorted by the date and time when the Processing Job state last changed. A state change occurs when the job is queued, finishes running, is cancelled, or starts running in a Cloud Resource Pool;

  • Priority and Last Changed Date: Processing Jobs are sorted first by Priority and then by the Last Changed Date.

7.17.7. Text Highlights

Highlight and style text in the Processing Job Pane and Processing Job Card that match user-defined regexes.

When using a regex that triggers catastrophic backtracking, it can be impossible to open the Settings page to correct the regex; for example, a pattern such as (a+)+$ can backtrack catastrophically when matched against a long run of a characters. In this situation, open the Automate webpage by adding ?disableHighlightText at the end of the URL, for example https://automate.rampiva.com/?disableHighlightText. This temporarily disables highlights for that session.

7.18. User Resources

The User Resources tab provides links to additional resources:

  • User Guide: This document.

  • Installation Guide: The Automate installation guide.

  • Third-Party Licenses: The list of third-party licenses used by Automate.

  • API Documentation: A live documentation of the Automate API in the OpenAPI 3.0 format, which can be used to integrate Automate with other applications.

  • OData Reporting: The URL for reading the Utilization and Reporting data in the OData 4.0 format.
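As a sketch of how the reporting feed might be consumed in Python, assuming that the API-key Bearer authentication described in the API Keys section also applies to the OData endpoint (an assumption) and using a placeholder URL:

import requests

# Placeholders: copy the actual OData URL from the User Resources tab.
ODATA_URL = "https://automate.example.com/odata"
headers = {"Authorization": "Bearer <key-id>:<key-secret>"}  # assumption: API-key auth

# The OData service document lists the entity sets available for reporting.
response = requests.get(ODATA_URL, headers=headers)
response.raise_for_status()
print(response.json())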

7.19. System Resources

The System Resources tab can be used to manage logs and utilization data.

7.19.1. System Logs

Download and view information for centralized logging.

The System Logs tab contains information about:

  • Log Retention Period: The duration in days that logs will be retained in the database.

  • Earliest Log Available: The earliest log available in the database.

Additionally, the System Logs tab contains a form for downloading system logs within a given date range.

To view or download system logs, centralized logging must be enabled and you need the permissions to download system logs (see Download System Logs).

7.19.2. Utilization

The utilization data can be downloaded either anonymized or in full, using the Download Anonymized or Download Full options. The resulting data is a zip archive containing a JSON file with the utilization data.

To upload utilization data from an external system, use the Load External option and select either a utilization JSON file, or a zip archive containing a JSON utilization file.