Disclaimer

The material in this document is for informational purposes only. The products it describes are subject to change without prior notice, due to the manufacturer’s continuous development program. Nuix makes no representations or warranties with respect to this document or with respect to the products described herein. Nuix shall not be liable for any damages, losses, costs or expenses, direct, indirect or incidental, consequential or special, arising out of, or related to the use of this material or the products described herein.

© Nuix Canada Inc. 2024 All Rights Reserved

Introduction

This guide describes the features and options of Automate. This document works like a reference: use the table of contents to find the topic of interest.

The Automate software and this documentation may contain bugs, errors, or other limitations. If you encounter any issues with the Automate software or with this documentation, please contact Nuix Support.

Styles Used in This Guide

Note: This icon indicates that additional clarifications are provided, for example what the valid options are.
Tip: This icon lets you know that a useful tidbit is provided, such as how to achieve a certain behavior.
Warning: This icon highlights information that may help you avoid an undesired behavior.

Emphasized: This style indicates the name of a menu, option or link.

code: This style indicates code that should be used verbatim, and can refer to file paths, parameter names or Nuix search queries.

User Interface Patterns

In addition to standard Web user interface patterns, Automate makes use of the following patterns:

Optional Value

The field border is grey when the field is empty.

Invalid Value

The field border is red.

Perform Action

When viewing the details of an item, such as a Job or a Client, a list of available actions is displayed by clicking on the dropdown button at the right of the item name.

Add to Selection

A list of available options is displayed in the left pane. Items can be highlighted and added to the selection using the right arrow > button. Items can be removed from the selection with the left arrow < button.

To search through a list of items in a dropdown, press any printable character key to activate the search bar. Clearing the search text, closing the dropdown, or selecting an item will cancel the search.

1. Logging In

Scheduler can be configured to allow logging in with a Nuix account, by providing a username and password, or with a Microsoft account, by using the Sign in with Microsoft button.

If the Sign in with Microsoft button is not visible, contact the administrator to have this option enabled.

After a period of inactivity (by default 15 minutes), a warning will be displayed and the user will be logged out if no action is performed.

2. Jobs

The Jobs view is used to monitor the Jobs queue, manage Schedules and view Archived jobs.

2.1. Queue

The queue view is the default screen displayed after login. It can be accessed using the Jobs > Queue menu in the top navigation bar as well as by clicking on the Automate logo.

2.1.1. Submitting a Job

To submit a Job, click on the Add Job + button at the top-left of the Queue view. A submission has 4 steps:

  1. Select the Client and the Matter for which the Job is submitted, or select Unassigned if the Job is not submitted for a specific project;

  2. Select the Library and the Workflow to run for this Job or add a new Workflow to the selected Library with the Add + Workflow button;

The Unassigned Client/Matter option and the Workflow File Library/Workflow option are only visible if the user has the appropriate permissions (see Security Policies).
  3. Fill-in the Job settings:

    • Select an Execution Profile from the dropdown;

    • Select a Resource Pool from the dropdown;

    • Adjust the Job Priority as needed;

    • Adjust the queue to Staging or Backlog as needed;

    • Adjust the Job Name as needed;

    • Fill-in the Job Notes. This section can be used for documentation purposes and to inform other users about the Job settings.

    • Fill-in the Job Parameters or load their values from a tab-separated value (TSV) file using the …​ button.

To set the priority value Highest, the user must have the modify permission on the Resource Pool to which the Job is assigned.
  4. Review and confirm the details of the submission.
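The TSV file mentioned in the Job settings step can be produced with any tool. A minimal sketch in Python; the layout assumed here (one row per parameter: name, tab, value) is illustrative, so check the format your Automate version expects before relying on it:

```python
import csv

# Sketch of a Job Parameters TSV. Assumed layout: one row per parameter,
# column 1 = parameter name, column 2 = value. Illustrative only.
def write_parameter_tsv(path, params):
    """Write a {name: value} mapping as tab-separated rows."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh, delimiter="\t")
        for name, value in params.items():
            writer.writerow([name, value])

def read_parameter_tsv(path):
    """Read the rows back into a mapping."""
    with open(path, newline="") as fh:
        return {row[0]: row[1] for row in csv.reader(fh, delimiter="\t") if row}

write_parameter_tsv("job_params.tsv",
                    {"{matter_name}": "Globex", "{export_dir}": "D:\\Exports"})
```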

2.1.2. Data Sets

A Job can process data from a Data Set if the Workflow selected uses Data Set Parameters. These are special Parameters with names ending in _dataset, for example {source_dataset}.

When submitting a Job with a Data Set Parameter, the user will be prompted to select a Data Set from the list of Data Sets from the Matter for which the Job is queued. At this stage, only Data Sets in the Finalized stage are presented to the user.

2.1.3. File Libraries

A Job can use files from File Libraries if the Execution Profile selected contains File Library files or the Workflow selected uses File Parameters. These are special Parameters with names ending in _file, for example {sample_file}.

When submitting a Job with an Execution Profile that contains files from File Libraries, the Nuix profiles will be stored in the Nuix case under the profile type. For example, if a metadata profile was added, the profile can be found in the Nuix case folder under the path \Stores\User Data\Metadata Profiles\. For each additional file in the Execution Profile, a parameter will be created with the path of the file; these files can be found in the Nuix case under the path \Stores\Workflow\Files\.

If the Workflow of the Job submitted has the option Require all Nuix profiles to be supplied in the Execution Profile enabled in any of the Configuration operations, the Execution Profile must contain all Nuix profiles of the Workflow. When the Execution Profile does not contain all Nuix profiles, the Job will not start and will wait until the selected Execution Profile contains all Nuix profiles.

When submitting a Job with a File Parameter, the user will be prompted to select a File from the list of Files from the File Libraries. Only files with the type Custom File are presented to the user.

2.1.4. Execution Order

There are several factors which come into play when determining the order in which Jobs will run.

If two Jobs are assigned to the same Resource Pool and there are no active locks, the Job with the highest Priority will start first. If the Jobs have the same priority, the one that was added to the Backlog first will run first.

If two Jobs are assigned to different Resource Pools, the Job from the Resource Pool which has available Engines and which can acquire a Nuix license will run first.
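Within a single Resource Pool with no active locks, the ordering rules above amount to a sort on priority and then backlog-entry time. A sketch, with illustrative field names:

```python
# Illustrative only: order candidate Jobs the way the scheduler does within
# one Resource Pool - highest Priority first, then earliest Backlog entry.
def execution_order(jobs):
    return sorted(jobs, key=lambda j: (-j["priority"], j["backlog_time"]))

jobs = [
    {"name": "B", "priority": 1, "backlog_time": 10},
    {"name": "A", "priority": 3, "backlog_time": 20},
    {"name": "C", "priority": 1, "backlog_time": 5},
]
# A (highest priority) runs first, then C (entered the Backlog before B).
print([j["name"] for j in execution_order(jobs)])  # -> ['A', 'C', 'B']
```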

2.1.5. Job Locks

By default, Jobs in Automate can run in parallel. If it’s required to have certain Jobs run sequentially, locks can be used. These can be set by using the Synchronized Jobs option or by using Lock Parameters.

Synchronized Jobs

When the Synchronized Jobs option in the Matter settings is checked, Jobs assigned to that matter will run one at a time.

If multiple Jobs are assigned to a Matter with the Synchronized Jobs option checked and if the order in which the jobs run is important, assign them to the same Resource Pool. Otherwise, the order in which the Jobs start is not guaranteed and depends on the Nuix licenses and Engines available under the respective Resource Pools.

Lock Parameters

Lock Parameters are special Parameters which can be defined in the Workflow to ensure that two Jobs don’t run at the same time, regardless of the Matters to which the Jobs are assigned. The names of Lock Parameters end with _lock, for example {project_lock}.

When using Lock Parameters, Jobs are guaranteed to run sequentially only if they have a Lock Parameter with the same name and the same value.
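These naming conventions (together with the _dataset and _file suffixes described earlier) can be checked mechanically. A sketch, not Automate's actual implementation:

```python
# Illustrative check of the special-parameter naming conventions: names
# ending in _dataset, _file, or _lock mark Data Set, File, and Lock
# Parameters respectively. Not Automate's actual implementation.
SPECIAL_SUFFIXES = {"_dataset": "Data Set", "_file": "File", "_lock": "Lock"}

def classify_parameter(name):
    """Return the special Parameter kind, or None for a regular Parameter."""
    bare = name.strip("{}")
    for suffix, kind in SPECIAL_SUFFIXES.items():
        if bare.endswith(suffix):
            return kind
    return None

print(classify_parameter("{project_lock}"))    # -> Lock
print(classify_parameter("{source_dataset}"))  # -> Data Set
print(classify_parameter("{matter_name}"))     # -> None
```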

2.1.6. Job Execution States

Jobs can be in one of the following states:

  • Not Started: The Job was submitted / staged - Staging / Backlog lane;

  • Running: The Job is currently running - Running lane;

  • Pausing: The Job will pause after the current operation completes - Running lane;

  • Paused: The Job ran and was paused - Staging / Backlog lane;

  • Stopping: The Job will stop during the current operation or after the current operation completes - Running lane;

  • Stopped: The Job ran and was stopped - Finished lane;

  • Finished: The Job ran and completed successfully - Finished lane;

  • Finished With Warnings: The Job ran and completed with warnings - Finished lane;

  • Finished With Soft Errors: The Job ran and completed with soft errors - Finished lane;

  • Error: The Job ran and encountered an error - Finished lane;

  • Cancelled: The Job was cancelled before running - Finished lane;

2.1.7. Job Lanes

In the Jobs view, queued, running and finished jobs are displayed under different lanes:

  • Staging: These jobs are in staging and will not run until they are submitted to the Backlog lane.

  • Backlog: These are jobs that have been queued for execution and will run when resources are available and there are no warnings preventing the Job from running.

  • Running: These jobs are currently running.

  • Finished: These jobs finished running or were cancelled.

Jobs that have been archived are displayed in the Jobs Archive view (see Jobs Archive).

The order in which jobs are displayed in the lanes can be changed from the User Settings (see Job Sort Order).

2.1.8. Job Card

For each job, a Job Card is displayed in the corresponding Job Lane. The information displayed in the Job Cards can be customized from the User Settings (see Job Card).

2.1.9. Job Panel

To see the details of a job, click on the Job Card to open the Job Panel.

The Job Panel contains the following sections:

  • Header: The left side contains the job name and the job action dropdown. The right side contains the job status, the job completion percentage and the job status icon;

  • Job Settings: A table view of the job settings;

  • Notes: The notes supplied by the user when the job was submitted;

  • Parameters: The parameters along with the values supplied by the user when the job was submitted;

  • Required Profiles: The required Nuix profiles of the workflow selected when the job was submitted.

The Required Profiles section is only visible when the workflow selected when the job was submitted has the option Require all Nuix profiles to be supplied in the Execution Profile enabled in any of the Configuration operations.
  • Workflow: The list of operations that are part of the workflow selected when the job was submitted;

  • Execution Log: The log generated by the job execution (this section is not visible for jobs that have not started);

  • Mime Type Stats: The stats for items processed or exported by the numbered operation (see Operation Mime Type Stats);

  • Operation Running Log: The log generated by the running operation; this log is only shown when the operation supports Mime Type stats;

  • Change Log: The job audit log indicating the job submission, execution, and change events, the time when these events occurred, who performed them, and, where applicable, additional details, such as the changes that were made to the job settings.

2.1.10. Job Actions

To perform an action on a job, open the Job Panel by clicking on the corresponding Job Card, and then click on the dropdown button at the right of the Job name.

The following actions can be performed on jobs, depending on the lane the Job is in and the user permissions:

  • Resubmit: Queues a job with the same settings as the selected job and archives the selected job if not already archived;

  • Duplicate: Initiates the submission of a job with the same settings as the selected job;

  • Download Logs: Download a zipped copy of the job logs. To download logs of a job, centralized logging must be enabled and the user needs the permissions to download job logs (see Download Logs of a Job). The zipped copy of job logs contains the following files:

    • Engine Logs

    • Worker Logs

    • Workflow File

    • Job Changelog

    • Execution Log

    • Workflow Parameters

  • Print: Print the job panel, for example, to a PDF file;

  • Cancel Execution: Cancel and move the job to the Finished lane with an error status;

  • Skip Operation: Stop the execution of the current operation and continue the Job. This option is only available if the currently running operation was configured as skippable during the workflow design.

  • Pause: Puts the job in a pausing state. After the currently running operation finishes, the job will be placed in the paused state and will be moved to the Staging lane. Once paused, the Nuix case is closed and the Nuix license is released. The job will not resume execution unless it is re-submitted to the Backlog lane.

  • Stop: Sends a stop command to the current operation and puts the job in a stopping state. If the operation supports stopping, execution is stopped mid-way. Otherwise, the execution is stopped after the operation completes. Once stopped, the Nuix case is closed and the Nuix license is released.

  • Abort: Attempts to first stop the job gracefully for 5 seconds, and if not possible, aborts the job execution by forcibly closing the running processes.

  • Archive: Archives the job and moves it to the Archive lane.

Aborting a job leaves the Nuix case in a corrupted state and should only be used as a last resort if a job is non-responsive.
Table 1. Actions available in each Job Lane

Action                      Staging   Backlog   Running   Finished
Submit                         X
Move to Staging                          X
Resubmit                                                     X
Duplicate                      X         X         X         X
Print                          X         X         X         X
Download Logs                  X         X         X         X
Cancel Execution                         X
Pause                                              X
Stop                                               X
Abort                                              X
Archive                                                      X
Exclude / Include Metrics                          X         X

2.1.11. Operation Mime Type Stats

The Operation Mime Type stats are displayed in the Job Panel for operations that have processed or exported items. In the Job Panel, the Mime Type stats are listed using the execution position of the operation followed by the name of the operation, for example 3. Add Evidence.

The following operations generate Mime Type stats:

  • Add Evidence Operation

  • Re-scan Evidence Repositories Operation

  • Brainspace Load Items Operation

  • Case Subset Export Operation

  • Export Items Operation

  • Generate Printed Images Operation

  • Legal Export Operation

  • Logical Image Export Operation

  • Metadata Export Operation

  • Metadata to SQL Operation

  • Native OCR Images Operation

  • Native OCR Items Operation

  • OCR Operation

  • Populate Binary Store Operation

  • Promote to Nuix Discover Operation

  • Reload Items Operation

  • Replace Items Operation

  • Processing Report Operation

2.2. Purview

The Purview Jobs section tracks Jobs that run on a Purview Service.

The Purview Jobs feature requires a Corporate-edition license.

To submit a Purview Job, click on the Add + Purview Job button and perform the following steps:

  1. Select the Client and the Matter for which the Job is submitted;

  2. Select the Library and the Workflow to run for this Job or add a new Workflow to the selected Library with the Add + Workflow button;

  3. Fill-in the Job settings:

    • Select an Execution Profile from the dropdown;

    • Select a Resource Pool from the dropdown;

    • Adjust the Job Priority as needed;

    • Adjust the queue to Staging or Backlog as needed;

    • Adjust the Job Name as needed;

    • Fill-in the Job Notes. This section can be used for documentation purposes and to inform other users about the Job settings.

    • Fill-in the Job Parameters or load their values from a tab-separated value (TSV) file using the …​ button.

  4. Follow the steps required by the specific Purview workflow;

  5. Review the details and submit the Job.

2.3. Vault

The Vault Jobs section tracks Jobs that run on a Google Vault Service.

The Vault Jobs feature requires a Corporate-edition license.

To submit a Vault Job, click on the Add + Vault Job button and perform the following steps:

  1. Select the Client and the Matter for which the Job is submitted;

  2. Select the Library and the Workflow to run for this Job or add a new Workflow to the selected Library with the Add + Workflow button;

  3. Fill-in the Job settings:

    • Select an Execution Profile from the dropdown;

    • Select a Resource Pool from the dropdown;

    • Adjust the Job Priority as needed;

    • Adjust the queue to Staging or Backlog as needed;

    • Adjust the Job Name as needed;

    • Fill-in the Job Notes. This section can be used for documentation purposes and to inform other users about the Job settings.

    • Fill-in the Job Parameters or load their values from a tab-separated value (TSV) file using the …​ button.

  4. Follow the steps required by the specific Vault workflow;

  5. Review the details and submit the Job.

2.4. Schedules

The Jobs Schedule view can be accessed using the Jobs > Schedule menu in the top navigation bar. It can be used to manage the Schedules, which automatically add Jobs for execution either at specified time intervals or when a specific event occurs for another Job.

The Jobs Schedule feature requires an Enterprise class license.

2.4.1. Create a Schedule

To create a Schedule, click on the Create Schedule + button at the top-left of the Jobs Schedules view and provide the following information:

  1. Schedule settings:

    • Name: A user-defined name to assign to the Schedule. The Jobs submitted by the Schedule will have the same name.

    • Active: The state of the Schedule. An inactive Schedule will not queue any new Jobs.

    • Description: A user-defined description (optional). The Jobs submitted by the Schedule will have the same description.

    • Conditions: Additional optional conditions that must be met for the Schedule to submit new Jobs:

      • Commence after: Schedule will only add Jobs after this date.

      • Expire after: Schedule will not queue any new Jobs after this date.

      • Skip if X Jobs from this schedule are running: Schedule will not queue new Jobs if there already are X Jobs running which were submitted by this Schedule. After the number of running Jobs drops below X, the Schedule becomes once again eligible to add Jobs.

      • Skip if X Jobs from this schedule are queued: Schedule will not queue new Jobs if there already are X Jobs queued which were submitted by this Schedule. After the number of queued Jobs drops below X, the Schedule becomes once again eligible to add Jobs.

  2. Triggers

    • On a timer: Jobs will be queued at the predefined time interval.

    • On an event: A Job will be queued when any of the specified events occurs and when all of the specified conditions are met for the event in question.

    • On a Webhook Trigger: A Job will be queued when a request with the specified verb and signature key is made to the generated webhook URL.

      • Headers: The response headers returned when the webhook URL is hit

      • HTTP Response code: The HTTP response code returned when the webhook URL is hit

      • Body: The response body returned when the webhook URL is hit

The option Add next-in-line Job to Staging will create the next Job in the Staging queue. This Job will be configured to auto-submit to the Backlog queue according to the specified time interval.
Jobs are queued using the permissions of the user who last modified the Schedule.
For example, to automatically retry failed jobs that were submitted with a High or Highest priority, the Schedule Job Events would contain the event Job Error, and the Event Conditions would have the Submission Mechanism set to Regular Job, and the Priorities set to Highest and High.
When using a Schedule that triggers on an event, it’s recommended to set the Submission Mechanism condition to Regular Job only. Otherwise, it’s possible to create a loop of events, where the Job queued by the Schedule will in turn trigger the Schedule again.
Configuring a Webhook with no signature key allows anyone with knowledge of the Webhook URL to trigger a Job.
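A webhook trigger can then be fired by any HTTP client. A sketch using Python's standard library; the URL and the X-Signature-Key header name are placeholders, not Automate's documented contract - substitute the webhook URL generated by Automate and the verb and signature key configured on the trigger:

```python
import urllib.request

# Placeholders: substitute the generated webhook URL, the configured verb,
# and the signature key. The header name below is an assumption made for
# illustration, not Automate's documented contract.
WEBHOOK_URL = "https://automate.example.com/webhooks/1b2c3d"
SIGNATURE_KEY = "s3cret-key"

req = urllib.request.Request(
    WEBHOOK_URL,
    data=b'{"reason": "nightly ingestion"}',
    method="POST",
    headers={
        "Content-Type": "application/json",
        "X-Signature-Key": SIGNATURE_KEY,  # assumed header name
    },
)
# urllib.request.urlopen(req) would fire the trigger; the scheduler replies
# with the Headers, HTTP Response code, and Body configured on the trigger.
```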
  3. Client / Matter

    • The Client and the Matter for which the Schedule will submit the Job.

When using the trigger On an event, it’s possible to select the Client and Matter Same as Triggering Job. This will have the effect of queueing a new Job for the same Matter as the original Job which triggered the Schedule.
  4. Library / Workflow

    • The Library and the Workflow that the scheduled Job will run.

When using the trigger On an event, it’s possible to select the Library / Workflow Same as Triggering Job. This will have the effect of queueing a new Job with the same Workflow as the original Job which triggered the Schedule. In this case the Job parameters will also be copied from the Triggering Job and cannot be set explicitly in the Schedule.
  5. Job Settings

    • Execution Profile: The Execution Profile of the queued Job, or Unassigned;

    • Resource Pool: The Resource Pool of the queued Job, or Unassigned;

    • Priority: The Priority of the queued Job;

    • Parameters: The parameters of the queued Job;

When using the Library / Workflow Same as Triggering Job, the Parameters cannot be explicitly set and instead will take the same values as the Triggering Job. The Execution Profile, the Resource Pool and the Priority can either be explicitly defined, or can be set to Same as Triggering Job.
  6. Review and confirm the details of the submission.

To edit, delete, deactivate or activate a Schedule, select the Schedule and then click on the dropdown button at the right of the Schedule name at the top of the Schedule panel.

Schedules triggered by a Legal Hold event provide a Legal Hold Schedule Event object to the queued Job. This object can be accessed by all Script Operations in the Job’s Workflow.

Sample object:

{
	type: 'LEGAL_HOLD_ACTIVATED',
	legalHoldId: '5253f18b-3148-4843-a4f1-2c529f76fefc',
	legalHoldName: 'Hold 01',
	custodians: [
		{
			userId: 'b7d37112-7f99-3b86-dbb0-2a58ed7b5e01',
			name: 'jsmith',
			email: 'jsmith@example.com',
			status: 'ON_HOLD',
			holdIssuedDate: 1653060473590,
			platform: 'INTERNAL',
			platformId: 'NONE',
			attributes: {}
		}
	]
}

Sample usage:

# Print object properties
print(scheduleEvent.type)
print(scheduleEvent.legalHoldId)
print(scheduleEvent.legalHoldName)

for custodian in scheduleEvent.custodians:
    print("\n")
    print(custodian.name)
    print(custodian.email)

2.5. Archive

The Jobs Archive view can be accessed using the Jobs > Archive menu in the top navigation bar. It displays jobs that have been archived either manually with the Archive action, or automatically when the archive conditions are met.

By default, a Job is automatically archived 2 weeks after it finished, or when there are more than 100 Jobs in the Finished lane. These settings can be changed by modifying the Scheduler configuration file (see the Automate Installation Guide for details).
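With these defaults, the auto-archive condition for a finished Job can be sketched as follows (function and field names are illustrative):

```python
from datetime import datetime, timedelta

# Defaults from this section: archive a Job 2 weeks after it finished, or
# when the Finished lane holds more than 100 Jobs. Both thresholds are
# configurable in the Scheduler configuration file.
def should_auto_archive(finished_at, now, finished_lane_count,
                        max_age=timedelta(weeks=2), max_finished=100):
    return (now - finished_at) > max_age or finished_lane_count > max_finished

now = datetime(2024, 6, 1)
print(should_auto_archive(datetime(2024, 5, 1), now, 10))    # -> True (older than 2 weeks)
print(should_auto_archive(datetime(2024, 5, 30), now, 250))  # -> True (lane over 100 Jobs)
print(should_auto_archive(datetime(2024, 5, 30), now, 10))   # -> False
```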

3. Legal Hold

The Legal Hold view is used to access the Overview of outstanding Notices, manage Legal Hold Matters and search for Notices.

The Legal Hold feature requires a Corporate-edition license or higher.

The Legal Holds Overview view can be accessed using the Legal Holds > Overview menu in the top navigation bar. The page displays a summary of the number of matters that the user is subject to and that the administrator is managing, as well as the cards for the notices which need to be actioned.

The Legal Holds Matters view can be accessed using the Legal Holds > Matters menu in the top navigation bar. It can be used to add, modify and delete Legal Holds.

To create a Legal Hold, click on the Add + Legal Hold button at the top-left of the Legal Hold Matters view. Creating a Legal Hold has 7 steps:

  1. Select the Client and the Matter.

  2. Configure the Hold and Release notices that will be used when issuing holds and releases to custodians. Optionally, configure Survey and Recurring notices. Survey notices are sent to the custodians when issuing holds, or when the Survey is added if the Legal Hold is already active. Recurring notices are sent on a schedule to remind custodians that they are still on hold.

    • Optionally, provide a Respond by date using either a fixed date or a number of days after the sent date;

    • Optionally, enable Reminders with an interval in days and a Reminder Notice Template;

    • Optionally, enable Escalations with an Escalation Notice Template;

    • Optionally, disable Comments;

    • Optionally, disable Admin Notes.

To disable sending user notices and implicit permissions, unselect Send User Notices in Step 2.
Reminders and Escalations require a Respond by date.
Only Administrators can send or read Admin Notes.
  3. Optionally, configure Triggers. Triggers can be configured to start Jobs when actions are performed on a Legal Hold or a custodian.

Legal Hold Job trigger information:

Trigger Type                 Trigger Description                                      Trigger Scope
On Custodian Hold            Triggers when a custodian is put on hold                 Single Custodian, Multi Custodian
On Custodian Release         Triggers when a custodian is released                    Single Custodian, Multi Custodian
On Custodian Reminder        Triggers when a custodian is sent a reminder notice      Single Custodian, Multi Custodian, Notice
On Custodian Escalation      Triggers when a custodian is sent an escalation notice   Single Custodian, Multi Custodian, Notice
On Custodian Response        Triggers when a custodian submits a targeted response    Single Custodian, Multi Custodian, Notice, Custodian Response
On Matter Custodians Hold    Triggers when the Legal Hold Matter is activated         Multi Custodian
On Matter Custodian Release  Triggers when the Legal Hold Matter is released          Multi Custodian
On Matter Activate           Triggers when the Legal Hold Matter is activated         Matter
On Matter Release            Triggers when the Legal Hold Matter is released          Matter
On Matter Archive            Triggers when the Legal Hold Matter is archived          Matter
On Matter Delete             Triggers when the Legal Hold Matter is deleted           Matter

Only Workflow Templates with a Legal Hold parameter type can be used for Triggers.

Based on the scope of the trigger, additional parameters will be populated in the workflow. The following is a list of scopes followed by parameters in each scope:

All triggers include the default parameters; triggers with the scope Matter contain only the default parameters:

  • {legal_hold_id}: The ID of the Legal Hold, for example 5ce309dc-eef0-49c3-8cc9-028bcc8a1570

  • {legal_hold_name}: The name of the Legal Hold, for example Globex vs. ABC Corp

  • {legal_hold_event_trigger}: The trigger that caused the Job, can be one of the Trigger Types from the table above. For example ON_CUSTODIAN_HOLD

Multi Custodian:

  • {legal_hold_custodian_ids}: The IDs of the custodians affected in JSON format, for example ["cc4b515f-b2aa-4085-871f-1c89295424b6", "27edf9b3-6a2c-4faa-8192-e989835ad3c8", …]

  • {legal_hold_custodian_names}: The names of the custodians affected in JSON format, for example ["", "", …​]

  • {legal_hold_custodian_emails}: The emails of the custodians affected in JSON format, for example ["jon@globex.com", "jane@globex.com", …​]

Single Custodian:

  • {legal_hold_custodian_id}: The ID of the custodian affected, for example cc4b515f-b2aa-4085-871f-1c89295424b6

  • {legal_hold_custodian_name}: The name of the custodian affected, for example John Doe

  • {legal_hold_custodian_email}: The email of the custodian affected, for example jane@globex.com

Notice:

  • {legal_hold_notice_event_id}: The ID of the notice that triggered the Job, for example cd7ecfec-63c2-4aa6-af20-b3d4b520722d

Custodian Response:

  • {legal_hold_notice_event_response}: The response values from the notice in JSON format, for example: {"68de2c78-b605-4085-9938-35b98af295c3": true, "7370ba9a-6d2e-42fa-a3fd-5717192cfe30": "Some data", "f35733b9-174b-428a-a75f-0dc873ad1cec": "C:\\Users\\John"}
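Inside a Script Operation, such a response value can be parsed with a standard JSON parser. A sketch using the example above; the keys are the notice question IDs, and backslashes in paths arrive JSON-escaped:

```python
import json

# Sketch: parse a {legal_hold_notice_event_response} value. The keys are
# notice question IDs; backslashes in paths arrive JSON-escaped.
raw = ('{"68de2c78-b605-4085-9938-35b98af295c3": true, '
       '"7370ba9a-6d2e-42fa-a3fd-5717192cfe30": "Some data", '
       '"f35733b9-174b-428a-a75f-0dc873ad1cec": "C:\\\\Users\\\\John"}')

responses = json.loads(raw)
for question_id, answer in responses.items():
    print(question_id, "->", answer)
```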

  4. Submit the Legal Hold Settings:

    • Fill-in the Name;

    • Optionally, fill-in the Description. This section can be used for documentation purposes and to inform custodians about the Legal Hold;

    • If a Notice was configured with the option for Custodians to upload data, the Data Repository dropdown will be presented and a Data Repository will need to be selected (see Data Repositories);

    • Select an SMTP Server from the dropdown (see SMTP Servers);

    • Optionally, select the Execution Profile to set for triggers from the dropdown

    • Optionally, select the Resource Pool to set for triggers from the dropdown

    • Optionally, select the Priority to set for triggers from the dropdown

    • Adjust the Scheduler URL if needed. This URL is used when sending notification emails to Custodians;

    • Optionally, select Enable single sign-on links to include single sign-on links (SSO) in emails;

    • Fill-in the Parameters or load their values from a tab-separated value (TSV) file using the …​ button.

The Execution Profile, Resource Pool and Priority are only required when creating a Legal Hold with triggers; the options will be visible if the user has at least one trigger configuration defined.
Only users who are from an LDAP or Azure AD Authentication Service (see Authentication Services) with single sign-on links enabled will receive the SSO links.
  5. Select the Administrators of the Legal Hold.

  6. Select the Custodians of the Legal Hold.

To import a list of custodian emails, click the metadataAdd button in between the Available and Selected columns and select the file containing the emails.
Only custodians in the Available column can be imported into the Selected column.
  7. Review and confirm the details.

Recurring notices trigger on a frequency of days or months, anchored to the date each custodian was added to the Legal Hold. For example, if a Legal Hold is created with three custodians and a Recurring notice configured to trigger every 3 months, each of those custodians receives a Recurring notice every 3 months from the date the custodian was issued a hold. Custodians added after the initial hold receive their Recurring notices every 3 months from the date they were added.
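Using a day-based frequency for simplicity (the month-based case follows the same pattern), the per-custodian send dates can be sketched as:

```python
from datetime import date, timedelta

# Recurring notices are anchored to the date each custodian was put on
# hold, so custodians added later get their own independent series of
# send dates. Day-based frequency shown; names are illustrative.
def recurring_notice_dates(hold_date, frequency_days, count):
    return [hold_date + timedelta(days=frequency_days * k)
            for k in range(1, count + 1)]

# Custodian held on 2024-01-01 with a 90-day recurrence:
# notices go out on 2024-03-31, 2024-06-29 and 2024-09-27.
print(recurring_notice_dates(date(2024, 1, 1), 90, 3))
```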

To see the details of a legal hold, click on the Legal Hold Row to open the Legal Hold Panel.

The Legal Hold Panel contains the following sections:

  • Header: The left side contains the legal hold name and action dropdown. The right side contains the legal hold state and icon;

  • Settings: A table view of the settings;

  • Description: The description;

  • Parameters: The parameters along with the supplied values;

  • Notice Configurations: A table view of the notice configurations when in the Draft state;

  • Notices: A table view of all the notices for the legal hold;

  • Trigger Configurations: A table view of all the triggers for the legal hold;

  • Administrators: A table view of all the legal hold administrators;

  • Custodians: A table view of all the legal hold custodians. The following actions can be performed on the custodians:

    • To import and optionally issue holds to a list of custodians, click the metadataAdd button at the top-left of the table view and select the file containing the email addresses;

    • To issue or re-issue a hold, select the custodians in the table view as needed and click the actionHoldAdd button at the top-right of the table view;

    • To release a custodian, select the custodians in the table view as needed and click the actionHoldRelease button at the top-right of the table view.

To issue holds or releases, the legal hold must be in the Active state.
  • Triggered Jobs: A table view of the jobs that have been triggered from a trigger configuration

  • Change Log: The legal hold audit log indicating change events, the time when these events occurred, who performed them, and, where applicable, additional details.

Legal Holds can be in one of the following states:

  • Draft: The Legal Hold is a draft. Administrators can log onto Scheduler and modify the Legal Hold;

  • Active: The Legal Hold is active. Notices and Linked Jobs are actively issued and custodians can log in to Automate and respond to issued notices;

  • Released: The Legal Hold is released. Custodians are released and can log in to Automate to view the responses provided in the notices;

  • Archived: The Legal Hold is archived. Custodians cannot log in to Automate anymore.

  • Deleted: The Legal Hold information is deleted.

Administrators and custodians of a legal hold are given implicit permissions for the duration of the hold.

Table 2. Implicit permissions available for each legal hold state

  • Draft:

    • Administrator: Add and Remove Custodians; Configure Notices, Linked Jobs and Legal Hold Settings

    • Custodian: none

  • Active:

    • Administrator: Add, Remove, Issue Holds, and Release Custodians; Configure Notices, Linked Jobs and Legal Hold Settings; View and Manage Notices

    • Custodian: View and Reply to own Notices

  • Released:

    • Administrator: View and Manage Notices

    • Custodian: View and Reply to own Notices

  • Archived: none

  • Deleted: none

To perform an action on a legal hold, open the Legal Hold Panel by clicking on the corresponding Legal Hold Row, and then click on the dropdown button at the right of the Legal Hold name.

The following actions can be performed on legal holds:

  • Edit: Modify the legal hold;

  • Export: Export selected legal hold notices;

  • Duplicate: Initiate the creation of a legal hold with the same settings as the selected legal hold;

  • Delete: Delete the legal hold;

  • Activate: Activate the legal hold and issue hold and survey notices to all custodians;

  • Release: Release the legal hold and issue release notices to all custodians;

  • Archive: Archive the legal hold.

The following actions send emails to administrators and custodians.

Table 3. Email Triggers

  • Legal Hold State Changed: Administrator

  • Custodians Issued Hold/Release: Custodian

  • Notice Received: Custodian

  • Notice Comment: Administrator and Custodian

  • Notice Admin Note: Administrator

  • Notice Responded: Administrator

The Legal Holds Notices view can be accessed using the Legal Holds > Notices menu in the top navigation bar. It displays a filtered list of user notices.

4. Clients

The Clients view can be accessed using the Clients link in the top navigation bar. It can be used to create, modify and delete Clients and their Matters.

4.1. Clients

Clients are used to organize and track Jobs, and can correspond to external clients, internal clients, departments, or teams.

A Client has a name, a description, and optionally a default Execution Profile and a default Resource Pool.

If a Client is assigned a default Execution Profile or a default Resource Pool value, these values will be automatically selected when submitting a Job for the Client in question. The user still has the option to change these values during Job submission.

When a Client is inactive it will not be visible in the Job submission steps.

To add a new Client, use the Add Client + button at the top-left of the Clients view.

To edit, delete, deactivate or activate a Client, select the Client and then click on the dropdown button at the right of the Client name at the top of the Client panel.

Clients can define Workflow parameters that are added to the parameters already defined in a Workflow. For more information, see Workflow Parameters.

4.2. Matters

Matters are created under Clients, and have a name, a description, and optionally a default Execution Profile and a default Resource Pool. Additionally, Matters can be configured with the Synchronized Jobs option (see Synchronized Jobs).

If a Matter is assigned a default Execution Profile or a default Resource Pool value, these values will be automatically selected when submitting a Job for the Matter in question. The user still has the option to change these values during Job submission.

To deactivate or activate a Matter, switch the toggle at the left of the Matter name in the Client panel.

When a Matter is inactive it will not be visible in the Job submission steps.

To add a new Matter, use the Add + button at the top of the Client panel.

Additionally, to edit, delete, deactivate or activate a Matter, select the Matter and then click on the dropdown button at the right of the Matter name.

Matters can define Workflow parameters that are added to the parameters already defined in a Workflow. For more information, see Workflow Parameters.

4.2.1. Workflow Parameters

Workflow parameters can be created under Client Pools, Clients, Matters and Execution Profiles. The parameters defined add to those already defined in the Workflow.

When the same parameter is defined at more than one of these levels, the values override one another in the following order of precedence: Client Pool parameters take precedence over Execution Profile parameters, Client parameters take precedence over Client Pool parameters, and Matter parameters take precedence over Client parameters. For example, if the parameter {source_location} is defined in both an Execution Profile and a Client, the Client's value for {source_location} takes precedence over the Execution Profile's value.

Workflow parameters only override one another in the context of a Job: the same parameter can be defined in multiple Matters, Clients, or Execution Profiles, and the values used are those from the Matter, Client, and Execution Profile selected for the Job.

Two Client Pools that have one or more Clients in common cannot define the same parameter.
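The precedence rules above amount to a layered merge, lowest precedence first. A minimal sketch (the function name and paths are illustrative, not part of Automate):

```python
def resolve_parameters(execution_profile=None, client_pool=None, client=None, matter=None):
    """Merge workflow parameters in increasing order of precedence:
    Execution Profile < Client Pool < Client < Matter."""
    merged = {}
    for scope in (execution_profile, client_pool, client, matter):
        merged.update(scope or {})  # later scopes overwrite earlier ones
    return merged

# The Client's value for {source_location} wins over the Execution Profile's:
params = resolve_parameters(
    execution_profile={"{source_location}": r"\\server\profile_data"},
    client={"{source_location}": r"\\server\client_data"},
)
print(params["{source_location}"])
```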

4.3. Data Sets

Data Sets are created under Matters, and are used to store data that is then used by Jobs.

To create a Data Set, select a Matter and click the Add + Data Set button in the Matter Pane. After a Data Set is created, its name, description and Data Repository cannot be changed.

There are two types of Data Sets.

4.3.1. Managed Data Sets

Managed Data Sets are used to upload data.

The location of where the data is stored, as well as quotas and file extension restrictions are defined by administrators in the Data Repositories.

To upload data, click on the upload button upload at the top left of the files table, select the files to upload, and start the upload by clicking on the Upload button at the bottom right of the pane.

Files may fail to upload if the length of their filename surpasses the limit set by the file system (MAX_PATH for Windows 10 is defined as 260 characters).

Uploads can be paused, resumed, and cancelled. If an upload is interrupted, for example because the browser was closed or crashed, re-uploading the files that did not complete will automatically resume from the last transmitted offset, provided the resume information is still available on the server.

Uploads that are idle, due to being interrupted or paused, will auto-expire after a set period which by default is 1 hour. For more information on configuring this setting, see the Scheduler Service Settings in the installation guide.

4.3.2. In-Place Data Sets

In-Place Data Sets are used to select existing data.

The location from where existing data can be selected are defined by administrators in the Data Repositories.

4.3.3. Data Sets State

A Data Set can be in one of the following states:

  • Draft: Files and metadata can be uploaded and modified. This is the default state a Data Set is in after creation.

  • Finalized: The contents of the Data Set is frozen. The Data Set can be used when queueing Jobs.

  • Hidden: Hidden from the user when queueing new Jobs.

  • Archived: Prevents new Jobs from using the Data Set.

  • Expired: The Data Set files are deleted.

The Data Repository under which the Data Set is created can be configured to automatically transition the Data Set to the Hidden state after a Job is submitted (to prevent accidentally using the Data Set more than once), to archive the Data Set after a Job completes, and to trigger the expiration of the Data Set after a predefined time.

When a data set expires, all of its files are deleted. This action cannot be reverted.

Data Sets Metadata

Each file in a Data Set can be associated with metadata values, such as custodian information and other labels.

To edit a file's metadata, use the metadata edit button metadataEdit.

To upload file metadata in bulk, first download the existing file list and metadata using the metadata download button metadataDownload, modify the metadata file as needed, and then upload the file using the metadata upload button metadataAdd.

Required Metadata Headers

The Required Metadata Headers can be used to enforce the metadata values that the user must supply before a Data Set can be finalized. Required Metadata Header names, along with an optional regular expression that the values must satisfy, can be defined at the Client Pool, Client, and Matter level.

The resulting Required Metadata Headers are the combination of all of the requirements from the Matter, Client, and Client Pool that a Data Set is associated with. If a specific header is required in more than one place, the supplied value must satisfy all of the regular expressions provided.
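The combined requirement can be sketched as follows (whole-value matching is assumed here; Automate's exact matching semantics may differ, and the header name and patterns are hypothetical):

```python
import re

def satisfies_all(value: str, patterns: list) -> bool:
    """True only if the value matches every required regular expression,
    e.g. when the same header is required at both the Client and Matter level."""
    return all(re.fullmatch(p, value) is not None for p in patterns)

# Hypothetical "Custodian" header required in two places:
requirements = [r"[A-Za-z ]+",  # letters and spaces only (Client level)
                r".{3,}"]       # at least 3 characters (Matter level)
print(satisfies_all("Jane Doe", requirements))  # satisfies both patterns
print(satisfies_all("JD", requirements))        # fails the length pattern
```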

Built-in Metadata Headers

By default, the system automatically populates the Name, Uploaded By, Size (bytes), Size (display) and Hash (MD5) metadata header values. These values cannot be overwritten by the user.

4.4. Client Pools

Clients can be further grouped into Client Pools. A Client can belong to one, multiple or no Client Pools (see Client Pools).

Client Pools can be used to group and assign permissions to the Clients managed by a specific team.

5. Libraries

The Libraries view can be accessed using the Libraries link in the top navigation bar. It can be used to create, modify and delete Libraries and their Workflows.

5.1. Libraries

Libraries are used to organize Workflows, and can correspond to the types of projects on which Jobs are run.

A Library has a name and a description. When a Library is inactive, that Library will not be visible in the Job submission steps.

To add a new Library, use the Add Library + button at the top-left of the Library view.

To edit, delete, deactivate or activate a Library, select the Library and then click on the dropdown button at the right of the Library name at the top of the Library panel.

5.2. Workflows

Workflows are created under Libraries, and have a name, a description, a list of parameters with default values, and a list of operations.

To deactivate or activate a Workflow, switch the toggle at the left of the Workflow name in the Libraries panel.

When a Workflow is inactive it will not be visible in the Job submission steps.

To add a new Workflow, use the Add + button at the top of the Library panel and select one of the following options:

  • Blank Workflow: Create a new workflow starting with a blank canvas. This option starts the Workflow Builder.

  • Template: Build a workflow by starting from an existing template. This option starts the Workflow Builder.

  • Workflow Wizard: Create a workflow that processes and exports data by answering a series of questions.

  • Workflow File: Upload a previously created workflow file.

If a Workflow with the same name already exists, a prompt is shown with the option to update the existing workflow, or to upload the new workflow as a copy.

Additionally, to edit, delete, download, deactivate or activate a Workflow, select the Workflow and then click on the dropdown button at the right of the Workflow name.

The options of a workflow are only visible when the user has the View Sensitive permission on the Workflow.
To download a workflow, the user must have the View Sensitive permission on the Workflow.

5.2.1. Workflow Builder

The Workflow Builder can be used to create new Workflows or to edit existing ones. The builder has 2 panes: the Operations pane, where the list of operations along with their options are defined, and the Details pane where the Workflow name, description and other fields are set.

6. Settings

The Settings view can be accessed using the Settings link in the top navigation bar. It can be used to manage system settings, such as licenses, engines, security policies, as well as user settings related to the user interface.

6.1. Automate License

The Automate License settings tab is used to inspect and update the current deployed Automate License. Automate licenses can either use a License ID and Key mechanism which is validated against the Automate License Service, or an offline license file.

6.2. Network Configuration

The Network Configuration settings tab is used to configure the network settings used by Automate.

6.2.1. Proxy Server

Automate can be configured to use a Proxy Server.

To configure a Proxy Server, use the Update button in the Proxy Server section and provide the following information:

  • Host: The host name of the proxy server.

  • Port: The port number of the proxy server.

  • Non-Proxy Hosts: A list of hosts that should be reached directly, bypassing the proxy (optional).

  • Whitelisted Certificate Fingerprints: A list of SHA-256 certificate fingerprints to be applied for every network connection (optional).

The Non-Proxy Hosts option does not apply to all connections made by Automate. It does not apply to connections made to the RLS or to connections made from the Engine Server.

6.3. Authentication Services

The Authentication Services settings tab is used to define the services that can be used to authenticate users when logging on to Automate. The services can also be used to populate the list of users and computers used in Legal Hold and Collections.

When Automate is accessible using multiple URLs, it’s possible to restrict the Authentication Services that are presented to the user based on the URL used to access Automate using the Restrict Access By URL option in the service.

The Restrict Access By URL should not be used as a security mechanism. An attacker may discover and use the Authentication Services that are available for a specific URL by setting a custom Host or X-Forwarded-Base-Uri header in the HTTP connection, thus emulating accessing Automate using that URL.

6.3.1. Internal Authentication Service

The Internal Authentication Service is used to authenticate users against the credentials stored in the configuration file. This service cannot be modified from the user interface.

6.3.2. Managed Authentication Service

The Managed Authentication Service is used to define a list of users and corresponding email addresses that are eligible for Legal Hold.

Users defined in this service will only be able to log in using the link delivered by email, when a Legal Hold notice event is generated.

This service is only visible if the Legal Holds feature is enabled.

To add a new Managed Authentication Service, use the Add + Managed Authentication Service button and provide the following information:

  • Name: A user-defined name to assign to the Managed Authentication Service.

  • Active: The state of the service. If the service is inactive, it cannot be used for authentication.

  • Description: A user-defined description (optional).

  • Users Eligible for Legal Hold Administration: Allows users defined in this service to be eligible for being set as administrators in Legal Holds.

  • Users Eligible for Legal Hold Custodians: Allows users defined in this service to be eligible for being set as custodians in Legal Holds.

  • Authentication Scope: The scope for which single sign-on authentication links can be used.

  • Expire Links After: The duration during which authentication links are valid.

  • Users: The list of name and email address of the users.

6.3.3. LDAP Authentication Service

An LDAP Authentication Service is used to authenticate users against an LDAP directory service, such as Active Directory.

To add a new LDAP Authentication Service, use the Add + LDAP Authentication Service button and provide the following information:

  • Name: A user-defined name to assign to the LDAP Authentication Service.

  • Active: The state of the service. If the service is inactive, it cannot be used for authentication.

  • Description: A user-defined description (optional).

  • Domain DN: The DN to which users that can be authenticated belong.

  • Host: The host name or IP address of the LDAP directory service.

  • Port: The port of the LDAP directory service, typically 389 for unsecure LDAP and 636 for secure LDAP.

  • Secure LDAP: Require the use of a TLS connection for connecting to the LDAP directory service.

  • Synchronize Users: Synchronize users from the LDAP directory service (optional).

  • Users Eligible for Legal Hold Administration: Allows users defined in this service to be eligible for being set as administrators in Legal Hold (optional).

  • Users Eligible for Legal Hold Custodians: Allows users defined in this service to be eligible for being set as custodians in Legal Hold (optional).

  • User Base DN: The DN from where to synchronize users.

  • User Search Scope: The LDAP search scope to use when performing the search to synchronize users.

  • Synchronize Computers: Synchronize computers from the LDAP directory service (optional).

  • Computer Base DN: The DN from where to synchronize computers.

  • Computer Search Scope: The LDAP search scope to use when performing the search to synchronize computers.

  • Synchronization Interval: The interval for periodically synchronizing users and computers with the above settings.

  • Service Account Name: The account used to perform the search to synchronize users and computers.

  • Service Account Password: The password for the account above.

  • Enable single sign-on links: Allow users to log in with single sign-on links received from Automate emails.

  • Authentication Scope: The scope for which single sign-on links can be used.

  • Expire Links After: The duration during which authentication links are valid.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the LDAP directory service certificate that should be trusted even if the certificate is self-signed (optional).

Users that are not under the Domain DN will not be able to authenticate.

To assign security policies to LDAP groups, use the LDAP group DN, for example CN=Automate Users,CN=Builtin,DC=example,DC=local

6.3.4. UMS Authentication Service

A UMS Authentication Service is used to authenticate users against a Nuix UMS.

To add a new UMS Authentication Service, use the Add + UMS Authentication Service button and provide the following information:

  • Name: A user-defined name to assign to the UMS Authentication Service.

  • Active: The state of the service. If the service is inactive, it cannot be used for authentication.

  • Description: A user-defined description (optional).

  • UMS URL: The URL of the Nuix UMS.

  • Synchronize Users: Synchronize users from the Nuix UMS (optional).

  • Users Eligible for Legal Hold Administration: Allows users defined in this service to be eligible for being set as administrators in Legal Hold (optional).

  • Users Eligible for Legal Hold Custodians: Allows users defined in this service to be eligible for being set as custodians in Legal Hold (optional).

  • Synchronization Interval: The interval for periodically synchronizing users with the above settings.

  • Service Account Name: The account used to perform the search to synchronize users.

  • Service Account Password: The password for the account above.

6.3.5. OIDC Authentication Service

An OIDC Authentication Service is used to authenticate users against an OpenID Connect provider, and to provide access for managing collections from Google Vault.

The authentication flow can be initiated for an OIDC Authentication Service with a link by using the oidcScope query parameter and the URI encoded Name of the service. For example, https://automate.example.com/#/?oidcScope=Example%20OIDC%20Service, where automate.example.com corresponds to the server name on which Automate is deployed and Example%20OIDC%20Service is the URI encoded name of the OIDC Authentication Service.
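The URI encoding of the service name can be produced with any standard library; for example, in Python (the base URL below is the placeholder from the example above):

```python
from urllib.parse import quote

base = "https://automate.example.com/#/"   # placeholder Automate URL
service_name = "Example OIDC Service"      # the Name of the OIDC Authentication Service
login_link = f"{base}?oidcScope={quote(service_name)}"
print(login_link)
# https://automate.example.com/#/?oidcScope=Example%20OIDC%20Service
```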

Configuring an OIDC Authentication Service

To add a new OIDC Authentication Service, use the Add + OIDC Authentication Service button and provide the following information:

  • Name: A user-defined name to assign to the OIDC Authentication Service.

  • Active: The state of the service. If the service is inactive, it cannot be used for authentication.

  • Description: A user-defined description (optional).

  • Well-Known Configuration URI: the URI for the well-known configuration for the OIDC provider.

  • Scope: The OpenID scope, which typically must include openid and can include additional values defined by the OpenID Connect provider.

If the OIDC provider provides refresh tokens, Automate will use them if offline_access is included in the Scope.

  • Username Claim: The name of the username claim from the access token, for example preferred_username.

  • Group Claim: The name of the group claim from the access token (optional).

  • Client ID: The ID of the client application in the OpenID provider settings.

  • Client Secret: The secret of the client application in the OpenID provider settings.

  • Enable Authentication: Determines whether the OIDC service can be used to authenticate users to the Automate application. If this option is not selected, users will not have the option to log in to Automate using this service; however, the service will remain available for other features of the application.

  • Authorization Code Flow: Enable the OIDC Authorization Code Flow. With this option, when an unauthenticated user navigates to the Automate webpage, the user will be redirected to the Identity Provider’s web page. After the authentication is complete, the user will be redirected back to Automate, at the https://automate.example.com/api/v1/users/oidcResponse URL, where automate.example.com corresponds to the server name on which Automate is deployed.

  • JWT Access Token: Enable the use of JWT access token issued by the OIDC service. Use this option when Automate is deployed behind a service which performs the authentication and adds the JWT access token to all requests proxied to Automate.

  • Link Auth With Synchronized Users: When enabled, users authenticated through the OIDC service will have the same ID and display name as users from another service (e.g., Microsoft, Google) that has user synchronization turned on. The match is performed on the email claim.

Configuring Google Workspace as an Authentication Service

If using Google Workspace for authentication, configure the Automate access in the Google API Console by taking the following steps:

  1. Log in to the Google API Console at https://console.developers.google.com/

  2. In the OAuth consent screen tab, create a consent screen with the following settings:

    1. Application type: Internal

    2. Application name: Automate

    3. Scopes for Google APIs: email, profile, openid, (optionally) offline_access to support refreshing tokens

    4. Authorized domains: automate.example.com, where automate.example.com corresponds to the server name on which Automate is deployed

    5. Application Homepage link: https://automate.example.com

  3. In the Credentials tab, select Create Credentials and choose the type OAuth client ID

  4. Set the Application type to Web application and provide a name

  5. Set the Authorised JavaScript origins to https://automate.example.com

  6. Set the Authorised redirect URIs to https://automate.example.com/api/v1/users/oidcResponse

  7. Take note of the Client ID and Client Secret for the OIDC Authentication service.

The Well-Known Configuration URI for Google Workspace is https://accounts.google.com/.well-known/openid-configuration.

Optionally, to allow managing Google Vault:

To perform collections from Google Vault, a user account with the Manage Matters, Manage Searches and Manage Exports Google Vault privileges is required. To perform holds, the Manage Holds privilege is required.
Automate will use the permissions of a user logged in to the Google Vault Third-Party Service to query for Google users, groups, organizational units, drives and chat spaces.
To query for Google Chat spaces, the Google Chat App must be configured online in Google Cloud. See https://developers.google.com/workspace/chat/configure-chat-api for more details.

Configuring Relativity as an Authentication Service

Take the following steps in Relativity to prepare access for Automate:

  1. Log in to Relativity as an administrator

  2. Open the Authentication > OAuth2 Client page

  3. Select the New OAuth2 Client to create an OIDC client for Automate, with the following settings:

    1. Name: Automate

    2. Enabled: Yes

    3. Flow Grant Type: Code

    4. Redirect URIs: https://automate.example.com/api/v1/users/oidcResponse, where https://automate.example.com corresponds to the URL used to access Automate.

    5. Access Token Lifetime: 43200

Then, take note of the Client Id and Client Secret.

The Access Token Lifetime value 43200 signifies that Relativity will issue tokens to Automate which are valid for 30 days. Because the Relativity OAuth2 Client does not support refreshing the tokens, a long enough value for the token lifetime must be used. The tokens are issued when the user logs in to Automate with the Relativity credentials and are used in Jobs containing Relativity operations. If the token expires before the Job finishes, the Relativity operations in the Job will fail.

Finally, in Automate, add a new OIDC Authentication Service using the Add + OIDC Authentication Service button, as described in Configuring an OIDC Authentication Service, supplying the Client Id and Client Secret noted above.

Configuring a Generic OpenID Connect Authentication Service

If using another OpenID Connect provider such as OKTA, configure the Automate access by taking the following steps:

  1. Allowed grant types: Client acting on behalf of a user - Authorization Code

  2. Login redirect URI: https://automate.example.com/api/v1/users/oidcResponse, where automate.example.com corresponds to the server name on which Automate is deployed

  3. Logout redirect URI: https://automate.example.com/api/v1/users/oidcResponse

  4. Take note of the Client ID and Client secret for the OIDC Authentication service.

  5. Automate will call the OIDC authorization endpoint with the following arguments:

    1. response_type: code

    2. response_mode: form_post

    3. scope: The scope set in the configuration

    4. redirect_uri: The login redirect URI

    5. client_id: The client ID

    6. state: Internally managed value

    7. nonce: Internally managed value

  6. Automate extracts the OIDC username and group from the claim in the access token. The names of these claims are defined in the usernameClaim and groupClaim settings.

  7. When logging out, Automate calls the OIDC end-session endpoint with the following arguments:

    1. id_token_hint: The ID token

    2. post_logout_redirect_uri: The logout redirect URI
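Putting the authorization-endpoint arguments above together, the request Automate issues can be sketched as a URL. All values below are placeholders; Automate manages state and nonce internally:

```python
from urllib.parse import urlencode

# Placeholder identity-provider endpoint and client values:
authorize_endpoint = "https://idp.example.com/oauth2/authorize"
query = urlencode({
    "response_type": "code",          # Authorization Code Flow
    "response_mode": "form_post",
    "scope": "openid profile",        # the scope set in the configuration
    "redirect_uri": "https://automate.example.com/api/v1/users/oidcResponse",
    "client_id": "my-client-id",
    "state": "opaque-state-value",    # internally managed by Automate
    "nonce": "opaque-nonce-value",    # internally managed by Automate
})
print(f"{authorize_endpoint}?{query}")
```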

6.3.6. Microsoft Authentication Service

A Microsoft Authentication Service is used to authenticate users against an Azure AD, to provide access to synchronizing users from Azure AD for legal holds, and to provide access for managing collections from Microsoft Purview.

To add a new Microsoft Authentication Service, use the Add + Microsoft Authentication Service button and provide the following information:

  • Name: A user-defined name to assign to the Microsoft Authentication Service.

  • Active: The state of the service. If the service is inactive, it cannot be used for authentication.

  • Description: A user-defined description (optional).

  • Environment: The commercial or government environment of Azure.

  • OAuth Version: Azure AD (v1) or Azure AD (v2).

  • Tenant: The domain name or the ID of the Azure AD tenant

  • Client ID: The ID of the client application from Azure AD

  • Client Secret: The secret of the client application from Azure AD

  • Enable Authentication: Select this option to enable this service to be used for authenticating to Automate.

  • Synchronize Users: Select this option to synchronize users from the Azure AD service and render them available for Legal Hold notifications.

  • Include Guest Users: Include guest users when synchronizing users.

  • Users Eligible for Legal Hold Administration: Allows users defined in this service to be eligible for being set as administrators in Legal Hold (optional).

  • Users Eligible for Legal Hold Custodians: Allows users defined in this service to be eligible for being set as custodians in Legal Hold (optional).

  • Synchronization Interval: The time interval at which the synchronization should occur.

To import a list of mailboxes corresponding to deleted users to the Microsoft Authentication Service, for use with Legal Hold and collection, use the Upload CSV function, with a CSV file produced by the following PowerShell commands:

Install-Module -Name ExchangeOnlineManagement
Import-Module ExchangeOnlineManagement
Connect-ExchangeOnline

Get-Mailbox -InactiveMailboxOnly | Select-Object -Property ExchangeGuid,ExternalDirectoryObjectId,UserPrincipalName,DisplayName,PrimarySmtpAddress | Export-CSV -NoTypeInformation -Path InactiveMailboxes.csv

Configuring Microsoft Authentication for an Authentication Service

Take the following steps in Azure AD to prepare access for Automate:

  1. Log in to the Microsoft Azure Portal at https://portal.azure.com/

  2. Open the Azure Active Directory resource

  3. Select the App registrations panel

  4. Create a New registration

  5. Set the application name to Automate, the Supported account types to Accounts in this organizational directory only and the Redirect URI to Web https://automate.example.com/api/v1/users/oidcResponse, where automate.example.com corresponds to the server name on which Automate is deployed

  6. Register the app

  7. From the Overview pane, take note of the Directory (tenant) ID

  8. Take note of the Application (client) ID

  9. In the Certificates & secrets pane, create a New client secret

  10. Set the secret description to Automate and set the expiration to Never

  11. Take note of the client secret value

  12. Open the API permissions pane

  13. Add a permission from the Microsoft Graph. From the Delegated permissions section, select the permission User.Read

  14. Optionally, to allow querying user profile pictures and to synchronize users, add the Application permission User.Read.All.

  15. Optionally, to allow managing Microsoft Purview, add the following additional permissions:

    1. Delegated permission eDiscovery.ReadWrite.All.

    2. Delegated permission Directory.Read.All.

    3. Delegated permission Sites.Read.All.

    4. Application permission Team.ReadBasic.All. If this is not granted, Automate will not be able to list the teams a user is associated with and not a direct member of.

    5. Application permission Directory.Read.All.

    6. Application permission Sites.Read.All. This permission is optional. If this is not granted, Automate will attempt to list the SharePoint sites in the organization with the Delegated permission of a user logged in to the Purview Third-Party Service.

  16. Optionally, to allow SMTP to authenticate and send emails, add the following additional permission:

    1. Delegated permission Mail.Send.

  17. Optionally, to allow downloading exports from Microsoft Purview, perform the following actions:

    1. Log in to Azure AD with PowerShell with the following command: Connect-Graph -Scopes "Application.ReadWrite.All"

    2. Create a Service Principal for the MicrosoftPurviewEDiscovery application by running the following PowerShell command: New-MgServicePrincipal -AppId b26e684c-5068-4120-a679-64a5d2c909d9

    3. Add a new permission: APIs my organization uses > MicrosoftPurviewEDiscovery > Delegated permission > eDiscovery.Download.Read

  18. From the API permissions, Grant admin consent

  19. Optionally, to allow logging in from Power BI with a Microsoft account, open the Expose an API pane and add a scope with the following settings:

    1. Application ID URI: https://automate.example.com, where automate.example.com corresponds to the server name on which Automate is deployed

    2. Scope name: user_impersonation

    3. Who can consent?: Admins and Users

    4. Admin consent display name: Impersonate the User

    5. Admin consent description: Allows the app to access Automate on behalf of the user

    6. User consent display name: Impersonate the User

    7. User consent description: Allows the app to access Automate on behalf of the user

All users defined in Azure AD will be able to log in to Automate. The access level of each user is determined by the security policies defined in the Automate web page, in the Settings tab.

To perform collections from Microsoft Purview, a user account with the eDiscovery Manager role is required.

Users from the Managed and LDAP services can sign in to Automate using Single Sign-On (SSO) links. An SSO link can only be used once and is valid for a limited time period or until Automate shuts down. If enabled, users will receive an SSO link in all email communications from Automate, for the specified scope.

If a link expires (because Automate was shut down or the link timed out), the user will be prompted to request a new link, which is sent to their associated email address. SSO links can be refreshed as long as both the scope and the authentication service allow it.

Users from the Microsoft Authentication service are redirected to the configured Azure AD authentication page when using SSO links.
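
For illustration only, the one-time and expiring behavior of SSO links described above can be sketched in Python. This is a hypothetical sketch, not Automate's implementation; the class name and token format are assumptions. Holding the tokens in memory mirrors the documented behavior of links becoming invalid when Automate shuts down.

```python
import secrets
import time

class SsoLinks:
    """Hypothetical sketch of one-time, expiring SSO link tokens."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._tokens = {}  # token -> (user, expiry); in-memory, so a restart invalidates all links

    def issue(self, user: str) -> str:
        """Create a fresh, unguessable token for the given user."""
        token = secrets.token_urlsafe(32)
        self._tokens[token] = (user, time.monotonic() + self.ttl)
        return token

    def redeem(self, token: str):
        """Return the user if the token is valid; a token can be redeemed only once."""
        entry = self._tokens.pop(token, None)  # pop enforces single use
        if entry is None:
            return None
        user, expires = entry
        return user if time.monotonic() <= expires else None
```

A real implementation would also bind each token to the scope for which the link was issued.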

6.4. Nuix License Sources

The Nuix License Sources settings tab is used to define the licenses to be used by the Nuix Engines managed by Automate.

Automate supports three types of Nuix License Sources.

6.4.1. Nuix Management Server

A Nuix Management Server (NMS) is the classical way to assign Nuix licenses in an environment with multiple Nuix servers or workstations.

To add a new NMS to the Automate configuration, use the Add + Nuix Management Server button and provide the following information:

  • Name: A user-defined name to assign to the NMS.

  • Description: A user-defined description (optional).

  • Filter: A text filter to select licenses of a certain type from the NMS (optional). If a filter value is provided, a license will be selected from the NMS only if the short name, full name or description of the license contains the text provided in the filter, for example, enterprise-workstation. The filter is case insensitive.

  • Server Name: The host name or the IP address of the NMS.

  • Server Port: The port on which the NMS is configured to listen, by default 27443.

  • Username: The username from the NMS under which Automate will acquire licenses.

  • Password: The password for the Username above.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the NMS certificate that should be trusted even if the certificate is self-signed (optional).

By default, NMS uses a self-signed certificate. In this situation, Automate is not able to validate the identity of the NMS and a Whitelisted Certificate Fingerprint must be provided; otherwise, Engines will not be able to acquire licenses from this NMS.
Automate will list the certificate fingerprint if the name listed in the certificate matches the server name. Alternatively, an incorrect certificate fingerprint can be provided temporarily, for example 0000, to have Automate disable the name validation and provide the detected certificate fingerprint value in the error message.
The following PowerShell code can be used to get the SHA-256 certificate fingerprint of a server, where 127.0.0.1 is the IP address of the NMS:
$ServerName = "127.0.0.1"
$Port = 27443

$Certificate = $null
$TcpClient = New-Object -TypeName System.Net.Sockets.TcpClient
try {

    $TcpClient.Connect($ServerName, $Port)
    $TcpStream = $TcpClient.GetStream()

    $Callback = { param($sender, $cert, $chain, $errors) return $true }

    $SslStream = New-Object -TypeName System.Net.Security.SslStream -ArgumentList @($TcpStream, $true, $Callback)
    try {

        $SslStream.AuthenticateAsClient('')
        $Certificate = $SslStream.RemoteCertificate

    } finally {
        $SslStream.Dispose()
    }

} finally {
    $TcpClient.Dispose()
}

if ($Certificate) {
    if ($Certificate -isnot [System.Security.Cryptography.X509Certificates.X509Certificate2]) {
        $Certificate = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList $Certificate
    }
    Write-Output $Certificate.GetCertHashString("SHA-256")
}
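
For reference, the same fingerprint can also be obtained with Python's standard library; this is an illustrative alternative to the PowerShell snippet above, not part of Automate. Certificate validation is disabled on purpose, since the NMS certificate may be self-signed.

```python
import hashlib
import socket
import ssl

def cert_sha256_fingerprint(der_bytes: bytes) -> str:
    """Return the SHA-256 fingerprint of a DER-encoded certificate as uppercase hex."""
    return hashlib.sha256(der_bytes).hexdigest().upper()

def fetch_server_certificate(host: str, port: int) -> bytes:
    """Retrieve the server's certificate without validating it (it may be self-signed)."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

# Example usage:
# print(cert_sha256_fingerprint(fetch_server_certificate("127.0.0.1", 27443)))
```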

6.4.2. Cloud License Server

A Cloud License Server (CLS) is a cloud service managed by Nuix that can be used to acquire licenses.

To add a new CLS to the Automate configuration, use the Add + Cloud License Server button and provide the following information:

  • Name: A user-defined name to assign to the CLS.

  • Description: A user-defined description (optional).

  • Filter: A text filter to select licenses of a certain type from the CLS (optional). If a filter value is provided, a license will be selected from the CLS only if the short name, full name or description of the license contains the text provided in the filter, for example, enterprise-workstation. The filter is case insensitive.

  • Username: The username for the CLS account under which Automate will acquire licenses.

  • Password: The password for the Username above.
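
The Filter behavior described above, for both NMS and CLS sources, amounts to a case-insensitive substring match against the license fields. A minimal sketch, with an illustrative function name:

```python
def license_matches(filter_text, short_name, full_name, description):
    """Select a license only if one of its fields contains the filter text
    (case-insensitive). An empty filter matches every license."""
    if not filter_text:
        return True
    needle = filter_text.lower()
    return any(needle in (field or "").lower()
               for field in (short_name, full_name, description))
```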

6.4.3. Nuix Dongle

A Nuix Dongle is a physical USB device that stores Nuix Licenses and is typically used when using Nuix Workstation or the Nuix Engine on a single server or workstation.

To add a new Nuix Dongle to the Automate configuration, use the Add + Nuix Dongle button and provide the following information:

  • Name: A user-defined name to assign to the dongle.

  • Description: A user-defined description (optional).

  • Filter: A text filter to select licenses of a certain type from the Nuix Dongle (optional).

The Nuix Dongle must be connected to the server of the Engine that is using it.

6.5. Engine Servers

The Engine Servers settings tab can be used to define the servers that will host Engines.

Prior to adding an Engine Server to the Automate configuration, the Automate Engine Server component must be deployed and configured on the server in question. See the Automate Installation Guide for details on how to install and configure Engine Servers.

To add a new Engine Server to the Automate configuration, use the Add + Engine Server button and provide the following information:

  • Name: A user-defined name to assign to the Engine Server.

  • URL: The URL that can be used to reach the server, for example https://localhost:444

  • Description: A user-defined description (optional).

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the Engine Server certificate that should be trusted even if the certificate is self-signed (optional).

By default, the Engine Server uses a self-signed certificate. In this situation, Automate is not able to validate the identity of the Engine Server and a Whitelisted Certificate Fingerprint must be provided.
Automate will list the certificate fingerprint if the name listed in the certificate matches the server name. Alternatively, an incorrect certificate fingerprint can be provided temporarily, for example 0000, to have Automate disable the name validation and provide the detected certificate fingerprint value in the error message.

6.6. Engines

The Engines settings tab can be used to define the Engine instances that will run Jobs. An Engine can only run one Automate Job at a time. To run multiple Jobs at the same time, create multiple Engines on one or more Engine Servers, based on the available hardware resources.

To add a new Engine to the Automate configuration, use the Add + Engine button and provide the following information:

  • Name: A user-defined name to assign to the Engine.

  • Server: The Engine Server on which this Engine will run.

  • Execution Mode: The mode in which the Engine will run.

    • Automate: The Engine only runs Native workflows and does not consume a Nuix license. These are workflows which only use native Automate operations, such as configuration operations and external commands.

    • Nuix Engine: The Engine runs both Nuix and Native workflows. The Engine only consumes a Nuix License when running Nuix workflows.

The Automate mode requires the Automate Premium license edition.
  • Nuix License Source: The source from which this Engine will acquire licenses.

  • Initialization Execution Profile: The Execution Profile used to initialize the Engine.

  • Priority: The priority of this Engine in Resource Pools. When a Job starts, it is assigned to the first available Engine (i.e. not running another Job) with the highest priority from that Resource Pool.

When a Resource Pool contains available Native and Nuix Engines and a Job does not require a Nuix license, the Job will be assigned to the Native Engine with the highest priority, regardless of the priority of the Nuix Engines.
  • Target Workers: The number of Nuix workers to attempt to acquire a license for, if available.

  • Min Workers: The minimum number of Nuix workers to acquire a license for. If the number of available workers in the Nuix License Source is lower than this value, the Engine will not initialize and will remain in an error state until the minimum number of workers becomes available.
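
The Engine selection rules above can be sketched as follows. This is an illustrative model, not Automate's API: it picks the highest-priority idle Engine, preferring available Native Engines whenever the Job does not require a Nuix license.

```python
from dataclasses import dataclass

@dataclass
class Engine:
    name: str
    priority: int
    mode: str          # "automate" (Native-only) or "nuix" (runs both workflow kinds)
    busy: bool = False

def assign_engine(pool, needs_nuix_license):
    """Return the Engine that would receive the Job, or None if no suitable Engine is idle."""
    idle = [e for e in pool if not e.busy]
    if needs_nuix_license:
        candidates = [e for e in idle if e.mode == "nuix"]
    else:
        native = [e for e in idle if e.mode == "automate"]
        candidates = native or idle  # Native Engines win regardless of Nuix Engine priorities
    return max(candidates, key=lambda e: e.priority, default=None)
```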

6.7. Resource Pools

The Resource Pools settings tab can be used to group Engines. Jobs are assigned to Resource Pools and run on the first available Engine with the highest priority from the Resource Pool.

Automate supports Resource Pools which are local or cloud-based (AWS and Azure).

6.7.1. Local Resource Pool

A Local Resource Pool groups Engines which are manually managed and typically run on local servers.

Additionally, Remote Engines can be configured to join Jobs running in the Resource Pool, allowing for load distribution of a single Job amongst multiple Engines.

Remote Engines only get initialized when a Job is running an operation which requires workers, for example Add Evidence, OCR or Legal Export. After the operation requiring workers is complete, the Remote Engines are spun down, and are available to join other Jobs running in the same Resource Pool.

The Remote Engines feature uses the Nuix Worker Broker and Agent mechanism. A Worker Broker is set up for each main Engine running a Job, using the default IP address of that server. If the Engine Server has multiple network interfaces, the IP address and port range to use for the Worker Brokers can be specified in the Engine Server configuration file (see the Automate Installation Guide for details).

To add a new Local Resource Pool to the Automate configuration, use the Add + Local Resource Pool button and provide the following information:

  • Name: A user-defined name to assign to the Resource Pool.

  • Active: The state of the Resource Pool. An inactive Local Resource Pool will not start any new Jobs.

  • Description: A user-defined description (optional).

  • Engines: The list of Engines which are part of the Resource Pool and which will run Jobs.

  • Remote Workers: The list of Engines which will join Jobs as Remote Workers.

An Engine can be part of multiple Resource Pools.

6.7.2. AWS Resource Pool

An AWS Resource Pool automatically manages and runs Engine Servers and Engines in the Amazon AWS cloud environment.

Prior to adding an AWS Resource Pool to the Automate configuration, the AWS environment needs to be configured with either one or multiple EC2 Instances or an EC2 Launch Template, using the following steps:

  1. Create a new EC2 instance

  2. Deploy and configure the Automate Engine Server according to the Automate Installation Guide, similarly to a local deployment.

  3. Validate the deployment by manually adding an Engine Server with the URL of the cloud instance on port 443 and then adding an Engine on that server.

  4. Resolve any certificate and Nuix License Source issues.

  5. Remove the manually added Engine and Engine Server corresponding to the cloud instance.

  6. To run Automate Jobs on this instance only, take note of the Instance ID and skip the remaining steps. Optionally, the EC2 instance can be shut down.

  7. To run Automate Jobs on instances which are dynamically created by EC2, create an EC2 Launch Template from the EC2 instance previously configured. For more information on the Launch Templates, see https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchTemplates.html

When Automate is starting a Job that is assigned to an AWS Resource Pool using a Launch Template, it first scans for idle EC2 instances started with the Launch Template. If an EC2 instance is found, the Job is assigned to it. Otherwise, if the number of active EC2 instances does not exceed the Max Concurrent Instances value, a new EC2 instance is spawned and assigned the Job.
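
The scheduling rule above can be sketched as follows, where launch stands in for a hypothetical call that spawns a new EC2 instance from the Launch Template:

```python
def acquire_instance(idle_instances, active_count, max_concurrent, launch):
    """Reuse an idle instance first; otherwise launch a new one while under the cap."""
    if idle_instances:
        return idle_instances[0]
    if active_count < max_concurrent:  # stay within Max Concurrent Instances
        return launch()
    return None  # the Job waits in the Backlog until capacity frees up
```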

To add a new AWS Resource Pool to the Automate configuration, use the Add + AWS Resource Pool button and provide the following information:

  • Name: A user-defined name to assign to the Resource Pool.

  • Active: The state of the Resource Pool. An inactive AWS Resource Pool will not start any new Jobs, and will not manage the state of EC2 instances (i.e. it will not shut down or terminate the instance after the running Job finishes, if applicable).

  • Description: A user-defined description (optional).

  • Access Key: The Access Key of the account that will be used to connect to AWS. For details on obtaining an Access Key, see https://aws.amazon.com/premiumsupport/knowledge-center/create-access-key

  • Secret Key: The Secret Key corresponding to the Access Key above.

  • Region: The AWS Region in which the EC2 instance or Launch Template was created.

  • Engines: The settings used to manage the instances that run the Job:

    • Nuix License Source: The Nuix License Source from which Engines will acquire licenses.

    • Target Workers: The number of Nuix workers to attempt to acquire a license for, if available.

    • Min Workers: The minimum number of Nuix workers to acquire a license for. If the number of available workers in the Nuix License Source is lower than this value, then the Engine will not initialize and will remain in an error state until the minimum number of Nuix workers becomes available.

    • Instance Idle Action: The action to perform on the EC2 instance when a Job finishes and no other Jobs from the Backlog are assigned to the instance.

    • Force Idle Action: This setting will force an EC2 instance to Stop or Terminate when a Job finishes, even if other Jobs from the Backlog are assigned to run on the instance.

    • Virtual Machine Source: The mechanism used to find the EC2 instances that Automate will manage.

    • Launch Template ID: Dynamically spawn EC2 instances.

      • Launch Template ID: The ID of the Launch Template that will be used to spawn new instances.

      • Max Concurrent Instances: The maximum number of EC2 instances running at the same time using the Launch Template.

    • Instance IDs: Find EC2 instances by IDs.

      • Instance IDs: The IDs of the pre-existing and configured EC2 instances to manage.

    • Tags: Find EC2 instances by tags.

      • Tag Name: The name of the tag in EC2.

      • Tag Value: The value of the tag in EC2.

  • Remote Workers: The settings used to manage the instances that run the workers which are joined to the Jobs. These are similar to the Engines settings. Additionally, the following settings are available:

    • Don’t Trigger Idle Action Before First Job: This setting will prevent Automate from stopping or deleting Remote Worker instances before a Job runs on the Resource Pool.

    • Don’t Trigger Idle Action for Non-Worker Operations: This setting will prevent Automate from stopping or deleting Remote Worker instances while a Job is running on the Resource Pool, even if the Job does not currently require remote workers.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the Engine Server certificate that should be trusted even if the certificate is self-signed (optional).

Selecting the Terminate idle action will permanently delete the EC2 instance.
An instance can be used either as a main Engine, or for Remote Workers, but not for both roles at the same time.

6.7.3. Azure Resource Pool

An Azure Resource Pool automatically manages Engine Servers and Engines in the Microsoft Azure cloud environment.

Prior to adding an Azure Resource Pool to the Automate configuration, the Azure environment needs to be configured with one or multiple virtual machines (VMs), using the following steps:

  1. Create a new VM

  2. Deploy and configure the Automate Engine Server according to the Automate Installation Guide, similarly to a local deployment.

  3. Validate the deployment by manually adding an Engine Server with the URL of the VM on port 443 and then adding an Engine on that server.

  4. Resolve any certificate and Nuix License Source issues.

  5. Remove the manually added Engine and Engine Server corresponding to the cloud VM.

  6. Optionally, the VM can be shut down.

  7. Register Automate in Azure AD using the Azure Command-Line Interface (CLI), by running the following command:

az ad sp create-for-rbac --name NuixAutomate --role "Contributor" --scope "/subscriptions/11111111-1111-1111-1111-111111111111"

where 11111111-1111-1111-1111-111111111111 is the ID of the Azure subscription.

  8. Take note of the appId, password, and tenant values returned by the command above.

To add a new Azure Resource Pool to the Automate configuration, use the Add + Azure Resource Pool button and provide the following information:

  • Name: A user-defined name to assign to the Resource Pool.

  • Active: The state of the Resource Pool. An inactive Azure Resource Pool will not start any new Jobs and will not manage the state of VMs (i.e. it will not shut down or terminate the VM after the running Job finishes, if applicable).

  • Description: A user-defined description (optional).

  • Environment: The commercial or government environment of Azure.

  • Tenant: The tenant value obtained using the Azure CLI.

  • Key: The password value obtained using the Azure CLI.

  • App ID: The appId value obtained using the Azure CLI.

  • Subscription ID: The Azure Subscription to connect to, if the account provided has access to multiple Azure Subscriptions (optional).

  • Engines: The settings used to manage the instances that run the Job:

    • Nuix License Source: The Nuix License Source from which Engines will acquire licenses.

    • Target Workers: The number of Nuix workers to attempt to acquire a license for, if available.

    • Min Workers: The minimum number of Nuix workers to acquire a license for. If the number of available workers in the Nuix License Source is lower than this value, then the Engine will not initialize and will remain in an error state until the minimum number of Nuix workers becomes available.

    • Instance Idle Action: The action to perform on the VM when a Job finishes and no other Jobs from the Backlog are assigned to the VM.

    • Force Idle Action: This setting will force the VM to Stop or Terminate when a Job finishes, even if other Jobs from the Backlog are assigned to run on the instance.

    • Virtual Machine Source: The mechanism used to find the Azure VMs that Automate will manage.

    • VM Names: Find Azure VMs by name

      • VM Names: The names of the pre-existing and configured Azure VMs to manage.

    • Custom VM Image: Dynamically spawn Azure VMs.

      • Region: The Azure region to spawn the VM in.

      • Resource Group ID: The Azure Resource Group ID/name to spawn the VM in.

      • Network Name: The name of a pre-existing Azure network to associate the VM to.

      • Network Subnet Name: The name of a pre-existing Azure network subnet to associate the VM to, for example default.

      • Custom VM Image ID: The ID/name of the Azure custom image to use for spawning the VM. When creating a custom image, first generalize the original VM from which the image is created.

      • Max Concurrent Instances: The maximum number of Azure VMs running at the same time using the Custom VM Image.

      • Custom VM Username: The admin username to set on the VM.

      • Custom VM Password: The admin password to set on the VM.

      • VM Type: Create the VM as Spot or On-Demand.

      • VM Size: The size characteristics of the VM in Azure.

      • Disk Size: The size of the OS disk in GB.

    • Tags: Find Azure VMs by tags.

      • Tag Name: The name of the tag in Azure.

      • Tag Value: The value of the tag in Azure.

  • Remote Workers: The settings used to manage the instances that run the workers which are joined to the Jobs. These are similar to the Engines settings. Additionally, the following settings are available:

    • Don’t Trigger Idle Action Before First Job: This setting will prevent Automate from stopping or deleting Remote Worker instances before a Job runs on the Resource Pool.

    • Don’t Trigger Idle Action for Non-Worker Operations: This setting will prevent Automate from stopping or deleting Remote Worker instances while a Job is running on the Resource Pool, even if the Job does not require remote workers currently.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the Engine Server certificate that should be trusted even if the certificate is self-signed (optional).

Selecting the Delete idle action will permanently delete the Azure VM, its OS disk and associated network interface.

6.8. Notification Rules

The Notification Rules settings tab can be used to define rules that trigger notifications when certain events occur, for example, a Job being queued. Notification Rules are added to Execution Profiles which in turn are assigned to Jobs.

For notifications to be sent, the Notification Rule must be added to the Execution Profile used by the Job.

Notifications can be sent by Email using an Email Notification Rule or through collaboration platforms, such as Microsoft Teams, Slack or Discord, using a Webhook Notification Rule.

The Test Rule button and the Test dropdown option can be used to test the settings of the Notification Rule and will send out a test message.

6.8.1. Email Notification Rule

An Email Notification Rule will send out an email when the rule is triggered.

To add a new Email Notification Rule, use the Add + Email Notification Rule button and provide the following information:

  • Name: A user-defined name to assign to the rule.

  • Description: A user-defined description (optional).

  • SMTP Server: The IP address or the name of the SMTP server.

  • SMTP Port: The port of the SMTP server, typically 25 for unauthenticated access and 465, or 587, for authenticated access.

  • SMTP Authentication: If checked, Automate will authenticate to the SMTP server with the account provided.

  • SMTP Username: The username to authenticate to the SMTP server with.

  • SMTP Password: The password for the Username above.

  • Transport Layer Security: If checked, access to the SMTP Server will be TLS-encrypted. Otherwise, access to the SMTP server will be in clear-text.

  • HTML Format: If checked, emails will be sent as HTML with formatting. Otherwise, emails will be sent in text format.

  • From: The email address from which to send the email.

  • To: The email address to which to send the email.

  • CC: The email address to CC in the email.

  • Triggers: The events for which to trigger the Notification Rule.

The To and CC fields can contain a single email address or multiple addresses separated by a semi-colon, for example, jsmith@example.com; fmatters@example.com. They can also use the {job_submitted_by} parameter. Other parameters cannot be used.
If the Automate usernames are not full email addresses, for example, jsmith, append the respective email domain name as a suffix to the {job_submitted_by} parameter, for example, {job_submitted_by}@example.com.
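
For illustration, the settings and the recipient rule above map onto Python's standard smtplib as sketched below; the parameter names are illustrative, not Automate's. Note that port 465 normally uses implicit TLS (smtplib.SMTP_SSL), whereas this sketch shows the STARTTLS variant typically used on port 587.

```python
import smtplib
from email.message import EmailMessage

def parse_recipients(value):
    """Split a To/CC value on semi-colons, as described above."""
    return [addr.strip() for addr in value.split(";") if addr.strip()]

def send_notification(server, port, username, password, use_tls,
                      sender, to, cc, subject, body):
    """Send a plain-text notification email (illustrative parameter names)."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(parse_recipients(to))
    if cc:
        msg["Cc"] = ", ".join(parse_recipients(cc))
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP(server, port) as smtp:
        if use_tls:
            smtp.starttls()          # upgrade the connection before authenticating
        if username:
            smtp.login(username, password)
        smtp.send_message(msg)
```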

6.8.2. Webhook Notification Rule

A Webhook Notification Rule will send out a message to Microsoft Teams, Slack or Discord when the rule is triggered.

To add a new Webhook Rule, use the Add + Webhook Notification Rule button and provide the following information:

  • Name: A user-defined name to assign to the rule.

  • Description: A user-defined description (optional).

  • Platform: The collaboration platform to which to send the notification.

  • Webhook URL: The URL of the webhook configured in the collaboration platform.

  • Triggers: The events for which to trigger the Notification Rule.
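
For illustration, a webhook notification is an HTTP POST of a small JSON payload to the Webhook URL. The payload shapes below follow common conventions of the platforms named above; treat them as assumptions to verify against each platform's webhook documentation.

```python
import json
import urllib.request

def build_payload(platform, text):
    """Simple text payload: Discord expects "content"; Slack and Teams accept "text"."""
    key = "content" if platform == "discord" else "text"
    return {key: text}

def send_webhook(url, platform, text):
    """POST the payload to the webhook URL and return the HTTP status code."""
    request = urllib.request.Request(
        url,
        data=json.dumps(build_payload(platform, text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```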

6.9. File Libraries

The File Libraries settings tab can be used to define Nuix profiles and custom files. The files in the File Library can be applied to Jobs using an Execution Profile or file parameters.

To add a new File Library, use the Add + File Library button and provide the following information:

  • Name: A user-defined name to assign to the library.

  • Description: A user-defined description (optional).

6.9.1. Files

The Files within a File Library can be used to define Nuix Profiles and custom files.

To add a new File, select a File Library and click the Add + File button inside the File Library Pane and provide the following information:

  • File Name: A user-defined file selected to upload.

  • Notes: A user-defined description for the file (optional).

By default, the maximum file size is 10 MB. For more information on configuring this limit, see the Service settings in the installation guide.

A File can be one of the following types:

  • Configuration Profile

  • Processing Profile

  • Production Profile

  • OCR Profile

  • Metadata Profile

  • Imaging Profile

  • Custom File

Custom Files are user-defined files that are not Nuix profiles, for example: .txt, .tsv.
File Actions

To perform an action on a File, open the File Library Panel by clicking on the corresponding File Library row, then open the File View by clicking on the corresponding File row, then click on the dropdown button at the right of the File name.

The following actions can be performed on files:

  • Update: Allows a user to update the file.

When updating a file, the type of the file cannot be changed. This also means that once a custom file is defined, its extension cannot be changed; only the contents of the file can be updated.
  • Delete: Deletes the file.

If a file is being used by an Execution Profile, the user will be unable to delete the file until it has been removed from the Execution Profile.
  • Download: Download a copy of the file.

6.10. Execution Profiles

The Execution Profiles settings tab can be used to define the Engine system settings, such as memory and credentials, as well as additional parameters to apply to running Jobs.

A Job requires an Execution Profile to run.

To add a new Execution Profile, use the Add + Execution Profile button and provide the following information:

  • Name: A user-defined name to assign to the profile.

  • Description: A user-defined description (optional).

  • Username: The user account to run the Engine under (optional). If a Username is not provided, the Engine will run under the same account as the Engine Server service.

  • Password: The password for the Username above.

The Username and Password feature is only available on Microsoft Windows platforms. The Username can be provided in the format domain\username or username@domain.
When specifying a Username in the Execution Profile, the user in question must have the Log on as a service right on each server on which the Execution Profile is used. By default, administrative accounts DO NOT have this right. This can be configured either using a Group Policy (see https://docs.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/log-on-as-a-service), or manually in the Services management console by configuring a service to run under the specified account.
  • Command-Line Parameters: The command-line parameters to apply to the Engine (optional). These command-line parameters function like the parameters which can be provided in a batch file when running Nuix Workstation. For example, these can be used to predefine the memory available to the Engine and the Workers and to specify the Workers log folder.

Additionally, the following Automate-specific command-line parameters exist:

  • -Dautomate.allowAnyJava=true: Disables the requirement to use AdoptOpenJDK with Nuix Engine 9.0 and greater.

  • -Dautomate.discover.log=C:\Temp\discover.log: Output the full GraphQL log used in the communication with Nuix Discover to the specified log file.

    • Log Folder: The folder to which the Engine logs are redirected.

To redirect all logs related to the execution of a Job, provide the log location in the Log Folder and also as a command-line parameter, for example, -Dnuix.logdir=C:\Temp\logs.
  • Nuix Engine Installation Folder: The folder where a different version of the Nuix Engine is deployed (optional).

The Nuix Engine Installation Folder option can be used to run Jobs in Nuix cases which were created with a different version of the Nuix Engine or Nuix Workstation, without needing to migrate the cases to the latest version of Nuix.
The fields Command-Line Parameters, Log Folder and Nuix Engine Installation Folder support evaluating parameters when a job is submitted. When a job is submitted with an Execution Profile that uses parameters in any of these fields, the parameters must be defined either in the workflow or in the Execution Profile's Workflow Parameters. If neither the workflow nor the Execution Profile contains the required parameter, the user will be unable to submit a job with the Execution Profile.
  • Java Installation Folder: The folder where a different version of Java is deployed (optional).

Engine version 8.x and lower is only supported with Java version 8. Engine version 9.0 and higher is only supported with Java version 11. When specifying a Nuix Engine in the Execution Profile, also specify the location of a Java installation that is compatible with the version of the Nuix Engine in question.
  • Workflow Parameters: Additional parameters and values to add to those already defined in the Workflow (optional). For more information, see Workflow Parameters.

The Workflow Parameters option can be used, for example, to define the location where a script should write an output file. This location might change depending on the environment in which the Job is run and can be captured using an Execution Profile Workflow Parameter.
  • Notification Rules: The list of rules that apply to the Execution Profile (optional).

  • Progress Settings: Enable the automatic update of operation weights when a Job finishes successfully. The operation weights are calculated based on the time each operation took to complete compared to the total Job execution time. When updating the operation weights, the values obtained from the last execution are weighed against the setting defined in this section.

A last successful execution weight of 0% will keep the operation weights unchanged and a setting of 100% will update the operation weights entirely from the last execution.
The operation weights are only updated if the workflow has not changed between the time the job was submitted and the last successful execution. Updates are only performed for jobs that do not insert or append operations dynamically.
  • Timeout Settings: The minimum progress that each operation and the Job must make, respectively, in the allotted time, otherwise the Job is aborted, or the current operation is skipped if it was configured as skippable.

  • Nuix Profiles: List of Nuix Profiles selected from File Libraries to add to the Nuix case.

The selected Nuix Profiles are stored in the case under their profile type. For example, a metadata profile would be found in the case folder under the path \Stores\User Data\Metadata Profiles\.
  • Additional Files: Additional list of parameters mapped to File Library Files that will be added to the Nuix case when running a job.

All parameters defined in Additional Files are given the suffix _file. The files created from these parameters can be found in the Nuix case under the path \Stores\Workflow\Files\.

6.11. Client Pools

The Client Pools settings tab can be used to group clients into different pools, for example, based on geographical locations or the team that is in charge of each Client.

To add a new Client Pool, use the Add + Client Pool button and provide the following information:

  • Name: A user-defined name to assign to the pool.

  • Description: A user-defined description (optional).

  • Clients: The list of clients that are part of the pool. A Client can belong to one, multiple or no Client Pools.

Client Pools can define Workflow parameters that are added to those already defined in a Workflow. For more information, see Workflow Parameters.

6.12. Third-Party Services

The Third-Party Services settings tab is used to define and authenticate the third-party services that can be used in Jobs.

The services must be authenticated by a user before they can be used. Users can authenticate a service by using the Sign In menu option in the service pane, or with a prompt during the job submission. The service will run under the user who authenticated the service.

Services can be authenticated on a Service scope or on a User scope. The Service scope authenticates the service for all users while the User scope authenticates the service only for the current user.

For services using the User scope, the user credential of the user who submitted the Job will be used for the third-party service. For Schedules, the user credential of the user who last modified the Schedule will be used.

6.12.1. Nuix Discover Services

The Nuix Discover Service is used to configure and authenticate to Nuix Discover.

To add a new Nuix Discover Service, use the Add + Nuix Discover Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • Scope: The authentication scope:

    • Service: Login as a service and make the connection available to all users with permissions to the service.

    • User: Login as a user. Each user will need to log in.

  • Method: The authentication method:

    • API Key: The API key of the username to connect with. This key can be obtained from the Nuix Discover User Administration page → Users → username → API Access.

  • Hostname: The hostname of the service, for example ringtail.us.nuix.com

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the server certificate that should be trusted even if the certificate is self-signed (optional).
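A Whitelisted Certificate Fingerprint is the SHA-256 digest of the server certificate's DER encoding, conventionally written as colon-separated hexadecimal pairs. A minimal sketch of that formatting (the helper name is illustrative, and the placeholder bytes stand in for real DER data):

```python
import hashlib

def sha256_fingerprint(der_bytes: bytes) -> str:
    """Return a certificate fingerprint as colon-separated uppercase hex."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# In practice der_bytes would be the server certificate's DER encoding,
# obtained for example via openssl or the ssl module; placeholder shown here.
print(sha256_fingerprint(b"placeholder"))
```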

6.12.2. Nuix Enrich Services

The Nuix Enrich Service is used to configure and authenticate to Nuix Enrich.

To add a new Nuix Enrich Service, use the Add + Nuix Enrich Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • Scope: The authentication scope:

    • Service: Login as a service and make the connection available to all users with permissions to the service.

    • User: Login as a user. Each user will need to log in.

  • Method: The authentication method:

    • API Key: The API key of the username to connect with.

  • URL: The URL of the service, for example https://enrich.us.nuix.com

  • Feeds API URL: Optionally, the URL of the feeds API service, for example https://enrich.us.nuix.com:14410

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the server certificate that should be trusted even if the certificate is self-signed (optional).

6.12.3. Nuix Investigate Services

The Nuix Investigate Service is used to configure and authenticate to Nuix Investigate.

To add a new Nuix Investigate Service, use the Add + Nuix Investigate Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • Scope: The authentication scope:

    • Service: Login as a service and make the connection available to all users with permissions to the service.

    • User: Login as a user. Each user will need to log in.

  • Method: The authentication method:

    • OIDC (Client Credentials): Use the OIDC Client Credentials authentication flow.

    • Service: The OIDC service to use for authentication.

  • URL: The URL of the service, for example https://investigate.us.nuix.com

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the server certificate that should be trusted even if the certificate is self-signed (optional).

6.12.4. Microsoft Purview Service

The Microsoft Purview Service is used to configure and authenticate to Microsoft Purview.

The Microsoft Purview Service requires users with the eDiscovery Manager role.

To add a new Microsoft Purview Service, use the Add + Microsoft Purview Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • Scope: The authentication scope:

    • Service: Login as a service and make the connection available to all users with permissions to the service.

    • User: Login as a user. Each user will need to log in.

  • Method: The authentication method:

    • OIDC (Authorization Code): A popup window will redirect users to the login page for authentication.

  • Service: The Microsoft authentication service to use for authentication (see Configuring Microsoft as an Authentication Service).

  • Use Purview Proxy Download: Select whether to enable the use of the MicrosoftPurviewEDiscovery app for performing downloads from Purview (see https://learn.microsoft.com/en-us/purview/ediscovery-premium-get-started#step-4-verify-that-required-ediscovery-apps-are-enabled).

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the server certificate that should be trusted even if the certificate is self-signed (optional).

6.12.5. Google Vault Service

The Google Vault Service is used to configure and authenticate to Google Vault.

To add a new Google Vault Service, use the Add + Google Vault Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • Scope: The authentication scope:

    • Service: Login as a service and make the connection available to all users with permissions to the service.

    • User: Login as a user. Each user will need to log in.

  • Method: The authentication method:

    • OIDC (Authorization Code): A popup window will redirect users to the login page for authentication.

  • Service: The Google authentication service to use for authentication (see Configuring Google Workspace as an Authentication Service).

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the server certificate that should be trusted even if the certificate is self-signed (optional).

6.12.6. Veritone Service

The Veritone Service is used to configure and authenticate to an on-premise Veritone environment.

To add a new Veritone Service, use the Add + Veritone Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • Scope: The authentication scope:

    • Service: Login as a service and make the connection available to all users with permissions to the service.

    • User: Login as a user. Each user will need to log in.

  • Method: The authentication method:

    • API Key: Use an API key to authenticate to the service.

  • URL: The URL of the service, for example http://10.15.10.15:7000

  • Parallel Job Submissions: The maximum number of Veritone jobs that an Automate Job will have active at a time.

  • Output Writer Engine ID: The ID of the output writer engine in Veritone.

  • Translation Engines: The list of translation engines in Veritone.

  • Transcription Engines: The list of transcription engines in Veritone.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the server certificate that should be trusted even if the certificate is self-signed (optional).

The Parallel Job Submissions limit applies on a per-Job basis.

6.12.7. Relativity Service

This product module may only be used by parties with valid licenses for Relativity or Relativity One, products of Relativity ODA LLC. Relativity ODA LLC does not test, evaluate, endorse or certify this product.

The Relativity Service is used to configure and authenticate to Relativity for operations and to populate Relativity parameters during Job submission.

To add a new Relativity Service, use the Add + Relativity Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • Scope: The authentication scope:

    • Service: Login as a service and make the connection available to all users with permissions to the service.

    • User: Login as a user. Each user will need to log in.

  • Method: The authentication method:

    • OIDC (Authorization Code): A popup window will redirect users to the Relativity login page for authentication. This mechanism requires the configuration of a Relativity OIDC Authentication Service (see Configuring Relativity as an Authentication Service).

    • Username/Password: Users will be prompted to supply their Relativity credentials directly in the Automate interface. This mechanism requires that Basic Authentication is enabled on the Relativity front-end servers.

  • Service: The OIDC service used for authentication.

  • Host Name: The Relativity host name, for example relativity.example.com.

  • Service Endpoint: The Relativity Service Endpoint, for example /relativitywebapi.

  • Endpoint Type: The Relativity Endpoint Type, for example HTTPS.

  • REST Version: The version of the REST services to use when querying Relativity objects, such as workspace and folders. For Relativity One, use REST (v1 Latest).

The REST (Server 2021) version requires the Relativity Server Patch (Q3 2021) or later.
  • Import Threads: The number of parallel threads to use for Relativity uploads, such as Legal Export, Relativity Loadfile Upload, Relativity Images Overlay, Relativity Metadata Overlay, Relativity CSV Overlay;

The Import threads value is independent of the number of Nuix workers. When using more than 1 import thread, the loadfile or the overlay file will be split and data will be uploaded to Relativity in parallel. Because multiple threads load the data in parallel, this method will impact the order in which documents appear in Relativity when no sort order is specified.
  • Import Thread Timeout: The number of seconds to allow a Relativity upload thread to be idle. If no progress is reported for longer than the allowed timeout, the import thread will be aborted.

  • Import Thread Retries: The number of times to retry running an import thread, in situations where import encountered a fatal error or timed out.

  • Metadata Threads: The number of parallel threads to use for Relativity metadata operations, such as Create Relativity Folders.

  • Patch Invalid Entries: If selected, this option will automatically patch entries that fail uploading due to the following issues:

    • Field value too long - the uploaded field value is trimmed to the maximum allowed length in Relativity.

    • Field value invalid, for example due to incorrectly formatted date - the field value is removed from the item uploaded to Relativity.

    • Missing native or text file - the native or text component is removed from the item uploaded to Relativity.

  • Custom Client Version: When unchecked, Automate will use the Relativity client version which is the closest match to the Relativity server version. When checked, Automate will use the specified Relativity client version, if available.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the server certificate that should be trusted even if the certificate is self-signed (optional).

The Nuix ECC Service is used to configure and authenticate to Nuix ECC for use in operations.

To add a new Nuix ECC Service, use the Add + Nuix ECC Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • Scope: The authentication scope:

    • Service: Login as a service and make the connection available to all users with permissions to the service.

    • User: Login as a user. Each user will need to log in.

  • Method: The authentication method:

    • Username/Password: Users will be prompted to supply their Nuix ECC credentials directly in the Automate interface. This mechanism requires the High-level API to be enabled on the Nuix ECC instance.

  • Hostname: The URL to access the Nuix ECC API, for example https://localhost:8091.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the server certificate that should be trusted even if the certificate is self-signed (optional).

6.12.8. Derby Control Service

The Derby Control Service is used to configure and authenticate to Derby Control for shared case access.

To add a new Derby Control Service, use the Add + Derby Control Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • Scope: The authentication scope:

    • Service: Login as a service and make the connection available to all users with permissions to the service.

    • User: Login as a user. Each user will need to log in.

  • Method: The authentication method:

    • OIDC (Client Credentials): Use the OIDC Client Credentials authentication flow.

    • Service: The OIDC service to use for authentication.

  • URL: The URL of the service, for example https://neo.us.nuix.com/DERBY-CONTROL

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the server certificate that should be trusted even if the certificate is self-signed (optional).

6.12.9. Graph Service

The Graph Service is used to configure and authenticate to Graph.

To add a new Graph Service, use the Add + Graph Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • Scope: The authentication scope:

    • Service: Login as a service and make the connection available to all users with permissions to the service.

    • User: Login as a user. Each user will need to log in.

  • Method: The authentication method:

    • Username/Password: Use a username and password to authenticate.

    • None: Do not authenticate to the Graph service.

  • URL: The URL of the service, for example bolt://localhost:7687

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the server certificate that should be trusted even if the certificate is self-signed (optional).

6.12.10. Gen AI Service

The Gen AI Service is used to configure and authenticate to a Gen AI service, for example, Open AI Chat GPT, Azure OpenAI, AWS Bedrock Anthropic, local Ollama or another local Open AI compatible service.

To add a new Gen AI Service, use the Add + Gen AI Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • Scope: The authentication scope:

    • Service: Login as a service and make the connection available to all users with permissions to the service.

    • User: Login as a user. Each user will need to log in.

  • Method: The authentication method:

    • API Key: Use an API key to authenticate to the service.

    • None: Do not authenticate to the service.

  • Protocol: The protocol used to communicate with the service. Most services support the Open AI protocol.

  • URL: The URL of the service, for example https://api.example.com/v1

  • Model: The name of the model, if applicable, for example gpt-10

  • Enable System Role: Select this option if the model supports the system role. If this option is selected and the model does not support the system role, an error message will be displayed when testing the service.

  • Multi-Threading: The number of requests to make in parallel to the service. Local services typically support only 1 thread at a time, while cloud services can support more threads, depending on the subscription level.

6.12.11. Configuring an Open AI ChatGPT Gen AI Service

To use an Open AI ChatGPT service, configure Automate with the following information:

  • Authentication Method: API Key

  • Protocol: Open AI

  • URL: https://api.openai.com/v1

  • Model: The Chat GPT model name, for example gpt-4o

  • Check the option Enable System Role

  • Multi-Threading: 16
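These settings correspond to the standard Open AI chat completions request format. The sketch below only assembles a request body to show where the Model name and system role fit; nothing is sent, and the function name is illustrative:

```python
import json

def build_chat_request(model: str, system_prompt: str, user_prompt: str) -> str:
    """Build an Open AI-style chat completions request body.

    With Enable System Role checked, instructions are sent as a "system"
    message; otherwise they would have to be folded into the user message.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
    return json.dumps(body)

request = build_chat_request(
    "gpt-4o", "You are a document reviewer.", "Summarize this file."
)
```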

6.12.12. Configuring an AWS Bedrock Gen AI Service

To use an AWS Bedrock service, take the following steps:

  • Connect to the AWS Management Console

  • Create a new user that will be used to access the Bedrock service:

    • In the AWS Management Console, select the IAM service.

    • Create a new user.

    • Attach the AmazonBedrockFullAccess policy to the user.

The AmazonBedrockFullAccess policy provides full access to the Bedrock service, including the ability to create and delete models. If this is not desired, assign the user a more restrictive set of permissions that includes the read and InvokeModel actions.

    • Create an access key for the user and take note of the Access Key ID and Secret Access Key.
  • Enable models:

    • Select the region in which you want to operate, for example, us-east-1

    • Select the Model access tab

    • Request access to the Anthropic models you would like to use

  • Configure the Gen AI service with the following information:

    • Authentication Method: Username/Password

    • Protocol: Bedrock Anthropic

    • URL: The Bedrock API URL, for example https://bedrock.us-east-1.amazonaws.com

    • Model: The Anthropic model ID, for example anthropic.claude-3-5-sonnet-20241022-v2:0

      • Uncheck the option Enable System Role

    • Multi-Threading: 16

Certain models can only be used with an inference profile. By default, Bedrock creates the inference profiles with the prefix of the region in which the model is deployed, for example: us.anthropic.claude-3-5-sonnet-20241022-v2:0
  • When signing in, use the Access Key ID as the username and the Secret Access Key as the password.
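The region-prefix convention for inference profile IDs noted above can be expressed as a one-line transformation (the helper name is illustrative):

```python
def inference_profile_id(region: str, model_id: str) -> str:
    """Prefix a Bedrock model ID with its region group, e.g. "us-east-1" -> "us"."""
    return f"{region.split('-')[0]}.{model_id}"

print(inference_profile_id("us-east-1", "anthropic.claude-3-5-sonnet-20241022-v2:0"))
# us.anthropic.claude-3-5-sonnet-20241022-v2:0
```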

6.12.13. Configuring an Ollama Gen AI Service

To use an Ollama service, configure Automate with the following information:

  • Authentication Method: None

  • Protocol: Ollama

  • URL: http://ollama.host.internal/api/chat

  • Model: The model name, for example llama3.2-vision:11b

  • Check the option Enable System Role

  • Multi-Threading: 1

6.12.14. Semantic Service

The Semantic Service is used to configure and authenticate to a DJL Serving service.

To add a new Semantic Service, use the Add + Semantic Service button and provide the following information:

  • Name: A user-defined name to assign the service.

  • Active: The state of the service. If the service is inactive, it cannot be used.

  • Description: A user-defined description (optional).

  • Available by Default to All Jobs: Select whether the service should be set by default as a parameter to all Jobs.

  • URL: The URL of the service, for example http://djl-serving.example.com:8080

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the server certificate that should be trusted even if the certificate is self-signed (optional).

6.13. SMTP Service

The SMTP Service is used to configure SMTP servers that can be used to send emails.

To add a new SMTP Server, use the Add + SMTP Service button and provide the following information:

  • Name: A user-defined name to assign the server;

  • Description: A user-defined description (optional);

  • Method: The authentication method:

    • OIDC (Authorization Code): A popup window will redirect users to the login page for authentication.

    • Username/Password: Use a username and password to authenticate;

    • None: Do not authenticate to the SMTP server;

  • Service: The Microsoft authentication service to use for authentication (see Configuring Microsoft as an Authentication Service);

  • Host: The host name or IP address of the SMTP server;

  • Port: The port of the SMTP server. Typically, 25 for unauthenticated access and 465 or 587 for authenticated access;

  • From: The email address to send the email with;

  • TLS: If checked, access to the SMTP Server will be TLS-encrypted, otherwise, it will be in clear-text;

  • Email retry interval: How often to retry sending failed emails;

  • Max email reattempts: The number of times to retry sending a failed email before giving up on the email.
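Behind these settings is a standard SMTP submission. The sketch below uses Python's standard library to show how the Host, Port, TLS, and From settings fit together; all addresses and hostnames are placeholders, and the actual send is left commented out:

```python
import smtplib
from email.message import EmailMessage

def build_notification(from_addr: str, to_addr: str, subject: str, body: str) -> EmailMessage:
    """Assemble a plain-text notification email."""
    msg = EmailMessage()
    msg["From"] = from_addr      # corresponds to the From setting
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_notification(
    "automate@example.com", "user@example.com",
    "Job finished", "The Job completed successfully.",
)

# Port 587 with STARTTLS and username/password, mirroring the TLS option:
# with smtplib.SMTP("smtp.example.com", 587) as server:
#     server.starttls()
#     server.login("username", "password")
#     server.send_message(msg)
```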

6.14. Data Repositories

The Data Repositories settings tab can be used to define the locations for Data Sets as well as the restrictions and automatic transitions of Data Sets.

6.14.1. Managed Data Repositories

Managed Data Repositories are used to store Managed Data Sets.

To add a new Managed Data Repository, use the Add + Managed Data Repository button and provide the following information:

  • Name: A user-defined name to assign the repository.

  • Path: The location to store Managed Data Sets. This can be a local path such as C:\Data or a file share to which the Scheduler service has access.

  • Description: A user-defined description (optional).

  • Data Repository Quota: The maximum amount of space that can be used by all Data Sets in the Data Repository (optional).

  • Dataset Quota: The maximum amount of space that can be used by a single Data Set in the Data Repository (optional).

  • File Size Limit: The maximum size of a file that can be uploaded to a Data Set in the Data Repository (optional).

  • Allowed File Extensions: Only allow files with these extensions to be uploaded (optional).

  • Compute file system free space: Include the amount of free space in the file system when calculating the available space for the Data Repository (optional).

  • Hide data sets On Job Queue: Automatically transition a Data Set to the Hidden state after a Job is queued using the Data Set (optional).

  • Archive data sets on Job Finish: Automatically transition a Data Set to the Archived state after a Job finishes using the Data Set (optional).

  • Expire archived data sets: Automatically expire an archived Data Set and delete all of its files after the specified time period (optional).

The available space for a Dataset will be the lowest of the remaining space of the Data Repository Quota, the Dataset Quota and the Computed file system free space.
Either the Data Repository Quota must be provided or the Compute file system free space must be selected.
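The available-space rule above amounts to taking the minimum of whichever limits are configured; a sketch with illustrative names and values:

```python
def available_dataset_space(repo_quota_remaining, dataset_quota_remaining, fs_free_space):
    """Available space is the lowest of the limits that are configured.

    Pass None for limits that are not set; at least one must be provided,
    mirroring the requirement that either the Data Repository Quota is set
    or Compute file system free space is selected.
    """
    limits = [
        v for v in (repo_quota_remaining, dataset_quota_remaining, fs_free_space)
        if v is not None
    ]
    if not limits:
        raise ValueError("a Data Repository Quota or file system free space is required")
    return min(limits)

# Repository quota has 500 GB left, Dataset quota 200 GB, file system 350 GB free.
print(available_dataset_space(500, 200, 350))  # 200
```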

6.14.2. In-Place Data Repositories

In-Place Data Repositories are used to provide a location from where an In-Place Data Set can select existing data.

To add a new In-Place Data Repository, use the Add + In-Place Data Repository button and provide the following information:

  • Name: A user-defined name to assign the repository.

  • Path: The location from where In-Place Data Sets can select existing data. This can be a local path such as C:\Data or a file share to which the Scheduler service has access.

  • Description: A user-defined description (optional).

  • Hide data sets On Job Queue: Automatically transition a Data Set to the Hidden state after a Job is queued using the Data Set (optional).

  • Archive data sets on Job Finish: Automatically transition a Data Set to the Archived state after a Job finishes using the Data Set (optional).

  • Expire archived data sets: Automatically expire an archived Data Set and delete all of its files after the specified time period (optional).

6.14.3. Azure Storage Account

Azure Storage Accounts are used to manage the locations where exports for Microsoft Purview are performed.

To add a new Azure Storage Account, use the Add + Azure Storage Account button and provide the following information:

  • Name: A user-defined name to assign the account.

  • Description: A user-defined description (optional).

  • Storage Account URL: The URL location of the storage account.

  • Account Name: The name of the storage account in Azure.

  • Account Access Key: The secret access key for the account.

To get the Storage Account URL, open the storage account in the Azure portal and navigate to the Endpoints section. Then copy the URL from the Blob service section.
To get the Account Access Key, open the storage account in the Azure portal and navigate to the Access keys section. Then copy one of the available keys.

6.15. Notice Templates

The Notice Templates settings tab is used to define notice templates that can be used for legal holds to create notices.

To add a new Notice Template, select the Type using the tabs at the top of the page and then click the Add + Notice Template button. Creating a Notice Template has 4 steps:

  1. Fill in the Notice Template Settings.

    • Name: A user-defined name to assign the template;

    • Active: The state of the Notice Template. If it is inactive, it cannot be used.

    • Description: A user-defined description (optional);

    • Parameters: A list of parameters for values provided when creating legal holds;

Note: Built-In Parameters are at the top and cannot be changed.

  2. Fill in the Subject and Message.

  3. Optionally, build a Survey Form for a notice response.

  4. Review and confirm the details.

Note: Reminder and Escalation Notice Templates do not have a Survey Form option.

6.16. Security Policies

The Security Policies settings tab can be used to manage the access that users have in the Automate application.

Security Policies are positive and additive, meaning that a user will be allowed to perform a certain action if at least one Security Policy allows the user to perform that action. To prevent a user from performing a certain action, ensure that there are no policies that grant the user that action.
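The positive, additive evaluation described above can be sketched as follows; the policy structure and field names are illustrative, not Automate's internal representation:

```python
def is_allowed(user, action, scope, policies):
    """Positive, additive evaluation: one matching active policy is enough.

    There are no deny rules; to prevent an action, no policy may grant it.
    """
    return any(
        p["active"]
        and user in p["principals"]
        and action in p["permissions"]
        and scope in p["scopes"]
        for p in policies
    )

policies = [
    {"active": True, "principals": {"alice"},
     "permissions": {"View"}, "scopes": {"All Clients"}},
]

print(is_allowed("alice", "View", "All Clients", policies))    # True
print(is_allowed("alice", "Modify", "All Clients", policies))  # False
```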

To add a new Security Policy, use the Add + Security Policy button and provide the following information:

  • Name: A user-defined name to assign to the policy.

  • Description: A user-defined description (optional).

  • Active: The state of the policy. An inactive Security Policy will not be evaluated.

  • Principals: The identities to which the policy applies. Principals can be of the following types:

    • Built-In: Authenticated User corresponds to any user account that can log in with the allowed authentication schemes (i.e. Nuix UMS or Microsoft Azure);

    • Azure Username: Explicit Azure user accounts, in the form of an email address;

    • Azure Group ID: Azure user accounts belonging to the Azure Group with the specified ID;

    • UMS Username: Explicit Nuix UMS user accounts, in the form of a username;

    • UMS Group: UMS users belonging to the specified UMS group with the specified name;

    • UMS Privilege: UMS users belonging to a group that has the specified privilege;

    • UMS Role: UMS users which are assigned the specified application role.

  • Permission: The permission granted to the Principals:

    • View: View the details of the objects in scope (and their children);

    • View Limited: View a limited number of details of the objects in scope (and their children). For most objects this is the object ID, name, and description. Error and warning messages are masked;

    • View Non-Recursive: Only applies to Client Pools. View the list of Client Pools along with the IDs of the Clients assigned to the pool, but not the details of the Clients.

    • Modify: Modify the objects in scope (and their children);

    • Modify Children: Modify the children of the object in scope (but not the object itself);

    • Create: Create an object of that type. Applies to Client Pools, Legal Holds, Collections, and Collection Templates.

    • Add Job: Submit a job on the objects in scope (and their children);

    • Stage Job: Stage a job on the objects in scope (and their children);

    • View Sensitive: View the details of the objects in scope (and their children) even if marked as confidential;

    • Download Logs: Download the logs of jobs or system resources;

    • Exclude Metrics: Mark the Job utilization metrics for exclusion;

  • Scope: The scope on which the Permission is granted. Scopes can be of the following types:

    • Built-In: Used to assign permissions on all objects of a certain type (for example All Clients) or to Jobs which do not have a Library or a Client assigned;

    • Client/Matter: A specific or all Matters from a specific Client;

    • Library/Workflow: A specific or all Workflows from a specific Library;

    • Nuix License Source: A specific Nuix License Source;

    • Execution Profile: A specific Execution Profile;

    • Resource Pool: A specific Resource Pool;

    • Client Pool: A specific Client Pool;

    • Notification Rule: A specific Notification Rule.

When a Security Policy allows the modification of Execution Profiles, Workflows, Scripts, or External Applications, it implicitly allows the users in scope to execute privileged code on the platform using one of these mechanisms.
The Built-In All System Resources scope includes all system resources, such as Collection Templates, Data Repositories, Engine Servers, Engines, Execution Profiles, File Libraries, Logs, Notice Templates, Notification Rules, Nuix License Sources, Automate License, Resource Pools, Servers, SMTP Servers, User Services.

6.16.1. Sample Permission Requirements

View a Job:

  • View permissions on: Client and Matter on which the Job was submitted; or Built-In All Clients; or Built-In All Client Pools; or Built-In Unassigned Client.


Submit a Job:

  • View and Add Job permissions on: Client and Matter; or Built-In All Clients; or Built-In All Client Pools; or Built-In Unassigned Client; and

  • View and Add Job permissions on: Library and Workflow; or Built-In All Libraries; or Built-In Unassigned Library.


Stage a Job:

  • View and Stage Job permissions on: Client and Matter; or Built-In All Clients; or Built-In All Client Pools; or Built-In Unassigned Client; and

  • View and Stage Job permissions on: Library and Workflow; or Built-In All Libraries; or Built-In Unassigned Library.


Assign a Job to a Resource Pool:

  • View and Add Job permissions on: Resource Pool; or Built-In All Resource Pools; or Built-In All System Resources.


Assign a Job to an Execution Profile:

  • View and Add Job permissions on: Execution Profile; or Built-In All Execution Profiles; or Built-In All System Resources.


Set a Job Priority to Highest:

  • Modify permissions on: Resource Pool; or Built-In All Resource Pools; or Built-In All System Resources.


View the details of a Workflow, including scripts code:

  • View Sensitive permissions on: Library and Workflow; or Built-In All Libraries; or Built-In Unassigned Library.


Modify Workflow description or general settings, except for the operations:

  • Modify permissions on: Library and Workflow; or Built-In All Libraries; or Built-In Unassigned Library.


Modify Workflow operations, except for scripted parameters, scripts, or external applications:

  • View Sensitive and Modify permissions on: Library and Workflow; or Built-In All Libraries; or Built-In Unassigned Library.


Modify Workflow operations, including scripted parameters, scripts, or external applications:

  • View Sensitive and Modify permissions on: Library and Workflow; or Built-In All Libraries; or Built-In Unassigned Library; and

  • Modify permissions on: Scripts and/or External Applications.


View Security Policies:

  • View permissions on: Built-In Security.


View Security Policy Change Logs:

  • View and View Sensitive permissions on: Built-In Security.


Manage Engine Servers and Engines:

  • View and Modify permissions on: Built-In All System Resources.


Set Default User Settings:

  • Modify permissions on: User Settings.


Add Matters to a Client, but don’t allow the user to modify the Client:

  • View and Modify Children permissions on: Client.


Submit a Legal Hold:

  • View and Modify permissions on: Client; or Matter; or Built-In All Clients; or Built-In All Client Pools; and

  • View and Create permissions on: Legal Hold; and

  • View permission on: All Notice Templates; or All System Resources; and

  • If setting an SMTP Server, View permission on: SMTP Server; or All System Resources; and

  • If setting a Data Repository, View permission on: Data Repository; or All System Resources; and

  • If setting trigger configurations, and the user wants to queue the job:

    • View and Add Job permissions on: Client and Matter; or Built-In All Clients; or Built-In All Client Pools; or Built-In Unassigned Client; and

    • View and Add Job permissions on: Library and Workflow; or Built-In All Libraries; or Built-In Unassigned Library.

    • View and Add Job permissions on: Resource Pool; or Built-In All Resource Pools; or Built-In All System Resources.

    • View and Add Job permissions on: Execution Profile; or Built-In All Execution Profiles; or Built-In All System Resources.

  • If setting trigger configurations, and the user wants to stage the job:

    • View and Stage Job permissions on: Client and Matter; or Built-In All Clients; or Built-In All Client Pools; or Built-In Unassigned Client; and

    • View and Stage Job permissions on: Library and Workflow; or Built-In All Libraries; or Built-In Unassigned Library.


View a Legal Hold:

  • View permissions on: Client; or Matter; or Built-In All Clients; or Built-In All Client Pools; and

  • View permission on: Legal Hold.


Create a File Library:

  • View and Modify permissions on: File Library; or Built-In All File Libraries; or Built-In All System Resources.


View a File Library:

  • View permissions on: File Library; or Built-In All File Libraries; or Built-In All System Resources.


Download Logs of a Job:

  • View and Download Logs permissions on: Client and Matter on which the Job was submitted; or Built-In All Clients; or Built-In All Client Pools; or Built-In Unassigned Client.


Download System Logs:

  • Download Logs permissions on: Built-In All System Resources.


Download Anonymized Utilization Data:

  • View permissions on: Utilization; or Built-In All System Resources; and

  • View permissions on: Built-In All Clients; or Built-In All Client Pools; or Built-In All System Resources.


Download Full Utilization Data:

  • View permissions on: Utilization; or Built-In System Resources; and

  • View permissions on: Built-In All Clients; or Built-In All Client Pools.


Manage own API keys:

  • View and Modify permissions on: Built-In API Keys.


Manage the API keys of all users:

  • View and Modify permissions on: Built-In All API Keys.


6.17. API Keys

The API Keys settings tab can be used to facilitate authentication to Automate when integrating with other platforms or when making API calls using scripting languages.

When making API requests to Automate using an API key, the request will have the same permissions as the user who created the API key.

To add a new API key to the Automate configuration, use the Add + API Key button and provide the following information:

  • Name: A user-defined name to assign to the key.

  • Validity: The number of days that the key is valid for.

The key secret is only available in the window shown immediately after the key is created. If the secret is not recorded at this time or is lost, the key should be deleted and a new replacement key should be created.

To make an API call with an API key, set the Authorization HTTP header to Bearer id:secret, where id is the key ID and secret is the key secret, for example:

Authorization: Bearer 78882eb7-8fc1-454d-a82c-a268c204fbba:788LvzrPksUKXKTrCyzKtvIMamTjlbsa
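The header construction above can be sketched in a scripting language. This is a minimal illustration only: the Authorization format is as documented, but the /api/v2/jobs endpoint path shown in the helper is an assumption, not taken from this guide.

```python
# Sketch: authenticating an Automate API call with an API key.
# The 'Bearer id:secret' header format is as documented; the endpoint
# path '/api/v2/jobs' below is an illustrative assumption.
import json
import urllib.request


def auth_header(key_id: str, secret: str) -> dict:
    """Build the Authorization header in the documented 'Bearer id:secret' form."""
    return {"Authorization": f"Bearer {key_id}:{secret}"}


def get_json(base_url: str, path: str, key_id: str, secret: str):
    """Perform an authenticated GET request and decode the JSON response."""
    req = urllib.request.Request(f"{base_url}{path}",
                                 headers=auth_header(key_id, secret))
    with urllib.request.urlopen(req) as resp:  # network call
        return json.loads(resp.read())


# Example (hypothetical endpoint):
# jobs = get_json("https://automate.example.com", "/api/v2/jobs",
#                 "78882eb7-8fc1-454d-a82c-a268c204fbba",
#                 "788LvzrPksUKXKTrCyzKtvIMamTjlbsa")
```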

6.18. Webhooks

The Webhooks settings tab can be used to integrate Automate with third-party applications, such that when an event occurs in Automate, a webhook call is made to the third-party application.

Webhooks can be created manually, or registered using the API.

To add a new Webhook registration, use the Add + Webhook button and provide the following information:

  • Name: A user-defined name to assign to the Webhook.

  • Active: The state of the Webhook. An inactive Webhook will not get triggered.

  • History Enabled: If the Webhook history is not enabled, the list of Webhook calls will not be displayed in the Webhook panel and if the Scheduler service is restarted with pending Webhook calls, these calls are lost.

  • Triggers: The types of events that will trigger Webhook calls.

  • Whitelisted Certificate Fingerprints: The SHA-256 fingerprint of the third-party application receiving the Webhook calls that should be trusted even if the certificate is self-signed (optional).

The Webhook signature key is only available in the window shown immediately after the Webhook is created. If the signature key is not recorded at this time or is lost, the Webhook registration should be deleted and a new replacement Webhook registration should be created.

When a Webhook event is triggered, an API call will be attempted including details about the username and the action that triggered the event. If the third-party application receiving the Webhook call is not accessible or does not acknowledge the Webhook call, the call will be retried with an exponential backoff delay, up to a maximum of 18 hours.

The details of the past 20 Webhook events and call statuses can be seen in the Webhook panel.
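Because unacknowledged calls are retried with exponential backoff, the receiving application should acknowledge each call promptly. The following is a minimal receiver sketch, assuming a JSON payload; the payload shape and port are illustrative assumptions, not taken from this guide.

```python
# Minimal Webhook receiver sketch: acknowledge each call with HTTP 200
# so Automate does not retry it. The JSON payload shape is an assumption.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # ... process the event (e.g. username and triggering action) ...
        self.send_response(200)  # acknowledge so the call is not retried
        self.end_headers()


# To run (port is illustrative):
# HTTPServer(("", 8080), WebhookHandler).serve_forever()
```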

6.19. User Settings

The User Settings tab can be used to customize the behavior of the user interface for the current user and to set the default settings for all users.

For each User Settings category, the Reset to Default button resets the customizations performed by the user to the default values and the Set as Default button sets the current values as the default values for all users.

The Set as Default button only applies to the default values and does not overwrite any existing user customizations.

6.19.1. Language

Change the language of the user interface to one of:

  • Browser Default - the language detected from the browser;

  • Arabic - (United Arab Emirates)

  • Danish - Denmark

  • German - Germany

  • English - United States

  • Spanish - Latin America

  • French - Canada

  • Hebrew - Israel

  • Japanese - Japan

  • Korean - South Korea

  • Dutch - Netherlands

  • Portuguese - Brazil

  • Simplified Chinese - China

6.19.2. Accessibility

Show focus outline for input and text area elements.

6.19.3. Show Disabled Items

Clients - Display inactive Clients.

Matters - Display inactive Matters.

Libraries - Display inactive Libraries.

Workflows - Display inactive Workflows.

6.19.4. Job Card

Modify the elements displayed in each Job Card, the location of these elements, and the size and format of the text.

6.19.5. Job Panel

Show operation processing speed - The processing speed is computed as the sum of the audited size of the items handled by the operation per hour.

Show execution parameters.

6.19.6. Add Job

Notes at the bottom - Move the notes section to the bottom of the job submission pane.

Auto-archive on resubmit - When resubmitting a Job, automatically archive the original Job.

6.19.7. Default Job Settings

Execution Profile - The default Execution Profile to select when submitting a Job.

Resource Pool - The default Resource Pool to select when submitting a Job.

The user can change the Execution Profile and Resource Pool to which a Job is assigned during the submission process even if default values are configured in this section.

6.19.8. Job Sort Order

The order in which to display jobs in the Backlog, Running and Finished Job Lanes.

Jobs can be sorted by:

  • Submission Date: Jobs are sorted by the date and time when the Job was submitted, or for Paused Jobs, by the date and time when the Job was Paused and moved back to the Staging lane;

  • Priority and Submission Date: Jobs are first sorted by their Priority and then by the Submission Date;

  • Last Changed Date: Jobs are sorted by the date and time when the Job state last changed. A state change occurs when the job is queued, finishes running, is cancelled, or starts running in a Cloud Resource Pool;

  • Priority and Last Changed Date: Jobs are sorted first by Priority and then by the Last Changed Date.

6.19.9. Job Lanes

Hide Staging lane when empty - Only shows the Staging lane if at least one Job is in staging.

Show Job list count - Display a count of the number of Jobs in each lane in the lane header.

6.19.10. Workflow

Show workflow options - Display the detailed view of the operations in the workflow with all of their options.

Show operations progress weight - Display the operation progress weight setting in the Workflow Builder.

6.19.11. Dataset

Upload Behavior: Prompt user to remove invalid files when starting upload

Built-In Headers - The default built-in headers to display in the Dataset.

Show a warning when adding a notice comment as an Administrator

6.19.13. Text Highlights

Highlight and style text in the Job Panel and Job Card that match user-defined regexes.

When using a regex that triggers a catastrophic backtracking, it can be impossible to open the Settings page to correct the regex. In this situation, open the Automate webpage by adding ?disableHighlightText at the end of the URL, for example https://automate.example.com/?disableHighlightText. This will have the effect of temporarily disabling highlights for that session.

6.19.14. Troubleshoot

Select troubleshooting options:

  • Show Object IDs: Show the identifiers of objects in each panel.

  • Enable Inferred Utilization Edition: Show the Download and Upload Inferred buttons in the System Resources tab.

6.20. User Resources

The User Resources tab provides links to additional resources:

  • User Guide: This document.

  • Installation Guide: The Automate installation guide.

  • Third-Party Licenses: The list of third-party licenses used by Automate.

  • API Documentation: A live documentation of the Automate API in the OpenAPI 3.0 format, which can be used to integrate Automate with other applications.

  • OData Reporting: The URL for reading the Utilization and Reporting data in the OData 4.0 format.

When querying the Utilization and Reporting data, it's possible to supply a date range filter. This is done by appending the URL parameters after and before. For example, if the regular OData Reporting URL is https://automate.example.com/api/v2/reporting/odata, in order to retrieve data corresponding to the 2022 calendar year only, use the following URL: https://automate.example.com/api/v2/reporting/odata?after=2022-01-01&before=2022-12-31
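Building the filtered URL can be sketched as follows. The host name is a placeholder and the Basic authentication helper reflects the standard HTTP scheme; whether your BI platform or script uses it directly is up to your setup.

```python
# Sketch: building a date-filtered OData Reporting URL and a standard
# HTTP Basic authentication header. The host is a placeholder.
import base64


def odata_url(base: str, after: str, before: str) -> str:
    """Append the documented 'after'/'before' date-range parameters."""
    return f"{base}?after={after}&before={before}"


def basic_auth_header(username: str, password: str) -> dict:
    """Standard HTTP Basic auth header (RFC 7617 base64 of 'user:pass')."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}


url = odata_url("https://automate.example.com/api/v2/reporting/odata",
                "2022-01-01", "2022-12-31")
# With multiple auth services configured, suffix the service name,
# e.g. basic_auth_header("jsmith#Internal", "password")
```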

6.20.1. OData Authentication

To authenticate to the OData Reporting stream using a username and password, choose the Basic authentication option in the BI platform.

If more than one authentication service supporting username and password authentication is configured (such as Internal, UMS or LDAP), then add the suffix #service at the end of the username, for example jsmith#Internal or company\jsmith#AD.

To authenticate to the OData Reporting stream using a Microsoft account, choose the Organizational account authentication option in the BI platform.

6.21. System Resources

The System Resources tab can be used to manage the user data dir, logs and utilization data.

6.21.1. User Data Dir

The User Data Dir is used to provide Nuix profiles and other files to the Nuix Engines.

The configured folder must be accessible from the Scheduler, and will be synchronized with each Engine Server for use by the managed Nuix Engines.

In the Workflow Builder, operations requiring Nuix profiles will have a dropdown selection with the profiles found in the User Data Dir.

The profiles found in the User Data Dir can also be used with their respective parameters.

To configure the User Data Dir, use the Set Path button and provide the following information:

  • Path: The location of the folder with the files for use in the Nuix Engines.

6.21.2. System Logs

Download and view information for centralized logging.

The system logs tab contains information about:

  • Log Retention Period: The duration in days that logs will be retained in the database.

  • Earliest Log Available: The earliest log available in the database.

Additionally the system logs tab contains a form for downloading system logs within a given date range.

To view or download system logs, centralized logging must be enabled and the user needs the permission to download system logs (see Download System Logs).

6.21.3. Utilization

The utilization data can be downloaded either anonymized or in full using the Download Anonymized or Download Full options. The resulting data is a zip archive containing a JSON file with the utilization data.

To upload utilization data from an external system, use the Load External option and select either a utilization JSON file, or a zip archive containing a JSON utilization file.

To update Clients and Matters inferred from the Nuix Case name in utilization data corresponding to activity outside of Automate: first download the inferred data, then update the Matter ID column in the NuixCases.csv file to the desired Matter that each Nuix Case should be associated with, and finally load the updated NuixCases.csv file.
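The CSV edit described above can be scripted. In this sketch only the Matter ID column name comes from this guide; the "Case Name" column header used as the lookup key is an assumption about the file layout, and all other columns are passed through untouched.

```python
# Sketch: rewriting the 'Matter ID' column of an inferred NuixCases.csv.
# 'Matter ID' is documented; the 'Case Name' lookup column is an assumption.
import csv


def reassign_matters(src_path: str, dst_path: str, case_to_matter: dict) -> None:
    """Copy the CSV, replacing Matter ID for cases named in case_to_matter."""
    with open(src_path, newline="") as src, \
            open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            key = row.get("Case Name")  # assumed column header
            if key in case_to_matter:
                row["Matter ID"] = case_to_matter[key]
            writer.writerow(row)  # unrelated columns pass through unchanged
```

The updated file can then be loaded back via the Load External option.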