Automating Data Uploads to LucityCloud for Use with Lucity Import & Update

Brian Treff

Beginning with Lucity 2017, the Lucity Import & Update (I&U) program can download source files from an Amazon S3 bucket for processing. This workflow was designed specifically for LucityCloud customers who want to automate the processing of on-premise data sources (e.g. – a scheduled SCADA export). Lucity deploys every LucityCloud customer environment with an S3 bucket for uploading files, along with access credentials.

Note: This document assumes a working knowledge of the I&U program and therefore is not a complete step-by-step reference.

General Workflow

The process for automating data uploads to LucityCloud involves the following steps:

  • Customer chooses an existing or new on-premise directory to store the data file they wish to upload (e.g. – a scheduled export from a SCADA system).
  • Customer needs a workstation or server on which to create a Windows scheduled task that uploads the staged data file to their S3 bucket. While there are a variety of ways to upload to S3, the best method for automation in a Windows environment is the AWS Tools for Windows PowerShell, which are a free download from AWS. Install them on the on-premise workstation or server where the scheduled task will run. After installation, the customer will create a Windows scheduled task that uses the Write-S3Object PowerShell command to upload the file to their bucket (an example command line follows below).
  • Customer remotes to their Desktop instance in LucityCloud and configures a scheduled Lucity I&U job that will import the file that they uploaded to the S3 bucket. The job will utilize the Pre-Processing tab for downloading the file.
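Before creating the scheduled task, the AWS Tools for Windows PowerShell must be present on the machine. One common way to install them is from the PowerShell Gallery; this is standard AWS tooling rather than anything Lucity-specific, so check AWS's current installation guidance for your PowerShell version:

```powershell
# Install the AWSPowerShell module from the PowerShell Gallery
# (run from an elevated PowerShell prompt)
Install-Module -Name AWSPowerShell

# Verify the upload cmdlet used in this document is available
Get-Command Write-S3Object
```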

PowerShell Command and Scheduled Task for Upload

The following PowerShell command should be used for uploading a file to the customer's S3 bucket:

Write-S3Object -BucketName MyBucketName -File "MyFileName" -Key "import/myfile.txt" -Region MyRegion -ServerSideEncryption AES256 -AccessKey MyAccessKey -SecretKey MySecretKey


  • MyBucketName = the customer's temp S3 bucket name (Lucity will provide this value)
  • MyFileName = the full path to the on-premise file being uploaded (e.g. – "C:\somedirectory\myfile.txt")
  • import/myfile.txt = the name of the file being uploaded, prefixed with the "import/" key
  • MyRegion = the customer's AWS region (Lucity will provide this value)
  • MyAccessKey = the customer's temp S3 bucket access key (Lucity will provide this value)
  • MySecretKey = the customer's temp S3 bucket secret key (Lucity will provide this value)

Important parameter notes:

  • Enclose values for -File and -Key parameters in double quotes
  • The value specified for -Key must always begin with "import/"
  • Bucket, file, and key names are case-sensitive

While you can enter this command manually when configuring the Action in Task Scheduler, it is recommended that you save the command as a PowerShell script and simply point to the script from Task Scheduler. To do this, enter the complete command in a text editor and save the file with a ".ps1" extension (e.g. – myscript.ps1).
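For example, a minimal myscript.ps1 might look like the following. All values are placeholders to be replaced with the values Lucity provides; the try/catch logging is an optional addition for unattended runs, not part of the required command:

```powershell
# myscript.ps1 -- upload the staged export to the LucityCloud S3 bucket.
# Replace every placeholder value with the values Lucity provides.
try {
    Write-S3Object -BucketName "MyBucketName" `
                   -File "C:\somedirectory\myfile.txt" `
                   -Key "import/myfile.txt" `
                   -Region "MyRegion" `
                   -ServerSideEncryption AES256 `
                   -AccessKey "MyAccessKey" `
                   -SecretKey "MySecretKey"
}
catch {
    # Log failures so a scheduled, unattended run leaves a trace
    Add-Content -Path "C:\somedirectory\upload-errors.log" `
                -Value "$(Get-Date -Format o) $_"
    exit 1
}
```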

The Action of the scheduled task should look like this:


The complete argument is cut off within the Edit Action dialog, but reads:

-ExecutionPolicy Bypass c:\myscripts\myscript.ps1

Finish creating the scheduled task by providing a Name, Description, security options (make sure the task can run when no user is logged on), and a trigger (when the script will run). Please note that any file uploaded to this bucket is purged after 14 days, regardless of whether it has been processed.
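If you prefer to script the task creation itself rather than use the Task Scheduler GUI, the built-in schtasks.exe utility can create an equivalent task. The task name, trigger time, and run-as account below are illustrative, not Lucity requirements (note the explicit -File switch before the script path):

```
schtasks /Create /TN "Upload SCADA Export to LucityCloud" ^
         /TR "powershell.exe -ExecutionPolicy Bypass -File c:\myscripts\myscript.ps1" ^
         /SC DAILY /ST 02:00 /RU SYSTEM
```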

Configuring the Lucity Import & Update Job

To set up I&U to download the file that has been uploaded to S3, you will need to configure a Pre-Processing step and set up a file-based Data Source for the I&U job:

  • Pre-Processing - Choose Amazon S3 as the Download Source, then provide values for Bucket, Region, and KeyName. Leave all other values blank, including AccessKey and SecretKey; these two values are unnecessary because the customer's cloud Desktop instance implicitly has access to their bucket. Prefix is not currently utilized and should also be left blank at this time. You may optionally choose Delete Source File after Download; otherwise, the file in S3 will be overwritten the next time the customer uploads a file with the same name. As stated in the previous section, the KeyName must always start with "import/". Note: Only a single file may be processed at a time; if you require multiple files, you will need to configure multiple jobs.


  • Data Source - On the Data Source tab, set up a Text File, Access, or OLE (in the case of an Excel file) data source that points to where you want I&U to place the file it downloads from S3. The filename does not have to match the file in S3 – I&U can rename the file "in flight" – but it's less confusing to keep them the same.


Finally, prepare the remainder of the I&U job as usual (i.e. – General tab, Mapping tab, etc.) and make sure you schedule it by clicking the calendar icon in I&U. More information is available at:
