Testing Methodology

With the environment-specific variables and configuration options established, let's now run some useful tests.

We recommend running two separate Test Scenarios:

  • Test Scenario 1:  Single ‘Large’ File
  • Test Scenario 2:  ‘Large’ Set of ‘Small’ Files

Attunity is pleased to offer these tables as guidance (feel free to fill in values):

Configuration Overview:
When configuring a job within the RepliWeb client, choose Properties:

Within the General tab --> Report Options --> Transfer Report Style, choose 'Detailed'.
(This allows us to monitor performance with fine granularity.)

Within the Performance tab, we will adjust the following fields:

  • Concurrent Transfers
  • Transfer Engine
  • Compression
  • Concurrent Sessions

Test Scenario 1: "Single Large File"

1.  Create a Baseline.

  1. Configure a new upload job.  Supply the common configuration per the Quick Start guide.
  2. General tab --> Report Options --> Detailed.
  3. Choose a single file, ~1GB in size.  Files --> Include --> browse to the file.
  4. Performance tab --> per the configuration outlined in the table, select the following:
  • Concurrent Transfers = 1
  • Transfer Engine = Large File Accelerator (default for jobs to S3)
  • Compression = leave unselected
  • Concurrent Sessions = 2
  5. Scheduling tab --> choose "Scheduled" --> "Run on Demand".  (This allows for easier modification and submission of child jobs.)
  6. Submit the job.
  7. Allow the job to run for more than 60 seconds.  Slower networks may require a longer sample time (upwards of ~3 min) to produce a useful data set.  Then abort the job (it is not necessary to let it run to completion).
  8. Right-click the aborted instance of the job and choose 'Reports' --> 'Transfer'.
  9. Within the Transfer Report, note the 'Start Time', then the volume of data transferred at the 60-second mark.  In this case, 1.3GB.  (A sketch for turning such samples into throughput figures follows this procedure.)
  10. Begin capturing this data in the table provided.

2.  Adjust Job Configurations as Outlined in the Table.

  1. Once you've established a baseline, adjust configurations to locate the 'Sweet Spot' (the combination that moves the most data in 60 seconds).
  2. Simply 'modify' the on-demand job.
  3. Within the Performance tab, adjust the number of Concurrent Sessions.
  4. Execute the job as outlined above, making note of the results.
  5. When increasing Concurrent Sessions actually slows down the transfer, you've narrowed down the range.
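Each report sample is a data volume over a time window, so converting it to a throughput figure makes runs with different sample times comparable (recall the ~3-minute samples on slower networks). A minimal sketch in Python, using the 1.3GB-in-60-seconds baseline above as the worked example (the function name and units are illustrative, not part of RepliWeb):

```python
# Convert a Transfer Report sample (data volume over a sample window)
# into throughput in megabits per second, so runs of different
# lengths can be compared directly.

def throughput_mbps(gigabytes_transferred: float, seconds: float) -> float:
    """Throughput in Mbit/s for a sample read off the Transfer Report."""
    bits = gigabytes_transferred * 1e9 * 8  # GB -> bits (decimal units)
    return bits / seconds / 1e6

# Example: 1.3 GB moved in a 60-second sample, as in the baseline above.
print(f"{throughput_mbps(1.3, 60):.0f} Mbit/s")  # ~173 Mbit/s
```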

Test Scenario 2: "Large Set of Small Files"

1.  Create a Baseline.

  1. Configure a new upload job.  Supply the common configuration per the Quick Start guide.
  2. General tab --> Report Options --> Detailed.
  3. Choose a set of files totaling ~1GB, with each individual file only a few KB in size.  (A helper sketch for generating such a test set follows this procedure.)
  4. Performance tab --> per the configuration outlined in the table, select the following:
  • Concurrent Transfers = 3
  • Transfer Engine = Large File Accelerator (default for jobs to S3)
  • Compression = leave unselected
  • Concurrent Sessions = 2
  5. Scheduling tab --> choose "Scheduled" --> "Run on Demand".  (This allows for easier modification and submission of child jobs.)
  6. Submit the job.
  7. Allow the job to run for more than 60 seconds.  Slower networks may require a longer sample time (upwards of ~3 min) to produce a useful data set.  Then abort the job (it is not necessary to let it run to completion).
  8. Right-click the aborted instance of the job and choose 'Reports' --> 'Transfer'.
  9. Within the Transfer Report, note the 'Start Time', then the volume of data transferred at the 60-second mark.
  10. Begin capturing this data in the table provided.

2.  Adjust Job Configurations as Outlined in the Table.

  1. Once you've established a baseline, adjust configurations to locate the 'Sweet Spot' (the combination that moves the most data in 60 seconds).
  2. Simply 'modify' the on-demand job.
  3. Within the Performance tab, adjust the number of Concurrent Sessions and Concurrent Transfers.
  4. Execute the job as outlined above, making note of the results.
  5. When increasing Concurrent Sessions or Concurrent Transfers actually slows down the transfer, you've narrowed down the range.
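Scenario 2 calls for roughly 1GB spread across files of only a few KB each. If no such data set is handy, a throwaway script along these lines can generate one (the directory name and the 4KB file size are placeholders, not values from this guide):

```python
# Generate a test set of many small files totaling ~1 GB.
import os

TARGET_TOTAL = 1 * 1024**3      # ~1 GiB overall
FILE_SIZE = 4 * 1024            # a few KB per file, per the scenario
OUT_DIR = "s3_small_file_test"  # hypothetical path; adjust to taste

os.makedirs(OUT_DIR, exist_ok=True)
count = TARGET_TOTAL // FILE_SIZE
for i in range(count):
    with open(os.path.join(OUT_DIR, f"file_{i:07d}.bin"), "wb") as f:
        f.write(os.urandom(FILE_SIZE))  # random payload defeats compression

print(f"Wrote {count} files of {FILE_SIZE} bytes to {OUT_DIR}/")
```

Random payloads are deliberate: compressible test data would flatter any configuration with Compression enabled and skew the comparison.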

And now you've found the Sweet Spot!
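The narrowing procedure above is essentially a hill climb: raise a setting, re-run, and stop once throughput falls. Purely as a sketch of that logic in Python (measure_throughput is a hypothetical stand-in for the manual modify/run/abort/report cycle; this guide describes no such RepliWeb API):

```python
# Illustration of the narrowing logic described above: raise Concurrent
# Sessions one step at a time and stop as soon as throughput drops.
# measure_throughput() stands in for the manual cycle of modifying the
# on-demand job, running it ~60 s, aborting, and reading the report.

def measure_throughput(concurrent_sessions: int) -> float:
    """Return GB moved in the 60-second sample for this setting."""
    raise NotImplementedError("manual step: run the job and read the report")

def find_sweet_spot(start: int = 2, step: int = 2, limit: int = 32) -> int:
    best_sessions = start
    best_rate = measure_throughput(start)
    sessions = start + step
    while sessions <= limit:
        rate = measure_throughput(sessions)
        if rate <= best_rate:
            # Adding sessions slowed the transfer: the range is bracketed,
            # and the previous setting is the sweet spot (or close to it).
            break
        best_sessions, best_rate = sessions, rate
        sessions += step
    return best_sessions
```

Sweeping one setting at a time, as Scenario 1 does with Concurrent Sessions, keeps each change in throughput attributable to a single knob.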
