Performance Metrics

We understand that you may ask us questions such as, “Just how fast can Shadowbase software move my data?” and “How much overhead will the Shadowbase data replication engine incur?”
Performance Requirements

With proper configuration and infrastructure (e.g., sufficient network, disk, and processor capacity), we have been able to meet all of our customers’ load and scaling requirements to date.

While we strive to be easy to work with and communicate openly, we generally do not publish performance metrics regarding the HPE Shadowbase product suite, for the following reasons:

  1. Broad, generic performance metrics for a highly extensible and scalable technology are not feasible. HPE Shadowbase is not a universal, one-configuration-fits-all solution: a configuration tuned for one situation may not apply to another, creating a wide margin for error. Typically, our finely optimized and tuned testing environments do not exactly emulate a specific customer’s system, communication connections, data, and applications. We therefore feel that publishing metrics with such wide margins is not beneficial and can be misleading.
  2. We strongly suggest benchmarking HPE Shadowbase software using the same configuration as the final implementation (the same applications, data, operating system, platform, database, CPU/disk, communications equipment, etc.), to obtain accurate performance and capacity metrics.
    • For customers who do not have suitable testing environments, the HPE ATC may be available to provide one. In these cases, we suggest customers contact their HPE account team to help arrange a suitable ATC testing environment.
    • Of course, we are available to assist in these performance tuning, capacity analysis, and testing efforts, and look forward to working with your team should you embark on such a project.
  3. A number of our partner agreements prevent us from publishing performance benchmark numbers for certain platform and database combinations.
  4. The HPE Shadowbase replication architecture is designed to scale and handle massive loads and can be expanded to read, send, and apply large quantities of data in parallel.
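The parallel scale-out described in item 4 can be illustrated with a minimal, generic sketch. This is not Shadowbase’s actual implementation; all names here are hypothetical. The idea is simply that a reader feeds captured changes into a shared queue, and multiple applier workers drain that queue concurrently, so apply throughput grows with the number of workers:

```python
import queue
import threading

def replicate(changes, num_appliers=3):
    """Generic sketch: fan captured change events out to parallel appliers.

    A single "reader" feeds changes into a queue; num_appliers worker
    threads apply them concurrently (here, applying just means appending
    to a list as a stand-in for writing to the target database).
    """
    q = queue.Queue()
    applied = []
    lock = threading.Lock()

    def applier():
        while True:
            change = q.get()
            if change is None:      # sentinel: no more work for this worker
                break
            with lock:
                applied.append(change)  # stand-in for the target-side apply

    workers = [threading.Thread(target=applier) for _ in range(num_appliers)]
    for w in workers:
        w.start()

    for change in changes:          # reader side: enqueue captured changes
        q.put(change)
    for _ in workers:               # one shutdown sentinel per worker
        q.put(None)
    for w in workers:
        w.join()
    return applied

result = replicate([{"op": "insert", "row": i} for i in range(10)])
print(len(result))
```

Real replication engines must also preserve transaction ordering and handle failures, which this sketch deliberately omits; it only shows why adding appliers lets the apply side keep pace with a heavy change stream.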