HPE Shadowbase Streams for Data Integration and Synchronization

Replicate Data Changes from One Database into Another

Integrate Data for Competitive Advantage
Over time, the number of legacy applications developed to support an enterprise’s operations grows significantly. Applications maintain databases of information, but the database contents are not typically exposed for use by other independently developed applications.

As companies grow more reliant on their IT resources, many find themselves storing data in siloed legacy databases. When companies merge, it is often necessary to join disparate databases into a common repository for the new corporation.

Real-world Shadowbase Data Integration Use Cases

Integrate heterogeneous databases with HPE Shadowbase Data and Application Integration for a Multi-database Solution

The Data Continuum

From onset to action, there is value in data. The Data Continuum represents data’s typical lifecycle:

  1. Create – From the Edge, through IoT and other means, data is measured and tracked.
  2. Store – Once identified, the data is written to a database or other form of storage.
  3. Analyze – Analytics programs inspect the data, scanning for anomalies and other valuable information.
  4. Learn – The data is translated into meaningful information stating facts, key insights, and accurate metrics.
  5. Act – Strategic analysis of the information creates specific and advantageous actions.
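The five stages above can be sketched in code. The functions and data below are purely illustrative (a sensor-reading scenario invented for this example), not part of any Shadowbase product:

```python
# Minimal sketch of the Data Continuum stages; names and data are hypothetical.

def create():
    # 1. Create: readings arrive from the edge (e.g., IoT sensors).
    return [{"sensor": "pump-1", "temp_c": t} for t in (61, 63, 97, 62)]

def store(readings, db):
    # 2. Store: persist each reading to a database (a list stands in here).
    db.extend(readings)

def analyze(db, threshold=90):
    # 3. Analyze: scan stored data for anomalies.
    return [r for r in db if r["temp_c"] > threshold]

def learn(anomalies):
    # 4. Learn: turn raw anomalies into a meaningful metric.
    return {"anomaly_count": len(anomalies)}

def act(insight):
    # 5. Act: choose a concrete action from the insight.
    return "dispatch technician" if insight["anomaly_count"] else "no action"

db = []
store(create(), db)
print(act(learn(analyze(db))))  # -> dispatch technician
```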

Companies that record and analyze data about their business gain insight into their performance, which helps determine the next action to take.

Hewlett Packard Enterprise

Application Capacity Expansion (ACE)

Application Capacity is Key to IT Resource Scaling

Application capacity defines how many users an IT application service can support in a given response time. Another critical factor is the application’s ability to scale, which is dependent upon the application’s architecture.

The two most important ways to increase the number of users an application can process are:

  • Within the current environment (scaling up)
  • Across multiple environments (scaling out)

Scaling up usually consists of replacing existing hardware with more powerful versions, whereas scaling out usually consists of adding more processing power by spreading the load across multiple environments. Scaling up is often limited by the maximum capacity that a single monolithic environment can achieve.
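The scale-out approach can be sketched with a simple routing scheme. This is a minimal illustration, assuming user sessions are spread across environments by a stable hash; the node names and routing logic are hypothetical, not part of any product:

```python
# Sketch of scale-out capacity: spread user sessions across multiple
# environments via a stable hash (illustrative only; real load routing
# is considerably more involved).
import hashlib

NODES = ["node-a", "node-b", "node-c"]

def route(user_id: str, nodes=NODES) -> str:
    # A stable hash ensures a given user always lands on the same environment.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Adding another node (scaling out) raises total capacity without
# replacing existing hardware, avoiding the single-environment ceiling.
loads = {n: 0 for n in NODES}
for i in range(1000):
    loads[route(f"user-{i}")] += 1
print(loads)  # roughly even spread across the three nodes
```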


Big Data Solutions

Information is Exploding at an Unprecedented Rate

The amount of information being generated each year is exploding at an unprecedented rate. It is estimated that 80% of the world’s data was generated in the last two years, and this rate is increasing. Social media such as Twitter and Facebook, articles and news stories posted online, blogs, emails, YouTube and other videos are all contributing to big data.

In today’s 24×7 online environment, having query access to a remote database is not sufficient. Querying for data is a lengthy and complex process, and applications must react far more quickly to data changes than querying allows. Instead, a data pipeline can be created to capture data changes at the source and deliver them to consumers as they occur.


Connect to the Cloud & Turn Data into Intelligence

Hybrid Cloud Architectures

HPE Shadowbase solutions play a pivotal role in integrating the cloud with private (internal) IT infrastructure, enabling hybrid approaches that assign critical processing to highly available private systems such as HPE NonStop servers (among others), and noncritical processing to the public cloud.


Data Transformation Solutions

Create a Data Pipeline to Transform Data between Source and Target Database Formats

Shadowbase software can transform the data between source and target database formats either automatically or via Shadowbase User Exit customizations. Data may be aggregated, disaggregated, and/or transformed.

Use Shadowbase Data Transformation solutions for:
  • Database Specification (DBS) data mapping for Other Server databases, specifying which source tables are transformed into which target tables
  • Shadowbase Map (SBMAP) for mapping an HPE NonStop data source to an Other Server target in a scripting-like language (also known as creating a data transformation stream). No programming is required!
  • Shadowbase Data Definition Language Utility (SBDDLUTL) for transforming Enscribe files into their SQL table equivalents
  • Shadowbase SQL/MP Schema Conversion Utility (SBCREATP) for converting and mapping HPE NonStop SQL/MP table schemas into their target SQL equivalents
  • Shadowbase User Exits for extending Shadowbase replication to perform additional processing with either scripting or via embedding custom code into the replication engine
  • Limitless Transformation for extracting even more value from your data
Learn More About Data Transformation
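As a sketch of the kind of mapping these facilities perform, the example below transforms a source row into a target schema, aggregating and cleansing fields along the way. All field names and rules here are hypothetical, and a real deployment would use DBS, SBMAP, or a User Exit rather than standalone code:

```python
# Illustrative source-to-target transformation of the kind a Shadowbase
# User Exit or SBMAP stream performs; all field names are hypothetical.

SOURCE_ROW = {"CUST_NO": "000123", "FNAME": "ADA", "LNAME": "LOVELACE", "BAL": "1050"}

def transform(row: dict) -> dict:
    # Map source columns to target columns, cleansing and reformatting values.
    return {
        "customer_id": int(row["CUST_NO"]),    # strip leading zeros
        "full_name": f'{row["FNAME"].title()} {row["LNAME"].title()}',  # aggregate two fields into one
        "balance_cents": int(row["BAL"]) * 100,  # unit conversion
    }

print(transform(SOURCE_ROW))
# -> {'customer_id': 123, 'full_name': 'Ada Lovelace', 'balance_cents': 105000}
```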

Discovering Meaningful Patterns Using Data Integration

Applications Interoperate at the Event-Driven Level in Real Time

Applications that once were isolated can now interoperate at the event-driven level in real time. Critical data generated by one application is distributed to and acted upon immediately by other applications, enabling the implementation of powerful Event-Driven Architectures (EDA).
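A minimal sketch of this event-driven pattern, assuming replicated change events arrive on a queue. The queue, event shape, and consumer logic are illustrative assumptions, not a Shadowbase API:

```python
# Minimal event-driven sketch: a consumer reacts to replicated change
# events as they arrive (queue and event format are hypothetical).
import queue

events = queue.Queue()

def on_change(event):
    # A downstream application acts immediately on each replicated change.
    if event["table"] == "orders" and event["op"] == "INSERT":
        return f"fraud-check order {event['key']}"
    return "ignored"

# The source application emits changes; the distribution layer delivers them.
events.put({"table": "orders", "op": "INSERT", "key": 42})
events.put({"table": "audit", "op": "INSERT", "key": 7})

while not events.empty():
    print(on_change(events.get()))
# prints:
# fraud-check order 42
# ignored
```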

Several production use cases are included that illustrate how this data distribution technology brings new opportunities and value to various enterprises.


HPE Shadowbase Streams

Diverse Applications Interoperate at the Data Level

What is needed is a way for one application to immediately have real-time access to the data updated by another application, just as if that data were stored locally. Furthermore, big data analytics engines require a large network of tens, hundreds, or even thousands of heterogeneous, purpose-built servers, each performing its own portion of the task.

Since these systems must intercommunicate with each other in real-time, they must share an integrated high-speed, flexible, and reliable data distribution network.

Learn More About Shadowbase Streams

Use HPE Shadowbase Streams for Data Integration and Synchronization to:

Achieve Zero Data Loss (ZDL)
  • Elimination of data loss in the event of an outage
  • Elimination of data collisions in active/active architectures (future release)
Build Real-Time Event-Driven Applications
  • Rapid, reliable reformatting and transfer of large amounts of data between heterogeneous databases and applications in real time
  • Data distribution backbone for a big data analytics system
  • Elimination of middleware and application modification
Master Data Management
  • Uni-directional and bi-directional data synchronization
  • Transformation, filtering, cleansing, and consolidation of data
  • Same (homogeneous) or different (heterogeneous) source and target databases and platforms
  • Data warehouse feeds
  • Real-time data replication
  • Offline and online loading/integration
  • Trickle-feed and batch refreshing
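As an illustration of the cleansing and consolidation bullets above, the sketch below normalizes a feed and collapses duplicates into one golden record per key. The record layout and validation rules are hypothetical, invented for this example:

```python
# Illustrative cleansing and consolidation step of the kind a master data
# management feed performs before loading a target (fields are hypothetical).

feed = [
    {"id": 1, "email": " ADA@EXAMPLE.COM "},
    {"id": 1, "email": "ada@example.com"},   # duplicate of id 1
    {"id": 2, "email": "grace@example.com"},
    {"id": 3, "email": ""},                  # fails validation, filtered out
]

def cleanse(record):
    # Normalize fields so duplicates compare equal.
    return {"id": record["id"], "email": record["email"].strip().lower()}

def consolidate(records):
    # Keep one golden record per key; later records win.
    golden = {}
    for r in map(cleanse, records):
        if r["email"]:  # filter records that fail validation
            golden[r["id"]] = r
    return list(golden.values())

print(consolidate(feed))
# -> [{'id': 1, 'email': 'ada@example.com'}, {'id': 2, 'email': 'grace@example.com'}]
```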
Transform Data
  • Database Specification (DBS) Mapping
  • Shadowbase Map (SBMAP)
  • Shadowbase Data Definition Language Utility (SBDDLUTL)
  • Shadowbase User Exits
  • Miscellaneous Additional Transformation Methods