
Large data repositories, now and in the future – part 2

As promised last week, today we continue with two more golden rules for your data repository architecture, along with a short hint about helpful solutions.

Ease backup / restore procedures and avoid data lock-in

A common repository architecture stores your system and custom application information in a database while the content resides in a normal filesystem. This kind of architecture creates a problem: all the content is locked into the current application and cannot easily be accessed by another one. It also means that your backup/restore procedure consists of both a database backup/restore and a disk filesystem backup/restore, which can be complex, laborious and time-consuming. The architecture can be improved by storing the content on a storage device that supports standard protocols such as HTTP(S) or CIFS, which opens the content up to different applications and thus enables private-cloud architectures. Moreover, if backup-less storage devices are used, the IT backup/restore procedure is massively simplified, as it consists of a database backup/restore only.
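To illustrate the idea, here is a minimal sketch of how an application might write and read content over plain HTTP(S) instead of a local filesystem. The endpoint, credentials and object names below are hypothetical placeholders, not the API of any specific product:

# Minimal sketch: storing content over HTTP(S) instead of a local filesystem.
# The endpoint, credentials and object identifiers are hypothetical placeholders.
import requests

STORE_URL = "https://content-store.example.com/repository"  # hypothetical endpoint
AUTH = ("repo_user", "repo_password")                        # hypothetical credentials

def put_content(object_id: str, data: bytes) -> None:
    """Upload a content object; the database keeps only the object_id reference."""
    response = requests.put(f"{STORE_URL}/{object_id}", data=data, auth=AUTH, timeout=30)
    response.raise_for_status()

def get_content(object_id: str) -> bytes:
    """Fetch a content object by its identifier, from any application that knows the URL."""
    response = requests.get(f"{STORE_URL}/{object_id}", auth=AUTH, timeout=30)
    response.raise_for_status()
    return response.content

if __name__ == "__main__":
    put_content("invoice-2014-001.pdf", b"%PDF-1.4 ... example payload ...")
    print(len(get_content("invoice-2014-001.pdf")), "bytes retrieved")

Because the content is addressed by a URL rather than a path on one server's disk, any application or backup tool that speaks HTTP(S) can reach it, which is exactly what avoids the lock-in described above.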

Increase your content availability and object recovery options

A normal disk filesystem cannot be configured into a highly available content architecture, because applications built on top of your repository will not switch to a secondary replica filesystem on their own. Likewise, when application objects are lost (for example through a corrupted database) or deleted by mistake, the recovery options are limited to a problematic restore procedure.
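The following short sketch shows what applications would otherwise have to do themselves: try a primary content replica and fall back to a secondary one. Both endpoints are hypothetical, and the point is that a proper storage platform or connector handles this at storage level so application code does not have to:

# Minimal sketch of application-level failover between two content replicas.
# Both endpoints are hypothetical placeholders.
import requests

ENDPOINTS = [
    "https://content-primary.example.com/repository",    # preferred replica
    "https://content-secondary.example.com/repository",  # fallback replica
]

def read_with_failover(object_id: str) -> bytes:
    """Try each replica in order and return the first successful response."""
    last_error = None
    for base_url in ENDPOINTS:
        try:
            response = requests.get(f"{base_url}/{object_id}", timeout=10)
            response.raise_for_status()
            return response.content
        except requests.RequestException as error:
            last_error = error  # remember the failure and try the next replica
    raise RuntimeError(f"all replicas failed for {object_id}") from last_error

When the repository or its storage connector provides the failover, applications keep a single endpoint and need none of this hand-rolled logic.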


Running large data repositories should push IT managers and chief information officers to rethink their strategy, since a plain filesystem offers few possibilities for automatic content failover/fallback or for recovering application objects from the information kept at content level.


Luckily, solutions covering the above storage requirements already exist on the market, and one of the best is the Hitachi Content Platform (HCP) from HDS, which addresses the storage-related problems with a resilient, secure, backup-less WORM environment. Having one of the best storage options, however, does not solve your problems entirely: a specialized connector is needed to link a data repository with HCP storage in order to truly benefit from all the features the HCP platform offers.


Examples of such connectors are the ones offered by Star Storage for the HCP platform, which enable EMC Documentum (Star Documentum Connector for HCP) and Alfresco (Star Alfresco Connector for HCP) repositories to use HCP as external storage. They link these systems with the best and fastest technology available while providing automatic failover/fallback options as well as application object recovery based on the information stored on HCP.