Online Magazine
Database check-up

Without customer or financial data, operations quickly come to a standstill. It is therefore important not to expose databases to unnecessary risks and to keep them fit for current requirements. This 4-point check-up shows you where common problems hide and how you can solve them.
By Roland Stirnimann

1. Lack of redundancy
Lack of redundancy is a classic, yet still common, reason why companies repeatedly experience IT infrastructure failures or data loss. In essence, it means that no sufficiently powerful system is available to compensate for a failure of your database infrastructure or data centre and so prevent operations from grinding to a halt or data from being lost. The consequences can include high recovery costs, lost productivity, damage to your image and even threats to the company's existence.
Our tip
Ideally, you should separate the systems geographically to ensure redundancy. That way, even if an entire data centre fails, you do not lose your IT infrastructure or your data. For small businesses, the cloud can serve as the second location. Highly available database systems that avert the worst-case scenario with standby databases are state of the art. Oracle, for example, offers this with Data Guard for the Enterprise Edition. Trivadis has its own product, db* STANDBY, which provides this standby function for Oracle's cost-effective Standard Edition as well. This lets you protect your data to the same standard without having to switch to the more expensive Oracle Enterprise Edition.
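To illustrate what operating such a setup involves in practice, the following sketch checks the replication lag of an Oracle physical standby by querying the v$dataguard_stats view. The connection details, service name and five-minute threshold are placeholders for illustration only; the script is a simplified example and not part of Data Guard or db* STANDBY.

# Minimal sketch: check replication lag on an Oracle physical standby.
# Connection details (user, password, DSN) and the threshold are
# illustrative placeholders, not part of any Trivadis product.
import oracledb

STANDBY_DSN = "standby-host/ORCLSB"      # hypothetical standby service
MAX_LAG_SECONDS = 300                    # alert if lag exceeds 5 minutes

def interval_to_seconds(value: str) -> int:
    """Convert a '+DD HH:MI:SS' interval string from v$dataguard_stats."""
    days, clock = value.lstrip("+").split(" ")
    hours, minutes, seconds = (int(part) for part in clock.split(":"))
    return int(days) * 86400 + hours * 3600 + minutes * 60 + seconds

def check_standby_lag() -> None:
    with oracledb.connect(user="monitor", password="...", dsn=STANDBY_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT name, value FROM v$dataguard_stats "
                "WHERE name IN ('transport lag', 'apply lag')"
            )
            for name, value in cur:
                lag = interval_to_seconds(value)
                status = "OK" if lag <= MAX_LAG_SECONDS else "ALERT"
                print(f"{status}: {name} = {lag} s")

if __name__ == "__main__":
    check_standby_lag()

A check like this, run regularly, tells you whether the standby would actually be usable in an emergency, which is the whole point of the redundancy.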
2. Poorly planned backups
There should be no question that your data must be backed up regularly and automatically. However, this is precisely where problems often arise. If backup operations are poorly planned, or start simultaneously and therefore overlap, load peaks are the result. Backups may then be incomplete due to overloading, or they may interfere with your employees' day-to-day work by slowing down applications. Such problems usually have to be resolved manually and entail additional effort and costs. In addition, statically controlled backups lead to a great deal of redundancy and an inflated backup volume, which also drives up costs unnecessarily.
Our tip
We recommend having backup scheduling carried out intelligently and dynamically by an algorithm. It reacts flexibly to changing situations and schedules backups automatically in such a way that problems are avoided and static backup plans can be dispensed with. Trivadis offers the product db* BACKUP for precisely this purpose: an intelligent algorithm, aligned to defined guidelines, automatically schedules the backup jobs based on the current system situation. Backups thus no longer run in a static, inflexible manner but are demand-oriented. All tasks can be centrally managed and evaluated, for example to meet compliance requirements. The benefits are obvious: cost savings and increased efficiency through fewer manual tasks and a smaller backup volume, as well as greater transparency and thus better-quality backups.
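The sketch below illustrates the basic idea of demand-oriented scheduling: backup jobs are staggered within a backup window so that only a limited number run in parallel and load peaks are avoided. The job list, window and concurrency limit are invented examples; db* BACKUP's actual algorithm is considerably more sophisticated and is not reproduced here.

# Simplified illustration of demand-oriented backup scheduling: jobs are
# staggered inside a backup window so that no more than a fixed number of
# backups run at the same time. The job list, window and concurrency limit
# are made-up examples; this is not the algorithm used by db* BACKUP.
from dataclasses import dataclass
from datetime import datetime, timedelta
import heapq

@dataclass
class BackupJob:
    database: str
    estimated_minutes: int   # e.g. derived from the size of the last backup

def schedule(jobs: list[BackupJob], window_start: datetime, max_parallel: int = 2):
    """Greedy schedule: place the next (largest) job on the earliest free slot."""
    # Each heap entry is the time at which one backup "slot" becomes free again.
    slots = [window_start] * max_parallel
    heapq.heapify(slots)
    plan = []
    for job in sorted(jobs, key=lambda j: j.estimated_minutes, reverse=True):
        start = heapq.heappop(slots)                     # earliest free slot
        end = start + timedelta(minutes=job.estimated_minutes)
        plan.append((job.database, start, end))
        heapq.heappush(slots, end)
    return plan

if __name__ == "__main__":
    jobs = [BackupJob("ERP", 90), BackupJob("CRM", 45), BackupJob("DWH", 120)]
    for db, start, end in schedule(jobs, datetime(2020, 1, 1, 22, 0)):
        print(f"{db}: {start:%H:%M} - {end:%H:%M}")

Even this toy version shows the principle: the schedule adapts automatically when jobs are added or their estimated durations change, instead of relying on a fixed timetable.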
3. Unheeded programming guidelines and code quality
Other risk factors that should not be underestimated are disregarded programming guidelines and poor code quality. Especially when several developers work together, resource-consuming discrepancies can arise, above all during maintenance, if not everyone adheres to the same standards and coding rules or if the readability of the code cannot be guaranteed. If code quality is insufficient, the consequences can even include errors or bugs that lead to undesirable side effects in the database or to security gaps.
Our tip
To minimise the risk described above, you must be able to check compliance with programming guidelines and the quality of the code. It is efficient and cost-effective to carry out these checks automatically and in a controlled manner. db* CODECOP from Trivadis is a tool for this purpose. It analyses and evaluates SQL and PL/SQL code against defined rules. Deviations from the coding guidelines are detected at an early stage and can be eliminated before they lead to problems. The check can be repeated as often as desired. You can also generate reports on code quality, together with recommendations for quality-enhancing measures, and so move step by step towards perfect code.
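The following sketch shows the principle of rule-based code checking: each rule is a pattern applied to PL/SQL source, and every hit becomes a finding with file and line number. The two rules are illustrative only and are far simpler than the checks a tool such as db* CODECOP performs against the actual coding guidelines.

# Minimal sketch of rule-based code checking: each rule is a regular
# expression applied line by line to PL/SQL source. The two rules shown
# are illustrative examples, not the official Trivadis guidelines and not
# the rule engine of db* CODECOP.
import re

RULES = [
    ("Avoid SELECT * in production code",
     re.compile(r"\bselect\s+\*", re.IGNORECASE)),
    ("WHEN OTHERS THEN NULL hides errors",
     re.compile(r"when\s+others\s+then\s+null", re.IGNORECASE)),
]

def check_source(name: str, source: str) -> list[str]:
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for message, pattern in RULES:
            if pattern.search(line):
                findings.append(f"{name}:{lineno}: {message}")
    return findings

if __name__ == "__main__":
    sample = """
    BEGIN
      SELECT * INTO l_row FROM employees WHERE id = 1;
    EXCEPTION
      WHEN OTHERS THEN NULL;
    END;
    """
    for finding in check_source("sample.sql", sample):
        print(finding)

Because such checks are cheap to run, they can be repeated on every change, which is exactly what makes automated, regulated checking pay off.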
4. Insufficient or no capacity management
Your database needs to be able to do a lot, ideally exactly what the current workloads require. Insufficient capacity leads to problems, while over-provisioning quickly becomes expensive. For companies with insufficient or no capacity management for their databases, the latter is often the case: they want to be on the safe side and therefore pay for far more than their business actually needs. This is particularly painful if you allocate too many resources in the cloud, where the pay-as-you-use principle applies.
Our tip
You can easily eliminate this problem by systematically analysing and planning your needs. For this task, too, there are tools that automate the work. Trivadis offers the product db* CAPMAN to handle capacity and resource planning for Oracle, MySQL, MariaDB, PostgreSQL and MSSQL database systems. db* CAPMAN continuously collects all relevant performance and configuration data from your servers and databases, which can then be clearly displayed and evaluated in reports and diagrams. This lets you keep track of your needs and forecast and plan exactly what your ideal, and thus efficient and cost-effective, capacity management looks like.
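The sketch below illustrates the forecasting part of such an analysis: a linear trend is fitted to historical database size measurements to estimate when the allocated storage would be exhausted. The sample figures and the allocation limit are invented, and db* CAPMAN's own data collection and models are not reproduced here.

# Simplified illustration of capacity forecasting from collected metrics:
# fit a linear trend to historical database size samples and estimate when
# the allocated storage would be exhausted. The sample data and allocation
# are invented placeholders.
def linear_fit(xs, ys):
    """Least-squares slope and intercept for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

def days_until_full(sizes_gb, allocated_gb):
    """sizes_gb: one measurement per day, oldest first."""
    days = list(range(len(sizes_gb)))
    growth_per_day, intercept = linear_fit(days, sizes_gb)
    if growth_per_day <= 0:
        return None                      # shrinking or flat: no exhaustion in sight
    latest = growth_per_day * days[-1] + intercept
    return (allocated_gb - latest) / growth_per_day

if __name__ == "__main__":
    daily_samples = [410, 415, 423, 430, 436, 444, 451]   # GB, one per day
    remaining = days_until_full(daily_samples, allocated_gb=500)
    print(f"Estimated days until storage is full: {remaining:.0f}")

A forecast of this kind, fed with continuously collected data, is what turns raw monitoring figures into capacity planning you can actually act on.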
