Friday, November 2, 2012

Is the baby ugly?

Oracle Exadata is a powerful new database platform that has created quite a stir in the database market. Its combination of integrated hardware and software dedicated to running the Oracle database as fast as possible practically sells itself.

I have a problem with Oracle Exadata, though: it hides bad code with brute horsepower. Customers see Oracle Exadata as the answer for environments where they are hitting a performance wall. Most of these customers were probably in the same boat several years ago, when a hardware upgrade looked like the only solution. Given that, hiding bad code is not exclusively an Exadata problem; Exadata just does a better job of hiding it than any previous database platform.

When I use the term “code,” I’m referring to processes and procedures that interact with the database. Calling the code “bad” does not necessarily imply that the code itself is poorly written, but rather that it’s being used in a manner that produces less than desirable results.

Over the past couple of months, I have witnessed situations where even the advent of Oracle Exadata has not solved all the problems it was projected to solve. These situations were caused by bad coding practices. These practices were probably put in place at a time when they were “good enough” to run things as they were, but had neither the flexibility nor scalability to sustain increased operations down the road.

One situation involved a process that, to perform a multi-million-row update of a single database table, executed one update statement, complete with an explicit commit, for each row to be updated. Combine this with the practice of using literal values instead of bind variables, and the shared pool got overwhelmed. Tricky adjustments could be made to keep things running while the bad code was examined and revised, but I wonder whether they would have needed to deploy Oracle Exadata at all had these coding deficiencies been identified and resolved earlier.
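To illustrate, here is a hypothetical PL/SQL sketch of that anti-pattern (the table and column names are invented, not the customer’s actual code), followed by a set-based alternative that does the same work with one statement and one commit:

    BEGIN
      -- Anti-pattern: one statement per row, built from literals,
      -- each followed by its own commit.
      FOR r IN (SELECT id, new_status FROM staging_updates) LOOP
        EXECUTE IMMEDIATE
          'UPDATE orders SET status = ''' || r.new_status ||
          ''' WHERE id = ' || r.id;  -- literals: a new cursor in the shared pool every time
        COMMIT;                       -- per-row commit: constant redo and undo overhead
      END LOOP;
    END;
    /

    -- Set-based alternative: one statement, one parse, one commit.
    UPDATE orders o
       SET o.status = (SELECT s.new_status
                         FROM staging_updates s
                        WHERE s.id = o.id)
     WHERE EXISTS (SELECT 1
                     FROM staging_updates s
                    WHERE s.id = o.id);
    COMMIT;

Even if the loop had to stay, binding the values (or simply using static SQL inside PL/SQL, which binds automatically) would have spared the shared pool.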

Another situation involved a process that selected rows from one database table one at a time, used each row’s values to search a second table, inserted the data into the second table if it wasn’t already there, committed the change, and then moved on to the next row. Needless to say, this practice will soon overwhelm the Exadata platform they are currently on, just as it overwhelmed their previous platform.
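Oracle has offered a single statement for exactly this select-check-insert pattern for years. Again, as a hypothetical sketch (the table and column names are mine):

    -- One MERGE replaces the whole row-by-row
    -- select / check / insert / commit loop.
    MERGE INTO target_table t
    USING source_table s
       ON (t.key_col = s.key_col)
     WHEN NOT MATCHED THEN
       INSERT (t.key_col, t.payload)
       VALUES (s.key_col, s.payload);
    COMMIT;  -- one commit for the whole operation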

Why do these practices continue to exist through one or more platform migrations? I doubt the main cause is simple ignorance that anything is wrong with the code, although that may be a contributing factor. Part of it could be blamed on disagreements between developers and DBAs, where each accuses the other of either causing the problem or being unable to tune the database. What I believe to be the main problem is forward inertia: it is so great that reviewing what has already been done is not considered an optimum use of resources. Another aspect of the problem is the “fix it fast” mentality, where the focus of effort is too narrow in scope.

These factors, along with rapidly evolving hardware, innovation, and short-term leases, are what lead organizations to consider platform migrations every few years. Migrating existing production environments to new platforms is a pricey and time-consuming exercise, though.

I’m not saying that companies can avoid migrating to new platforms as they grow, nor am I recommending contracting outside performance expertise at every opportunity. Simply put, I’m advocating a more intelligent use of resources, one that may make “hurry up and migrate” exercises unnecessary, so that migration is not a response to systems that “suddenly” cannot cope with the workload. Using resources proactively is much less costly than using them reactively.

One method that can help this situation is to send code developers to database training. A basic understanding of the database environment where their code will execute will pay off, ultimately saving the company thousands. The developers will gain a better understanding of how things happen in the database and learn about features that can make their processing more efficient.

Another method is to have a DBA work with the developers from the beginning of the development process, leveraging database knowledge at the start of the code lifecycle. The most efficient example of developer/DBA harmony I have seen was at a company where one DBA was assigned to support each enterprise application. This support started at the development level and continued all the way through testing, QA, and finally production. The DBA became well versed in the application’s code, and the developers gained the benefit of having the DBA bring database features into the development process that they would otherwise have been unaware of. Problems were minimal in these environments and could be identified quickly. Code deployments and upgrades went more smoothly as well.

In the absence of these methods, a periodic code review or performance review can bring bad coding practices to light before the term ‘migration’ starts getting tossed around. This is where contracting outside expertise can be of benefit.

There is an old saying in Information Technology that goes “Garbage in, garbage out.” This was true then and is true now, no matter how shiny, new, and powerful the garbage truck is.


