Friday, November 2, 2012
Oracle Exadata is a powerful new database platform that has created quite a stir in the database market. Its combination of integrated hardware and software dedicated to executing the Oracle database at the best possible speed practically sells itself.
I have a problem with Oracle Exadata, though: it hides bad code with brute horsepower. Customers see Exadata as the solution for environments where they are currently hitting a performance wall. Most of these customers were probably in the same boat several years ago, when a hardware upgrade looked like the only way out. In that sense, hiding bad code is not exclusively an Exadata problem; Exadata just does a better job of hiding it than any previous database platform.
When I use the term “code,” I’m referring to the processes and procedures that interact with the database. Calling the code “bad” doesn’t necessarily mean the code itself is bad, but that it’s being used in a manner that produces less than desirable results.
Over the past couple of months, I have witnessed situations where even Oracle Exadata has not solved all the problems it was projected to solve. These situations were caused by bad coding practices, practices that were probably put in place at a time when they were “good enough” to run things as they were, but that had neither the flexibility nor the scalability to sustain increased operations down the road.
One situation involved a process that performed a multi-million-row update of a single database table by executing one update statement, complete with an explicit commit, for every row to be updated. Combine that with the practice of using literal values instead of bind variables, and the shared pool was overwhelmed. Tricky adjustments kept things running while the bad code was examined and revised, but I wonder whether Oracle Exadata would even have been needed had these coding deficiencies been identified and resolved earlier.
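To make that concrete, here is a hypothetical sketch of the pattern (the table and column names are mine, not the customer’s) next to the single set-based statement that should replace it:

-- The anti-pattern (sketch): dynamic SQL built from literals, one
-- statement and one commit per row. Every distinct literal forces a
-- hard parse, which is what floods the shared pool.
BEGIN
  FOR r IN (SELECT order_id FROM orders_to_ship) LOOP
    EXECUTE IMMEDIATE
      'UPDATE orders SET status = ''SHIPPED'' WHERE order_id = ' || r.order_id;
    COMMIT;  -- a commit per row also multiplies log file sync waits
  END LOOP;
END;
/

-- The set-based alternative: one parse, one execution, one commit.
UPDATE orders
   SET status = 'SHIPPED'
 WHERE order_id IN (SELECT order_id FROM orders_to_ship);
COMMIT;

Even if the loop had to stay for some reason, binding the value ('... WHERE order_id = :1' with USING r.order_id) would at least have spared the shared pool.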
Another situation involved selecting rows from one database table, using those values to search a second table, inserting the data into the second table if it didn’t already exist there, committing the change, and then moving on to the next row. Needless to say, this row-at-a-time practice will soon overwhelm the Exadata platform they are currently on, just as it overwhelmed their previous platform.
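Set-based SQL collapses that entire loop into one statement. A minimal sketch, again with assumed table and column names, using MERGE:

-- One MERGE replaces the whole select/probe/insert/commit-per-row loop:
-- rows missing from target_tab are inserted in a single pass.
MERGE INTO target_tab t
USING (SELECT src_id, src_val FROM source_tab) s
   ON (t.src_id = s.src_id)
 WHEN NOT MATCHED THEN
      INSERT (t.src_id, t.src_val)
      VALUES (s.src_id, s.src_val);
COMMIT;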
Why do these practices survive one or more platform migrations? I doubt the main cause is simple ignorance that anything is wrong with the code, although that may be a contributing factor. Part of it could be blamed on disagreements between developers and DBAs, where each side accuses the other of either causing the problem or failing to tune the database. What I believe to be the main problem is forward inertia: the momentum of new work is so great that reviewing what has already been done is not considered an optimal use of resources. Another aspect is the “fix it fast” mentality, where the effort is too narrow in scope.
These factors, along with rapidly evolving hardware, innovation, and short-term leases, are what lead organizations to consider platform migrations every few years. Migrating existing production environments to new platforms is a pricey and time-consuming exercise, though.
I’m not saying that companies can avoid migrating to new platforms as they grow, nor am I recommending contracting outside performance expertise at every opportunity. Simply put, I’m advocating a more intelligent use of resources, one that may make “hurry up and migrate” exercises unnecessary, so that migration is not a response to systems that “suddenly” cannot cope with the workload. Proactive use of resources is far less costly than reactive use.
One method that can help is to send code developers to database training. A basic understanding of the database environment where the code will execute pays off, ultimately saving the company thousands. The developers gain a better understanding of how things happen in the database and learn about features that can make processing more efficient.
Another method is to have a DBA work with the developers from the beginning of the development process, so that database knowledge is leveraged at the start of the code lifecycle. The best example of developer/DBA harmony I have seen was at a company where one DBA was assigned to support each enterprise application. That support started at the development level and continued all the way through testing, QA, and finally production. The DBA became well versed in the application’s code, and the developers gained the benefit of having the DBA bring database features into the development process that they were otherwise unaware of. Problems were minimal in these environments and could be identified quickly. Code deployments and upgrades went more smoothly as well.
In the absence of these methods, a periodic code review or performance review can bring bad coding practices to light before the term ‘migration’ starts getting tossed around. In this case, contracting outside expertise could be of benefit.
There is an old saying in Information Technology that goes “Garbage in, garbage out.” This was true then and is true now, no matter how shiny, new, and powerful the garbage truck is.
Wednesday, June 20, 2012
Afraid to Fail or Afraid to Try?
This coming weekend I will have to travel out of town to perform a database export from one database and import it into another. That's right - I will have to travel to do something that not only can be done remotely, but can probably be done by anyone with a couple of hours' worth of prep time. Don't get me wrong; I am not saying that I am so far above this work that the situation is insulting. I am merely pointing out the time and money that could be saved by not utilizing an outside resource. I do my part to keep costs down by not dining in lavish places and staying away from the hotel room's minibar.
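For the curious, the job itself is a routine Data Pump exercise, something like the following, where the schema name, connect strings, and file names are placeholders of my own:

# On the source host: export the schema (prompts for the password).
expdp system@srcdb schemas=APP_OWNER directory=DATA_PUMP_DIR dumpfile=app_owner.dmp logfile=exp_app_owner.log

# Copy app_owner.dmp to the target host's DATA_PUMP_DIR, then import:
impdp system@tgtdb schemas=APP_OWNER directory=DATA_PUMP_DIR dumpfile=app_owner.dmp logfile=imp_app_owner.log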
Why, you may ask, is the party I am doing this for not doing it themselves? As the title of this post suggests, there could be a couple of reasons. One is that they want a professional resource on hand in case a problem arises. I think this is the most plausible explanation. The other explanation is the real basis of this post.
Afraid to Fail or Afraid to Try
In a different situation, the 'afraid to fail or try' mentality prevailed because of the critical uptime required of the database. The point was driven home when a resource was fired for inadvertently causing unplanned downtime. This was a bad (knee-jerk) reaction from management, as it sent a message to the remaining resources that any deviation from the status quo was dangerous. Fear ruled the environment, and there was no desire to evaluate new features and no drive to improve existing processes. When this happens, innovators are given the choice to give up or get out, and complacency sets in (see my 'Complacency Kills' post - http://ora-vent.blogspot.com/2010/04/complacency-kills.html). Stay clear of these environments unless you are a happy button pusher who never asks, 'Why are things being done this way?'
Saturday, July 30, 2011
Get involved with the development process
I was recently involved with a production database performance problem that was narrowed down to some code implemented just days before. New records placed into a certain table went through a process where a five-digit alphanumeric 'unique' key was generated, then the table was searched to see if that key already existed. If it did, a new 'unique' key was generated and searched for again. This happened 800-2000 times for each record - not surprising, since a five-character alphanumeric key space offers only 36^5 (about 60 million) possible values, so collisions become common as the table grows. With about 200 users in the system, you can see how quickly this ridiculous process became unmanageable. It is so illogical that even non-DBAs shook their heads when it was described to them.
No DBA, no matter how unseasoned, would allow code like this into a production environment without questioning it. How about using a database sequence to generate truly unique keys? Or the DBMS_RANDOM package? It became apparent that no DBA was involved in the development process. The lesson here is that if an application sits on top of a database, a DBA resource needs to be involved in the development process from the beginning. If you are a DBA in this situation, get involved - or learn to spend most of your time cleaning up messes like this.
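For illustration, a minimal sketch of the sequence approach (the object names are invented):

-- A sequence hands out values that are unique by construction, so the
-- generate/search/retry loop disappears entirely.
CREATE SEQUENCE record_key_seq START WITH 1 CACHE 1000;

INSERT INTO keyed_records (record_key, payload)
VALUES (record_key_seq.NEXTVAL, :payload);

If a five-character alphanumeric format were truly a requirement, the sequence value could be encoded into base 36 after the fact; uniqueness stays guaranteed without a single probe of the table.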
Tuesday, April 20, 2010
Complacency Kills?
I have a military background, and the first thing you are taught in counter-terrorism training is that 'complacency kills'. Complacency means you start feeling secure and stop watching for potential danger. This may sound a bit extreme, but I ran into complacency in the DBA world just last week. I was interviewing a candidate for a full-time Sr. DBA position. On paper, this candidate looked good: they had worked in both large and small shops doing a variety of tasks. During the interview, though, they could not recall how to perform the most basic tasks or speak in any detail about processes and procedures. Another glance at the resume showed they had worked in a large shop since 2007 as the senior member of a team of four. It became apparent that this person had gone from a well-rounded DBA to a 'button pusher', all because of complacency. They were obviously more concerned with collecting a paycheck than with keeping their skills current. In the world of IT, that is a death sentence. Once you lose your interest in the technology, you may as well look to another career field that does interest you.
Friday, December 11, 2009
Saving a buck?
I was recently approached by some project managers who were facing the implementation of a new database to support a third-party application. To save the cost of new Oracle licensing fees, they wanted to add the new database into an existing database system. It seemed like a good idea at the time, and as a consultant I could have simply agreed, since I will not be here in the long run to deal with the side effects - but would it be the right thing to do? Probably not. The right thing was to recommend against coupling an existing production system with one whose performance characteristics are not known. Another alternative would be to run the new database for a few months without a license, just to determine whether its performance characteristics would let it cohabitate comfortably with an existing database, but I could not recommend that either.
Monday, September 21, 2009
Bragging rights?
I was recently sent to help a company whose onsite DBA wanted to point out, upon meeting me, that they had "18 years of experience". This person then proceeded to "impress" me by failing to build a two-node RAC cluster when given two weeks and every resource, including root access on the servers. After several more such "impressions", this person resigned and has not been heard from since. I should have kept in touch, because wherever this person goes, there is a contracting opportunity. Bottom line: if you are too concerned with talking the talk, you probably can't walk the walk.
Sunday, September 20, 2009
Welcome to Ora-vent
Sometimes those of us who work in the Oracle realm - whether we are DBAs, developers, or architects - need to get stuff off our chests, stuff that only fellow Oracle folks would understand. That's what this blog is for. All I ask is that you keep it clean. Other than that, have at it. I do reserve the right to decide what does and does not get posted, however. Thanks.