We are expecting at least a 75% increase in our internet traffic, database transactions, and network and database activity in general, so our entire company is working hard on getting ready to manage this incoming wave.
Regrettably, life is not always good, and the core of the business is set up around a system using a typical client/server architecture where each workstation creates its own connection to the database and, to make it worse, establishes server-side cursors to fetch all the data it needs.
On the other hand, the website is driven by old-school ASP pages using ad hoc queries or direct stored procedure calls against the already beat-up database. Finally, it is important to mention that all of this runs over ODBC connections.
End result? Chaos. In high-traffic moments the entire network slows down and all the applications begin a slow, silent death march toward a total IT crisis. There are no caching techniques, no business layer, basically no middle tier... the exact opposite of everything I'm used to dealing with, and that is why it is my mission to turn this around and take over the world. (moooooahahaha)
Well, the mission is simple: we can't change much in a short time, so we need to make this system work as well as possible. My first task is to provide a list of recommendations for the deployment of our new database servers running MSSQL 2005.
Yup, we are migrating from MSSQL 2000 to MSSQL 2005. People will think it is a simple, straightforward import/export, but special considerations need to be made when dealing with database collations, which have changed completely from one version to the other, and with the famous "compatibility modes". Keeping an imported database in compatibility mode 80 (MSSQL 2000) may save you headaches, but it will offer you little in terms of all the good new toys and performance tricks you can apply in compatibility mode 90 (MSSQL 2005).
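A quick sketch of the checks I'm talking about ('MyDatabase' is obviously a placeholder name): sys.databases tells you the current collation and compatibility level of a migrated database, and sp_dbcmptlevel flips it to 90 once you have verified the application still behaves:

```sql
-- Check the collation and compatibility level of the imported database
SELECT name, collation_name, compatibility_level
FROM sys.databases
WHERE name = 'MyDatabase';

-- Once the application is verified, move it up to MSSQL 2005 behavior
EXEC sp_dbcmptlevel 'MyDatabase', 90;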
Below is a summary of my initial recommendations, aimed at the physical database storage design aspects you should follow when dealing with a situation like the one described above:
- Server-side cursors imply lots of memory usage on the database server. So, increase memory on the database server; the more the better. Make sure SQL Server is actually using it by checking the configured memory limit, as shown in the first sketch after this list. (Pay special attention to the 3 GB limit on the non-Enterprise versions of MSSQL.)
- Multiple-core machine? Well, you will not use those cores if you don't split that big nasty .mdf file into multiple .ndf files. What is the recipe? The number of data files within a single filegroup should equal the number of CPU cores (see the second sketch after this list).
- For optimized I/O parallelism, use a 64 KB or 256 KB stripe size when defining the RAID configuration.
- Use manual file growth database options; automatic growth is only for development (ja! you didn't know that, did ya?). A sketch of pre-sizing the files follows the list.
- Increase the size of "tempdb" and monitor space usage, adjusting accordingly. The recommended level of available space is 20%. All your temp tables and indexes are created there, so keep that guy with enough space. If you are using the default size, that is only 8 MB, and that is basically 8 floppies, so be nice, put some more space there (last sketch after the list).
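First, the memory check. A minimal sketch using sp_configure; the 6144 MB value is just an illustrative number, tune it to your actual RAM and leave headroom for the OS:

```sql
-- 'max server memory' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Inspect the current setting (compare config_value vs. run_value)
EXEC sp_configure 'max server memory (MB)';

-- Raise the ceiling; 6144 MB is an example value, not a recommendation
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;
```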
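Second, splitting the data across multiple files. Something along these lines, assuming a 4-core box; 'MyDatabase', the logical file names, paths, and sizes are all placeholders:

```sql
-- The original .mdf already lives in PRIMARY; add three .ndf files
-- so the filegroup ends up with one data file per CPU core
ALTER DATABASE MyDatabase
ADD FILE
    (NAME = MyDatabase_Data2, FILENAME = 'E:\SQLData\MyDatabase_2.ndf', SIZE = 2048MB),
    (NAME = MyDatabase_Data3, FILENAME = 'E:\SQLData\MyDatabase_3.ndf', SIZE = 2048MB),
    (NAME = MyDatabase_Data4, FILENAME = 'E:\SQLData\MyDatabase_4.ndf', SIZE = 2048MB)
TO FILEGROUP [PRIMARY];
```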
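Third, manual file growth. The idea is to pre-size the files generously during a maintenance window so they never have to grow at peak time; again, names and sizes are hypothetical. Note that MODIFY FILE only accepts one property change per statement:

```sql
-- Pre-size the data file so it never has to autogrow under load
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, SIZE = 20480MB);

-- Keep a fixed increment as an emergency safety net only...
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 512MB);

-- ...or switch autogrowth off entirely once you manage growth by hand
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data2, FILEGROWTH = 0);
```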
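And finally, tempdb. The default logical file names are tempdev and templog; the sizes below are just a starting point, adjust them to what your monitoring tells you:

```sql
-- Set a baseline well above the 8 MB default; tempdb is re-created
-- at this configured size every time the instance restarts
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 1024MB);
```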