The most important tip is just common sense: get it in writing. The part that may not be so obvious is that for the contract to be useful in a dispute (the time you need it most), it must spell out what you expect your system to do, under what conditions, and in what time frame.
Obviously, if you're buying a single off-the-shelf computer or a box of prepackaged software, you probably cannot get anything in writing. The best you can hope for is a liberal return policy or test period. What I'm addressing here are primarily major purchases of hardware for networks, mainframes, client-server applications, or other large configurations, and custom-developed or modified software.
The problem with computer-related disputes is that they're rarely as simple as "The computer won't turn on" or "The accounting software can't add." The answers in those cases are easy and obvious.
Usually, the problems are more like, "The system is slower than we expected," "The network crashes 'too often,'" and "The payroll takes longer than expected to run."
Imagine trying to resolve a dispute in the face of general warranty language like, "We warrant that the system will be free from defects for one year."
What's a defect? How often should a network crash? Is one system reboot a week reasonable? I'd say yes for Windows XP or Windows Small Business Server, but no for the life-support monitoring system in an intensive care unit in a major hospital. Oh, and how long should it take to run that payroll? Did you ever tell your vendor?
The problem is that computers and everything about them are not perfect. Perfect software simply doesn't exist. Sometimes computers do inexplicable things for no apparent reason. That's just the way it is. You must expect the unexpected. Drawing the line between a reasonable amount of glitching and "broken" or "not functioning to specifications" isn't easy.
When dealing with a mainframe or a server, you have every reason to expect your computing to be relatively robust, error free, crash resistant, and predictable. These are some of the reasons for still using a mainframe or a server when a personal computer can do more than a mainframe could a few years ago. Nevertheless, "relatively" is the key word.
Although not perfect, everything about the history and tradition of mainframes is about computing for mission-critical applications. Redundancy, error checking, and slow, steady development lead to an operating environment that's stable and time tested.
Your reasonable expectations rapidly diminish when you enter the world of the hottest, latest operating system running on the newest processor available.
The simple fact is that Windows XP and the software running under it crash. Yes, they crash less than under the older Windows 95, but they still crash. Even the more robust and "bulletproof" (computerese for "doesn't crash") Windows Server crashes. They lie when they tell you it doesn't. It just crashes less than Windows XP, which crashes less than the older Windows 95, which crashes MORE than its older cousin, DOS.
The point is that performance expectations must vary depending on, among other things, the type of hardware, the operating system, and what you're trying to accomplish. We all know when a television is failing to perform reasonably well, but it's just not that easy with computer systems.
The key is to be very clear about what you expect from your system. You need to create criteria for testing and accepting your hardware and software.
While specifying tests for everything that your hardware and software can do is impossible, you can create typical test cases as the basis for acceptance criteria. For example, if you expect your custom-developed accounting software to print checks for your 10,000-employee payroll between the hours of 5:00 p.m. and 8:00 a.m. using your current mainframe and printers, then specify that as a test. Or if you're a hospital and, for security purposes, you want your nurses to be able to see photographs of physicians on their screens on demand, say so, but be specific.
The more specific you are the better. If a photo takes an unreasonably long three minutes to appear at noon on a Tuesday, it won't make you feel better that it took only five seconds at 4:00 a.m. on a Sunday, when the network server is often idle.
I reiterate the point. Your acceptance testing criteria should be as specific as possible. Where applicable, they should specify things like hardware in detail. For example, "We will run software tests on a desktop computer with a Pentium 4 processor, with 512 megabytes of RAM, running Windows XP Professional, Version 5.1 with Service Pack 2, etc."
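For readers whose Information Systems people automate such checks, a test case like the payroll example can even be written as a short script that measures the run and fails loudly if the contracted criteria aren't met. The sketch below is purely illustrative: `print_payroll_checks` is a hypothetical stand-in for the vendor's real payroll run, and the 15-hour limit is simply the 5:00 p.m. to 8:00 a.m. window from the example above.

```python
import time

# Acceptance criterion from the contract (hypothetical numbers): the full
# check-printing run must fit inside the 5:00 p.m.-8:00 a.m. window,
# i.e. finish in at most 15 hours.
MAX_RUN_SECONDS = 15 * 60 * 60

def print_payroll_checks(employee_count):
    """Hypothetical stand-in for the vendor's real payroll run.

    The real system would print one check per employee; here we just
    return the count so the sketch is runnable.
    """
    return employee_count

def acceptance_test(employee_count=10_000):
    """Run the payroll and verify both contracted criteria."""
    start = time.monotonic()
    checks_printed = print_payroll_checks(employee_count)
    elapsed = time.monotonic() - start

    # Criterion 1: every employee's check was printed.
    assert checks_printed == employee_count, "not every check was printed"
    # Criterion 2: the run finished within the contracted window.
    assert elapsed <= MAX_RUN_SECONDS, f"run took {elapsed:.0f}s, over the limit"
    return elapsed

elapsed = acceptance_test()
print(f"Payroll run passed acceptance test in {elapsed:.2f} seconds")
```

The point of writing it down this way is not the code itself but the discipline it forces: a pass/fail threshold, a measured quantity, and a defined workload, none of which a phrase like "free from defects" provides.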
Yes, this is highly technical, and putting it together isn't easy. It will require a team effort among system users, your Information Systems people, an attorney with experience in computer law, a hardware or software consultant other than the company with whom you are contracting, and, finally, your vendor.
Vendors, don't despair. Disputes are as bad for you as for your customers. They take up your time, cause you to incur attorneys' fees, and are bad for your reputation. Clear communication of expectations is equally important to you.
In my experience, most disputes over computer technology are good-faith disputes. What I mean by that is that both sides legitimately believe that they are right. Neither side is lying; they are simply not on the same page.
Vendors typically say things like, "But they never told me that there would be 30 other users on the network. I thought that the maximum number of simultaneous users would be 15." Of course, the customer's version is simple. "They knew" or, better yet, "They never asked."
Disputes and lawsuits can be avoided if the customer and vendor take the time to clearly delineate what they expect of the system and what it can do. Time, money, and effort invested up front in communicating clearly, and then writing it into the contract, are well spent if they avoid an expensive lawsuit down the road.