Probably the most difficult contract drafting task a technology lawyer faces is for custom software development. The mission is to create some certainty and objective standards for a piece of software that doesn't exist yet. Here are some tips from the trenches.
No matter which side I represent in these negotiations - and I've worked with both software buyers and developers - I've noticed the same three basic issues become points of contention every time: design specifications, flexible pricing, and performance standards. In my experience, it invariably takes some compromise before the parties can come to terms, because there are no perfect answers to these issues. Nor does it matter what kind of software development you're looking at: an application running locally on a PC and a software-as-a-service (SaaS) application in the cloud raise the same issues.
Negotiating Design Specifications
Since almost everybody uses word processors, I'm going to use them for my example. Imagine that you are writing a contract for the custom design of a word processing program like Microsoft Word. You might start with a general functional description in non-technical terms:
"The program to be developed shall function as a full-featured word processing program."
Of course, if I represent the developer, I want "full-featured" out. I would argue that it's a meaningless term that may imply open-ended obligations.
If I represent the buyer, I'd probably concede this point, but at least I tried. I also learned a bit about how well represented the developer is.
Next, you need to develop detailed design specifications.
Think of all the zillions of capabilities that word processing programs have. Think of a program you know: how many pages could you fill while trying to describe what it does, and how? You'd need to describe menus, icons, visual appearance, functionality, user ability to customize, method of customization, help text, performance standards, and on and on.
Now imagine how difficult this task becomes if you're doing detailed design specs with someone for a program that only exists in your head.
This can be a daunting task, but taking time at the beginning to work out detailed design specs is equally important to both sides of the development deal. It's the only way to be sure everyone's on the same page, that there's been a meeting of the minds as to what the software should do, and how.
And that's also the only way both sides will ever be able to walk away from a development project looking forward to doing another one together.
The Parties Need to be Flexible
Most software development is a lengthy process. This creates opportunities for frequent and regular consultation while the work progresses. As unexpected programming problems and issues inevitably arise, the contract should create a mechanism for revisions to the design specs and the pricing as the scope of the work changes.
If you are thinking this isn't really necessary in your case, consider this: with common, off-the-shelf software like Photoshop, don't new versions usually arrive later than originally projected, with feature sets that look quite different from those described in the early speculative blog posts?
My point is that you have no reasonable expectation of doing better than Adobe does with its nearly limitless resources and in-house pool of talent. It's a simple concept: expect the unexpected with software development.
Expect the final product to look somewhat different from what you originally envisioned.
Expect revisions in the original design specification. It's all part of the process. And that means the parties must be flexible and write flexibility into the contract.
This might seem like a pro-developer position, but it isn't. If the developer has unexpected difficulty (I'm assuming honesty and integrity here) with a certain aspect of a program, the buyer hurts himself by insisting on a fixed price and feature set.
The developer is in business for profit. If the buyer insists on sticking it to the developer with "it's a fixed-price contract," something else will undoubtedly give, human nature being what it is. Maybe corners will be cut somewhere else in the development. The point is inflexibility in an inherently difficult-to-quantify area like software development can only lead to bad things.
A suggested and often successful alternative to a fixed-price contract is a time-and-materials arrangement, where the price is based on how much time and money the developer actually spends on the project. If the buyer is on the ball, he'll object that this is too open-ended. The compromise could be to cap the compensation, with cost overruns beyond the cap shared according to some formula (50-50? 60-40?).
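To make the cap-and-share arithmetic concrete, here is a minimal sketch. The cap amount and the 60/40 split below are purely illustrative assumptions, not recommended terms:

```python
def buyer_payment(actual_cost, cap, buyer_share=0.6):
    """What the buyer owes under a capped time-and-materials deal.

    Up to the cap, the buyer pays actual cost. Any overrun beyond
    the cap is shared - here 60% buyer, 40% developer. Both the cap
    and the split are hypothetical figures for illustration only.
    """
    if actual_cost <= cap:
        return actual_cost
    overrun = actual_cost - cap
    return cap + buyer_share * overrun

# With a $100,000 cap and $130,000 of actual cost, the $30,000
# overrun is shared: the buyer pays 100,000 + 0.6 * 30,000 = 118,000.
print(buyer_payment(130_000, 100_000))
# Under the cap, the buyer simply pays actual cost.
print(buyer_payment(80_000, 100_000))
```

The point of writing the formula down this precisely in the contract is that neither side has to argue later about what "sharing the overrun" means.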
Negotiating Performance Standards

The big day has arrived. The developer has finally delivered your new software to you for testing. You start using it and bingo, it works. But very slowly. You call the developer, and he says, "....but it works."
Yes, it does everything required by the design specs, but what you also need in your agreement are performance standards. This is where you define how fast the hypothetical program must run - how fast it should do what it's supposed to do.
This is another difficult part of computer contracting, but the buyer and developer must put performance standards into their agreement. As with design specifications, performance standards may need to be refined as the new software is created - again, both sides need to be flexible.
However, in this part of the contract, performance standards should be specifically tailored to the buyer's needs. Things to consider are acceptable downtime, response times, and benchmarks.
With "acceptable downtime," you quantify reliability - what's considered a reasonable amount of time for the software to not work. For example, you might require that the software be up and running 98 percent of the time, or that it require a reboot no more often than once a week, or once a month.
This will vary depending on the mission-critical nature of the software. Software running a hospital's life support system is likely more mission critical than software running a computer game.
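It's worth doing the arithmetic before agreeing to a percentage, because uptime figures can sound stricter than they are. A quick sketch, using the 98 percent figure from the example above and an assumed 30-day month:

```python
HOURS_PER_MONTH = 30 * 24  # assume a 30-day month = 720 hours

def allowed_downtime_hours(uptime_pct, period_hours=HOURS_PER_MONTH):
    """Hours of permitted downtime per period for a given uptime promise."""
    return period_hours * (1 - uptime_pct / 100)

# 98% uptime still permits 2% of a 720-hour month to be down.
print(round(allowed_downtime_hours(98), 1))        # → 14.4 (hours)
# By contrast, 99.9% uptime permits only about 43 minutes a month.
print(round(allowed_downtime_hours(99.9) * 60, 1)) # → 43.2 (minutes)
```

Over 14 hours of monthly downtime may be perfectly fine for a game and completely unacceptable for that hospital system, which is exactly why the number belongs in the contract rather than in everyone's assumptions.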
With "response times," you create scenarios for the software and require that it do that scenario in an agreed length of time. For example, your scenario might measure the time between the moment that you hit the "OK" button until the system has completed a specified operation.
With "benchmarks," or tests that set the standards, the parties need to agree on the operating environment for the tests. For example, the parties might agree that the standards apply to software running on Intel Core 2 Duo with 2 GB of RAM, etc. They might further specify the number of simultaneous users and other relevant criteria that can affect speed and performance.
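A response-time scenario like the one above can even be written as a repeatable test, so acceptance isn't a matter of opinion. This is a hypothetical sketch - the operation, the 2.0-second threshold, and the mail-merge scenario are stand-ins for whatever the parties actually put in the agreement:

```python
import time

def measure_response(operation, max_seconds):
    """Time one contract scenario, from the 'OK' click to completion.

    Returns (elapsed_seconds, passed). The callable and threshold are
    placeholders for the scenario the parties define in the contract.
    """
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= max_seconds

# Hypothetical scenario: a mail merge of 500 records must finish
# within 2.0 seconds on the agreed benchmark hardware.
def mail_merge_500():
    time.sleep(0.01)  # placeholder for the real operation

elapsed, passed = measure_response(mail_merge_500, max_seconds=2.0)
print(passed)  # → True
```

Running the same scripted scenario on the agreed benchmark machine at acceptance time turns "it's too slow" into a yes-or-no question both sides can answer.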
We've only touched on three basic points of contention. There are many other significant ones, including warranties (the obligation to fix problems), limitations of liability (if you're down for two days because of bad software, does the developer pay for your lost profits?), intellectual property (who owns the copyright?) and other matters.