Not only are there no silver bullets now in view, the very nature of software makes it unlikely that there will be any — no inventions that will do for software productivity, reliability, and simplicity what electronics, transistors, and large-scale integration did for computer hardware… I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation… If this is true, building software will always be hard. There is inherently no silver bullet.
Frederick P. Brooks (No Silver Bullet, 1987)

Thirty years ago, the computer scientist Frederick Brooks wrote a remarkable little book called The Mythical Man-Month.  The express intent of that book was to demonstrate how, and why, adding personnel to an already-late software project will only result in delaying the project further.  In the course of explaining the why part, Brooks got to the heart of software engineering: whatever metaphors one chooses to describe what software engineers do, ultimately they all come down to specification.  Digital computers are good at — maddeningly good at — doing exactly what they’ve been told to do, irrespective of whether what they were told to do was the intention of those doing the telling.  Thus, the fundamental problem of creating software is managing several concentric spheres of specific instructions, each one tailored to the level of abstraction it addresses.  Think of the layers of an onion.  At the outermost layer, you have high-level system requirements like “it makes money”.  Peel away the layers and you find, say, “pleasing user interface”, which in turn yields to “AJAX-based, search-driven”, “dashlet components interact via mediator pattern”, and so on, down to the itty-bitty details such as what regexps will be in place on what input fields to prevent SQL injection attacks.  Whatever the actual content, managing all these levels of requirements necessitates writing specs.  In particular, the most important of these specs (sometimes also called a “Master Design” or “Master Requirements Document”) is the functional spec, the one that describes how the whole system works, in detail, from the user’s perspective.  We’re talking screens, menus, input fields, messages and so on.

Sure, it’s possible to produce software without a functional spec.  It’s just in most cases a bad idea.  In fact, there’s good evidence that failing to produce a proper functional specification is the number one risk in software development.

Moreover, it’s not all stick — there is a carrot here as well, namely that developers working from quality specs are far more productive, and produce generally higher-quality code, than those who don’t.  Why?  Because what a functional specification does is force your organization to design the software.  Carefully designed software tends to show it: in the user experience, in its performance and robustness, and in the significantly reduced maintenance and rework required over the life of the product.

So let’s assume for the sake of argument that good functional specs, spelling out all the system’s requirements, are just the cat’s meow.  Why are good functional specs (or indeed, specs of any quality) so rare?  Because, as Brooks pointed out in his essay No Silver Bullet (quoted above), specification is hard.  How hard?  I have a little story that may shed some light:

Back (way back) in college I was a math tutor.  A prerequisite for all tutors at this particular college was a joyous seminar on the “problems of learning and instruction” — a brief glimpse into just how maddening the enterprise of explaining things to people can be.  Most of it was tedious and painfully obvious.  It was no surprise to the assembled tutors-to-be that some people just don’t get math.  Of course — why else would someone seek out tutoring?

I remember nursing a genuine resentment at being forced to waste two weekends listening to lecturers drone on and on about ‘cognitive gaps’ and ‘leaps of inference’ and the like when it was eminently clear that none of this would aid us, the tutors, or our charges.  Really, if years of all this pointy-headed theorizing hadn’t made, oh, say, the professors of math courses any more effective as teachers, what chance did four days of it have for us?

Then came a little ‘interactive’ session entitled ‘How to Make a Peanut Butter and Jelly Sandwich’.  The premise was simple.  The audience was to verbally instruct one of the seminar’s facilitators, step-by-step, on the process of crafting a fine, and presumably edible, PB&J.  He had everything he needed arranged on a table: peanut butter, jelly, a loaf of wonder bread, and a knife.  Oh, dear, thought I.  Now we get the sad spectacle of a man pretending to be as stupid as a bag of hammers in order to frustrate our attempt to walk him through making lunch.

And sure enough, the facilitator played it to the hilt.  What was surprising was that this little exercise was actually enjoyable; hilarious even, as it became clear that this guy was experienced.  He was good at what he did, and it made for fine comedy.

“Just start with two pieces of bread!”  was the first instruction that coalesced out of the din of tutors offering simultaneous advice.  The facilitator put on his best what-me-worry face and confidently gripped the bag of bread, held it aloft.  Then consternation crossed his visage, signaling that the next phase of his mission wasn’t clear.  Hmm… two pieces.  Well, I’ve got this collection of bread pieces… hmm… I need two of them, but they’re in the bag… hmm.

“Open the bag!”  Well, you lob a softball like that, he has to hit it out of the park.  A look of enlightenment came upon him as he vigorously shook the bag, but this, too, passed into confusion as the bag failed to yield its load of bread-like styrofoam.  Much hooting and hollering ensued.  Again, gripped by inspiration and our inchoate instructions, he settled on a tried-and-true bag-opening tactic: he tore it open, violently.  With purpose.  With conviction.  Bread flew from the bag.  Some landed in his vicinity and he seized it triumphantly, mangling it further in the process.

“Now put some peanut butter on the bread!”  At this point everyone was fully cognizant that this guy was going to misinterpret our instructions in some absurd manner; some of us just wanted to see exactly how absurd.  Much laughter as he placed the whole unopened jar of Skippy on top of the bread with a self-satisfied flourish.

It went on and on; full minutes later, after we’d successfully negotiated the opening of the peanut butter jar, we’d told him to get some peanut butter on the bread by way of first getting peanut butter on the knife and then dragging the knife across the bread.  See, we thought we’d out-dumbed him, but such was not the case as he viciously plunged the knife into the jar, punching out the bottom and sending a mess of broken glass and hydrogenated oils to the floor.

Well, I’ve bored you enough with the details.  Suffice it to say that it took a solid hour to finally get this guy to make a frickin’ sammich.  So where am I going with this, and what does it have to do with writing a spec?  Everything.

Because, as mentioned above, when you’re writing software, you’re really just sitting in a chair telling a box what to do, and that box is pretty dumb.  Essentially (and particularly in a business domain), you’re trying to instruct something a lot dumber than a man pretending to be stupid, and you’re trying to get it to do something a lot more complicated than making a PB&J.  Another way of putting it is that successfully developing software necessitates dealing with a level of specificity that makes most people ill, insane, or both.  Largely, this is a war of wills, and the decisive battle is usually fought in creating the functional spec.

Problems with Specifications

As developers, we’ve all had the experience of specifications coming down from on high only to find that they are woefully incomplete and/or vague.  That’s an old complaint and an old story.  But it’s also a safe bet that developers have all had the additional experience of running into what can charitably be called resistance when seeking clarification on requirements, and smoothing that dirt road is what a well-written functional spec is all about.

Many approaches over the years have been offered.  Agile/XP-type processes, with their emphasis on “user stories”, short cycles, and lots of end-user feedback, seem to work very well when they work, but, just like technologies, methodologies don’t exist in a vacuum.  You usually can’t simply combine a methodology with a business environment like you combine acids and bases in a laboratory.  Agile methods are great when everyone plays along, not so great when they don’t.  Same goes for any methodology, big-M or small.  The chief problem in any software development environment is cooperation — a people problem.

Developers are often well aware of their strange mental inclination — the capacity to decompose the real world into a series of maddeningly specific steps.  To a client’s business analyst, it’s a job well done, maybe even overkill, to specify:

“Requirement 357a: The system shall, upon encountering an incoming address, increase the ‘returned from post office counter’ in the data warehouse for existing addresses equal to the incoming address.”

Yup, the analyst thinks, that’s all there is to it.  Problem is, of course, the developer can’t compile that sentence into working code.  Now the developer is faced with the task of asking The Stupid Questions.  Incoming address?  Incoming from where?  Where do we find it?  How are two addresses reckoned to be “equal”?  So on and so forth.  The business analyst starts to get exasperated; he’s got, like, four tons of this crap to go through, and he’s not a mind reader either.  Now he’s got to go directly to the client and ask The Stupid Questions, because come to think of it he’s not exactly sure of how the client determines when two addresses are equal — OK, they’ve got the same street, number, city and zip, but this one address is missing a state/province code…  seems straightforward, they’re still equal, right?  ‘Cos if you have the zip you know the state… jeeze!  But that box, that stupid, stupid box, doesn’t know that, and now the analyst has to ask what the client wants, which makes him look like a moron, because he’s paid to figure this junk out, and he hasn’t done it, or so the client seems to suggest every time he walks into her office…  So he ignores the developer’s email and hopes they’ll just do something right for a change.  And the developer sends more email and things get testy because now the schedule is slipping because there’s still these unimplemented features because the developer doesn’t want to code them until the requirements are clear since if she does and the client doesn’t like it then that will generate a bug report on her code, and too many of those look bad come review time.  Now QA is getting testy, too (no pun intended) because how are they supposed to test unimplemented features?  Four weeks later, after the PM has called the VP to schedule a JAD session, it comes out that:

“Requirement 666a:  The system shall consider two addresses equal when, and only when, at least the following fields in the incoming data source (defined in subpart J of definitions document Foo) are Unicode (see addendum 6) character-by-character matches on a one-to-one basis… [long and winding road inserted here]…  Further as documented in the ‘null-field coalesceable’ specification, STATE_PROVINCE is not a required field for this process as the system shall normalize the city and state by the postal code, which is required…”

Welcome to the Dilbert zone.
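As an aside, requirement 666a is, at bottom, just code waiting to happen.  Here’s a minimal sketch in Python of the address-equality rule it pins down; the field names and the tiny zip-to-state table are invented for illustration, not taken from any real spec:

```python
# Hypothetical sketch of requirement 666a: two addresses are "equal" when
# their normalized fields match character for character.  Field names and
# the zip-to-state table are made up for illustration.

ZIP_TO_STATE = {"10001": "NY", "60601": "IL"}  # stand-in for a real postal lookup

REQUIRED_FIELDS = ("number", "street", "city", "state", "zip")

def normalize(addr: dict) -> dict:
    """Trim whitespace, uppercase, and coalesce a missing state from the zip."""
    norm = {k: (addr.get(k) or "").strip().upper() for k in REQUIRED_FIELDS}
    if not norm["state"] and norm["zip"] in ZIP_TO_STATE:
        norm["state"] = ZIP_TO_STATE[norm["zip"]]
    return norm

def addresses_equal(a: dict, b: dict) -> bool:
    """Character-by-character match on every normalized field."""
    na, nb = normalize(a), normalize(b)
    return all(na[f] == nb[f] for f in REQUIRED_FIELDS)

incoming = {"number": "350", "street": "5th Ave", "city": "New York",
            "state": None, "zip": "10001"}   # state/province missing
on_file  = {"number": "350", "street": "5TH AVE", "city": "NEW YORK",
            "state": "NY", "zip": "10001"}
```

Every one of The Stupid Questions above (incoming from where? equal how? what if the state is missing?) corresponds to a line of this sketch, which is precisely why the analyst’s one-sentence version couldn’t be compiled.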
(So sure, you say, we’re all familiar with this kind of frustration.  What do we do about it?  Well, I have an idea. Keep in mind that’s all it is. I’m not selling any snake oil. There’s no guarantee that this will work, no statistically significant findings from a controlled study to back it up.  But I think it’s worth trying: Arrange a meeting with your stakeholders, and get them to tell you how to make a peanut butter and jelly sandwich.)
Since we as developers are paid to systematize the world on behalf of other people, we have to do a better job of educating our clients on both the value and the pitfalls of what we do.  As long as the rain keeps falling, no one knows or cares about our chanting and dancing around with the chicken bones.  Come drought time, the mystery of our profession is our undoing.  (We’ve always been unfathomable pinheads, but in times of systemic failure we’re the unfathomable pinheads who failed.)  The single most useful way to manage and mitigate the risk of failure is to have a written, up-to-date, and authoritative spec.

Taking the Pain out of Specifications

So far, I’ve stipulated that (1) functional specifications are crucial to quality software and (2) these specifications are very hard to produce.  It may seem as if we are then at something of an impasse.  But not necessarily.  There’s a lot an organization can do to lessen the pain of, and maximize the utility of, producing specs.

  • Chiefly, don’t confuse functional specifications — the detailed description of a system’s features from the user’s perspective — with any other kind of specification, such as tech specs discussing, say, class hierarchies and the like.  If the two are combined, as they sadly often are, the result is a mess that no one will read, guaranteed.
  • Product Management — or whoever is ultimately responsible for the product — should own and write the spec.  Make no mistake about it:  coordinating the system requirements gathered from marketing, sales, and customer service and encoding them into a functional spec is a big job, and crucial to producing quality software.  The functional spec must have not only institutional coherence but also authority.
  • Functional specs are one of the chief artifacts of an engineering project, and hence are to be run through QA along with everything else.  This is crucial.  Not only will the QA department be able to generate their testing plans based on the functional spec, they can also run down a quality-control checklist on the spec itself, looking for common gotchas.  The best spec QA checklist I’ve ever seen (and one I heartily recommend) is provided by Steve McConnell (author of Code Complete).  Some of his criteria:
    • Are all the inputs to the system specified, including their source, accuracy, range of values, and frequency?
    • Are all the outputs from the system specified, including their destination, accuracy, range of values, frequency, and format?
    • Are all output formats specified for web pages, reports, and so on?
    • Are all the tasks the user needs to perform specified?
    • Is the data used in each task and the data resulting from each task specified?
    • Is the expected response time, from the user’s point of view, specified for all necessary operations?
    • Are other timing considerations specified, such as processing time, data-transfer rate, and system throughput?
    • Is the level of security specified?
    • Are definitions of success included? Of failure?
    • Are the requirements written in the user’s language? Do the users think so?
    • Does each requirement avoid conflicts with other requirements?
    • Do the requirements avoid specifying the look and feel?
    • Are the requirements at a fairly consistent level of detail? Should any requirement be specified in more detail? Should any requirement be specified in less detail?
    • Are the requirements clear enough to be turned over to an independent group for construction and still be understood?
    • Is each item relevant to the problem and its solution? Can each item be traced to its origin in the problem environment?
    • Is each requirement testable? Will it be possible for independent testing to determine whether each requirement has been satisfied?

McConnell’s list gets pretty big, but what’s nice about it is that you don’t have to take it all at once — or indeed at all.  Depending on a project’s scope, urgency, and importance, the level of rigor for the specification can be tailored up and down.  Which leads us to:

The components of a spec

As previously mentioned, since there’s a good deal of variation in the scope, urgency, and importance of projects, there’s a corresponding variation in the appropriate amount of work to put into a functional spec.  Determining exactly how much is as much a matter of art as it is of experience, but there are some guidelines.

At a minimum, every functional spec should have two main components.  First, there should be adequate coverage of the three high-level questions — what is being asked for, who is asking for it, and why they need it.  This coverage could be (and often is) simply a paragraph or two.  Or it could be several pages.  In any case, the spec should contain it.  It could be as basic as:

“The VP of Sales has requested a new report that shows monthly sales over the past fiscal year, aggregated into domestic sales and Russian (only Russian, no other overseas sales are to be included) sales groups.  They are asking for this since Russian sales have been highly volatile and they’re looking for domestic correlations.”

The second must-have area for a functional spec is a collection of hard-boiled, falsifiable requirement statements.  If it’s important that users be able to use hyphens and spaces in their logon names, it’s a requirement.  And if it’s a requirement, the spec should say so, and should say so in a way that makes it easy for QA to test.  Specifically, QA should be able to give a simple pass or fail to every requirement with their test suites.  In the above example, the spec might state (among other things):
Logon name must:

  • Start with a letter character, upper or lowercase
  • Consist of nothing other than alpha-numeric characters, underscores, spaces, or hyphens
  • Be a minimum of six characters
  • Be a maximum of sixty-four characters
    Something to watch for:
    Note, however, what the functional spec does not say.  While it follows that your back-end DAOs and database should fully support hyphens and spaces in logon names, that’s not the concern of the writer of the functional spec.  To them, it shouldn’t matter if you do it with some contraption built of beer cans and string, as long as you do it.
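To make the pass-or-fail point concrete: those four bullets can be read as a single regular expression.  The sketch below (Python) is one possible ASCII-only reading of the rules, hypothetical rather than normative; a spec allowing full Unicode alphanumerics would widen the character classes:

```python
import re

# One possible ASCII-only reading of the four logon-name rules above
# (a sketch, not the spec itself): a letter first, then 5-63 more
# characters drawn from letters, digits, underscores, spaces, and
# hyphens, for 6-64 characters in total.
LOGON_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_ -]{5,63}$")

def logon_name_ok(name: str) -> bool:
    """Pass or fail: exactly the verdict QA needs to render."""
    return LOGON_NAME_RE.fullmatch(name) is not None
```

QA can then feed a table of candidate names through `logon_name_ok` and report a simple pass or fail against each bullet.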

Keep in mind the above examples would serve for only the most bare-bones type of documentation.  We’re talking tiny little projects here.  For substantial projects, those taking two weeks or more, say, more comprehensive documentation is required.  In such a spec, it’s good to have more depth.  Rather than go on about it, though, I’ll show you what I mean.  I’ve made a sample spec with all of the parts I’ve found contribute to a useful document.  This document specifies a highly fictional web-based test-case tracker such as would be used by an internal QA department.  I’ve included all the major sections and will explain the benefits of each.

Testy-Test Test Case Database Sample Functional Specification

A good place to start is with an overview, just a simple-as-you-can-make-it description of what the system does, and for whom, like:

TTTCD Overview

Testy-Test Test Case Database is a system that allows QA personnel to create, track, manage, and generate reports on test cases and particular runs of test case suites.

The value of this is that it allows readers a quick opportunity to see where the document is going, and whether or not they even need to be reading it.  The names of projects alone often aren’t enough to describe what they are.

It’s usually a good idea to include a disclaimer section.  This is boilerplate, and, frankly, pure CYA:


This specification is by no means complete.  All of the wording may need to be revised several times before final acceptance.  Any graphics or screenshots shown here are merely to illustrate the ideas behind the functionality.  The actual look and feel will be developed over time with the input of graphics designers and user feedback.

This spec does not discuss data structures, algorithms, schema, or data models used by the system, which will be discussed elsewhere. It discusses only what the user experiences when they interact with the Testy-Test Test Case Database.

Of course, when the project is feature-complete, you can remove this section.

Also highly useful is a “non-goal” section.  The idea here is not to list everything that the project won’t accomplish, but to specify what you absolutely don’t want the project to encompass:


The Testy-Test Test Case Database will not:

  • Generate or modify Jira issues, or tie in to any existing project management software in an administrative capacity.
  • Actually invoke runs of regression (automated) tests.
  • Define project requirements, specifications, release scope, or timelines.

The next thing you should have is a section outlining scenarios, fictional (but stereotypical) descriptions of the users of the system, and what motivates them to use it, arranged in a role-based way.  This is often overlooked, but including it has several key benefits:

  • For the specification author, it allows you to mull over the why part of your design.  Often, it’s here that you’ll discover that the user doesn’t necessarily need a drill, he just needs a way to put holes in wood.  Or maybe he doesn’t even need that.  Maybe he just needs a piece of wood with some holes already in it.  Or maybe he needs a fabrication plant.  By contemplating your users and their needs, you’re on your way to finding out one way or another.
  • Another benefit to the spec author is that it gives you some place to start.  This is a larger benefit than many realize.  Especially if you have difficulty getting things started when faced with a blank document, it offers you a low-pressure way to get going, and thinking productively about the project.
  • For developers, reading the scenarios offers a chance to mentally tune in to what the project is about and internalize the user’s needs.
  • For stakeholders, reading the scenarios offers a quick check that the specification author “gets it”, and has correctly interpreted the stakeholder’s needs.

One other thing about this section: I’m in flat-out agreement with Joel Spolsky when he claims that your specs should at least try to be entertaining.  I’ve done it his way, and it works.  The idea is that by avoiding a dry-as-dust approach, larded with passive-voice constructions and enterprise-y circumlocution, you may just find that people will read your specs, rather than just look at them for a while before becoming benumbed by boredom and wandering off a cliff.  And, by cutting yourself a little slack and cracking a few jokes, the process of writing the spec is immeasurably less painful.  I can’t stress this enough.  If you’ve had to write a spec before, and absolutely hated doing it, there’s a good chance it’s because someone was making you write a crappy spec.  People may claim that writing in a breezy, pleasant-to-read voice results in “unclear”, or worse, “unprofessional” specs.  Do not listen to such people, because “clarity” and “professionalism” are concepts orthogonal to humor.  Granted, you could waste everyone’s time with several pages of anecdotes about somebody making peanut butter and jelly sandwiches, but I’m hardly advocating that.  You can, with a little practice, easily stay relevant and give your audience a reason not to gouge out their eyes with a rusty spoon.  Here’s an example of what I mean:


Administrative/Managerial: Michael J. “Mike” Nelson, QA manager for FooCo, is a very busy man.  FooCo produces a number of digital products which it makes available online through subscription services.  The different products, ranging from online publications on urban farming to streaming on-demand mp3s of Japanese folk-punk bands, to webcasts of B-movies sent to marooned satellite crews in outer space, are accessible to subscribers via different web portals.  Each live, or production, web portal is mirrored by a test, or ‘release’ portal, which contains the same content as its production twin but is not accessible to the unwashed masses (i.e., anyone outside of FooCo).  It is these release portals that Mike deals with, by running test cases against them before allowing those zany Russian engineers from a FooCo subsidiary to push change orders through to the production servers, thus preserving life, liberty, and sanity for all concerned, except the poor sod stuck in space watching bad movies.  He’s a lost cause.

Mike has several employees under his direct supervision, who in the main are concerned with executing the test cases Mike has defined — that is when they’re not tied up discussing how lame the Fox network was for canceling Arrested Development (very) or typing "lol 1337gUnn3r pWns j00" to each other during heated matches of Extreme Beach Volleyball Zombie Elimination Massacre 2.  They’re an easily distracted lot, these testers, and so Mike needs to be able to define, rather specifically, exactly what it is they’re supposed to be testing for, what steps they’ll need to run through in order to appropriately test it, and what results they should see if the test passes.  If they don’t see the expected results, the test fails.  If they can’t run the test as specified by the steps, the test is blocked.  If they do, by some minor miracle, reproduce the test in just the way Mike specified and they subsequently observe (by a major miracle) the expected results, the test passes.  In any case they must report the results back to the system so that Mike, who is again a very busy man, can generate a report showing what tests have been run against what builds of the various products and what the results were, so he can then print out volumes of data and dump it on his boss’s desk, thereby justifying his salary, and, at times, his very existence.  Oh, and these reports are also helpful in assessing the overall health of the system, and determining whether or not to push the project onto the production servers.

So, Mike needs to be able to define to the Testy-Test Test Case Database the following items:

  • Products that FooCo sells
  • Main feature groups of each product (security, UI, content, etc.)
  • Releases for each of the products (the upcoming December maintenance release, the initial rollout release, or the OhMyGodTheServerIsOnFire Bug Patch)
  • Test cases for his testers (or, on weekends, Mike himself) to execute.
  • Test case suites (groups of related test cases) to be executed against releases, in a particular order.
  • Reports.  Mike needs these especially bad.

Tester:  Franklin T. “TV’s Frank” Vespucci is an underling tester at FooCo.  Frank’s biggest problem is that he hasn’t unlocked all the secret swimsuit costumes on EBVZEM2 yet.  That takes lots of time.  In order to save time, Frank needs to reduce the amount of work required to execute, and track the results of, the test suites assigned to him by his supervisor Mike.  What Frank really needs is to become clairvoyant and telepathically communicate with product management, the customers, and the servers so he can holistically grok that the system is, or is not, working as planned.  But Frank’s not the sharpest tool in the shed, so he has to do it the old-fashioned way by systematically executing every test that will verify that a release meets its requirements, will satisfy the customers, and won’t set the servers on fire again.

Frank needs to be able to log into the Testy-Test Test Case Database, see what releases of what products he’s responsible for testing, and pull down and execute the test cases that will allow him to at least cover his butt in case of catastrophic failure.  He will need to execute each case in turn, and report back to the system which test cases passed, which failed, which he was unable to run, and which he just didn’t feel like running.  For any result other than a test passing, Frank also has to be able to report the reason why the test didn’t pass.

And so forth.  Again, the point in all of this is to attempt to go a little further so that people will want to read the spec.  Often, I’d get back comments on my specs like “lol extreme beach volleyball zombie elimination massacre, that was funny… btw, I noticed that you didn’t specify what happens when an anonymous (non-logged in) user is browsing and…” which is always gratifying.  Then again, I’d also get “dude, arrested development sucked.  Anyway, I noticed that you didn’t specify what happens when an anonymous (non-logged in) user is browsing and…”  In either case, whether my readers liked my humor or not, they wound up reading the spec, and paying attention to what it said.

Then it’s time to go into the details.  Use cases are a fine way of doing this.  Applications have a kind of emergent order to them; web apps, for instance, tend to group tasks by page.  By writing out use cases, you can usually quickly see which use cases belong on which pages.

Just a personal observation, from experience:  At first I had trouble with use cases (well, any detailed functional specs in general) because it’s rarely clear when to start, and when to stop, specifying.  “Hmmm, does this description of what this field is for belong in the use case?”  That sort of thing.  You can sit and scratch your head for hours and end up writing only one page of spec in a day.  The problem is that you’re trying to do several things at once — understand the problem domain, grok what the user needs, design the system — and you shouldn’t overly burden yourself with designing your spec document, which is exactly what’s going on when you hit that kind of impasse.  The best way I’ve found for dealing with this problem is very similar to how you deal with it in code:  If you think you need to say something, just go ahead and say it.  If later review shows that it’s unnecessary, you can always delete it.  Don’t waste time agonizing.  If you find yourself saying it again, it may be a sign that you want to pull it out into its own section and replace it with a reference.  Saying something three times is almost a dead giveaway that you should do this.

In other words, this is an iterative process, and you can always “refactor” your spec.  As an example, let’s start with our first (arbitrarily chosen) use case, where an admin creates another user.  A first crack might look like:

Add user to system
Precondition: user is logged on with admin privileges.
User clicks “Create new user” button.  The “New user information” panel becomes visible.  The panel contains fields to input required and optional information about the new user.
User (logon) name.  Required.  64-character (max) Unicode text field.  This is the name that the user will use to log on, as well as being the name that the system will use in all communications with the user where addressing the user by name is called for, as well as the identifier that the system will use when recording the user’s activity.
Password: 16-character (max) password field.  Optional.  If left blank, the system will auto-generate a password and email it to the user (user will be prompted to change password on initial login).
…etc., etc…

User fills in all required and chosen optional fields, then clicks “create this user”.  System validates all free-input text fields (except password, if given) by:
Checking for null.
Checking for minimum length.
Checking for maximum length.
Checking for valid initial characters.
Checking for SQL injection vulnerabilities.

Passwords are validated by:
Checking for null.
Checking for minimum length.
Checking for maximum length.
Checking for valid initial characters.
Checking for a mix of upper- and lower-case characters, and at least one numeral.
Checking for SQL injection vulnerabilities.

All fields that are invalid (whether required or not) are to be reported to the user in a single block at the top of the form.  Likewise, the labels on all fields that received improper input will be prefixed with a blinking red asterisk.  The form must not reset the fields, causing the user to re-input any valid data.  To report the input errors to the user, the top block will use the following text.
For user (logon) name starting with an invalid character:
“User names must start with an alphabetic character of either case (A-Z or a-z).  Numerals and symbols are not allowed for the first character.”
For user (logon) name missing:
“Please supply a User (logon) name.”
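Notice that the spec dictates the exact message text; that exactness is what makes the requirement testable.  Here’s a sketch (Python, with hypothetical names, and covering only the two failure modes quoted above) of how QA might hold an implementation to the spec’s wording:

```python
# The spec's exact wording, keyed by failure mode (hypothetical structure;
# only the two messages quoted in the spec excerpt are modeled here).
LOGON_NAME_MESSAGES = {
    "missing": "Please supply a User (logon) name.",
    "invalid_initial": (
        "User names must start with an alphabetic character of either case "
        "(A-Z or a-z).  Numerals and symbols are not allowed for the first "
        "character."
    ),
}

def logon_name_failure(value):
    """Return the failure key for a logon name, or None if these checks pass."""
    if not value:
        return "missing"
    if not ("A" <= value[0] <= "Z" or "a" <= value[0] <= "z"):
        return "invalid_initial"
    return None  # length and character-set checks would live elsewhere
```

A QA test then asserts that the error block at the top of the form contains `LOGON_NAME_MESSAGES["missing"]`, byte for byte, whenever the field is left blank: a simple pass or fail.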

Well, not bad.  But there’s repetition in there, and definitions of things probably applicable elsewhere, so let’s pull them out into referenced sections.  By pulling out the definitions of what the “User Logon Name” is for, and yanking the generic validation scheme definitions, we’d wind up with something like this:

Add user to system
Precondition: user is logged on with admin privileges.
User clicks “Create new user” button.  The “New user information” panel becomes visible.  The panel contains fields to input required and optional information about the new user.
User (logon) name.  Required.  64-character (max) Unicode text field.  See “User Logon Name” in the definitions for complete description of the system-wide meaning of this field.
Password: 16-character (max) password field.  Optional.  If left blank, the system will auto-generate a password and email it to the user (user will be prompted to change password on initial login).
…etc., etc…

User fills in all required and chosen optional fields, then clicks “create this user”.  System validates all free-input text fields (except password, if given) as specified in the “text field validation” section.  Passwords are validated according to the “password validation” specification.

All fields that are invalid (whether required or not) are to be reported to the user in a single block at the top of the form.  Likewise, the labels on all fields that received improper input will be prefixed with a blinking red asterisk.  The form must not reset the fields, causing the user to re-input any valid data.  To report the input errors to the user, the top block will use the following text.
For user (logon) name starting with an invalid character:
“User names must start with an alphabetic character of either case (A-Z or a-z).  Numerals and symbols are not allowed for the first character.”
For user (logon) name missing:
“Please supply a User (logon) name.”

And so on and so forth.  The basic, iterative process of steady refinement works just as well with specs as it does with code.
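In code, the same refactor might look like this.  It’s a sketch with invented names and thresholds (the spec above never gives a minimum password length, so the 8 here is purely illustrative, and the SQL-injection check is omitted): the checks the first draft repeated for every field collapse into one shared routine, just as the spec’s repeated prose collapsed into referenced sections.

```python
import string

# The spec refactor, mirrored in code.  Names and the minimum password
# length of 8 are assumptions for illustration, not from the spec above.

def validate_text_field(value, min_len, max_len, initial_chars):
    """The shared checks: null, minimum length, maximum length, initial character."""
    if not value:
        return ["missing"]
    errors = []
    if len(value) < min_len:
        errors.append("too short")
    if len(value) > max_len:
        errors.append("too long")
    if value[0] not in initial_chars:
        errors.append("invalid initial character")
    return errors

def validate_password(value):
    """Passwords run the shared checks plus their own extra rule."""
    errors = validate_text_field(value, 8, 16, string.ascii_letters)
    if value and not (any(c.islower() for c in value)
                      and any(c.isupper() for c in value)
                      and any(c.isdigit() for c in value)):
        errors.append("needs mixed case and at least one numeral")
    return errors
```

As with the spec, nothing about the behavior changed; the duplication simply moved into one place, where the next revision only has to happen once.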

One last section that is helpful to have is an “open issues” section, which lists the issues that have been noticed but not sufficiently dealt with.  When you’re writing a spec, you’ll often think of something and realize “oh, yeah, I’d better talk about that” but you’ll be in the middle of something else.  Rather than totally break your flow, just make a quick note in the open issues section and get back to what you were doing.  This is also a good place to put items sent to you by others until you can adequately address them.