How can the tenants with the best-performing landlords also be the least satisfied? It is all a matter of expectations, as Janis Bright discovers.
"If you are not completely satisfied with these peanuts/crisps/plasticised sushi rolls..."

The message is there on a million wrappers, telling you the company cares about your experience of eating its product. Deeply.

But what does it mean? Surely you did not expect that eating a 28p bag of crisps was going to change your life?

Face it, expectations of crisps are low. So you probably will be "satisfied" – as long as they are not soggy or mouldy.

And that is similar to the problem of measuring tenants' satisfaction with their landlord: it is all about what they expect in the first place. If tenants think council houses are mouldy and soggy, they will not have high hopes.

Councils are recording remarkable ratings on the new performance indicator that, by government order, they now have to complete. In some areas, 98 per cent of tenants are said to be "satisfied" with the service they get.

A Northern Housing Consortium and HouseMark study published last month found the median satisfaction rating among northern councils was a healthy 76.5 per cent. But it found no correlation between higher satisfaction ratings and better service as measured by "objective" performance indicators.

The suggestion is that the current way of measuring satisfaction is unreliable.

That alone would be bad enough news for the Audit Commission and Housing Corporation, which are building their inspection approach around gauging tenants' experience of the service.

But it is also backed by an earlier consortium report in which councils complained that unrepresentative focus groups and individuals were being given too much weight in inspections.

Chief housing inspector Roy Irwin says he is well aware of the problem. He welcomes debate on the issue, but stresses that performance indicators are only the "can openers" that give a starting point for more detailed investigation.

He acknowledges a perverse effect in which the best councils can get the worst satisfaction ratings, because they have raised tenants' expectations about the service.

Thus Bolton, with an almost unmatched history of 10 years' worth of "well above average" ratings from the regional government office and good performance indicators, has the lowest satisfaction rating in the north at 63 per cent.

York council was a leader in starting to gauge tenants' views in the 1980s. It has amassed a dozen years' worth of data in its annual service monitor. The average level of satisfaction since 1996 (when the current unitary council was created) has been just over 85 per cent, with a peak of 89 per cent in 1998.

In last year's survey, overall satisfaction was 77 per cent – but 85 per cent considered their rent good value for money and 93 per cent considered their home to be in good condition.

The picture emerging is that while tenants are generally very happy with their home, they are increasingly concerned about wider issues of the neighbourhood, crime and, especially, noise nuisance – all problems that are harder to solve than repairs.

Bill Hodson, senior assistant director in York's community services department, says a factor perhaps overlooked in collecting figures over time is that the nature and type of tenants has changed.

"Our age profile 12 years ago was that most tenants were aged over 40. Now we have many tenants in their early twenties, or even teenagers. They have a different attitude than older people: they expect fast service and reject traditional council communication methods."

The council's best value review, now being completed, will lead to an action plan that focuses on the key areas of improvement across the service.

Tenants' level of satisfaction is an important factor, but not the only one.

Hodson is apprehensive about how the Housing Inspectorate will view the figures. He says: "In York, tenants' expectations have risen over time as services have improved. That's a good thing as it challenges us to do even better.

"If you focused only on the headline figure, you would think we were getting worse, but other measures show we are not. We follow up the service monitor by teasing out these factors and have a dialogue with tenants and staff about it."

That is something with which marketing expert Richard Beevers wholeheartedly agrees. The Capita Consulting director is concerned that, whether surveys or focus groups are used, the methodology should be robust – and that is not currently the case.

"If eight people in a group say they are annoyed, you simply cannot extend that as a genuine verdict," he warns. "If you do an exercise involving many focus groups, you might be reasonably confident about your conclusions – but they will not be statistically valid."

Beevers says that if one is going to measure satisfaction, it is useless to use a single broad category. "There is a huge difference between those who are 'satisfied' and those who are 'very satisfied' and the latter are the ones you want to hear about," he argues.

A single 'satisfied' category leads people to settle into complacency.

"Nobody remembers Alexander the Mediocre," he notes.

He says that arguments over satisfaction percentages are academic, because the differences do not reflect the huge task social landlords face.

A council that completes non-urgent repairs in 25 days is not a whole lot different from one that takes 35, he says: both are just nowhere near good enough.

Four years ago, Beevers conducted in-depth research into tenants' perceptions and found alarming undercurrents of dissatisfaction. In fact, he recorded the worst ratings he had encountered in any sector or service. He concluded that the council housing "brand", successful in the past, was now a failure.

He says: "These problems are the result of the old monopoly situation. That is why we have structural demand problems now: if people have a choice, they will take it.

"Tinkering with the standards and arguing over percentage satisfaction ratings is not going to achieve the step change in performance that is needed."

Sheffield Hallam University principal housing research fellow Paul Hickman goes further.

"It would be hugely dangerous to suggest that tenants are 'wrong' in their assessments of local authorities' performance.

"In the real world the customer is never wrong – it is their perception of reality that drives their purchases and not 'actual' performance or quality," he says.

"Telling council tenants that their assessment of their landlord's service is wrong is akin to McDonalds or Burger King chastising potential customers for not buying their products.

"In the real world, often contrary to objective assessments of quality, it's what the customer, and not the producer, thinks that matters."

Hickman wants more, not fewer, performance indicators – both objective and subjective – that together would form an accurate picture of service quality. He also believes that interpreting indicator scores needs greater sophistication.

HouseMark chief executive Ross Fraser agrees that customers' views must be taken very seriously in any analysis.

But he urges some caution. "Customers' views are affected by their living environment, and they don't have perfect knowledge. Those landlords that are effective at communicating success may get higher ratings. There is also a danger of focus groups being dominated by a particular perspective."

He believes satisfaction performance indicators are key, but are not the only measure that counts. "Where they are at variance with other indicators, that should start a debate. And the question to answer is: if the others are good, why are your tenants not happy?"

To that end, the DTLR is about to launch a research project to unpick the meaning of that tricky word 'satisfaction'.