There appears to be a good correlation between what can be seen on the graphs and what is heard on listening tests.
Any deviation from the frequency response of the original source material up to 16kHz can easily be heard in listening tests. This is most aptly demonstrated by the 96kb/s VQF encoded file.
It is visual observation of the spectral view of the material that provides the best indication of the perceived sound quality. If you can see any pruning of the frequency content below 16kHz, you will almost always hear it in the output. Inclusion of spectral content above 16kHz is heard as increased clarity in the listening tests.
In my opinion it is these two points that lead to my rather blasé response to current 128kb/s encoded AAC files. The suppression of content below 16kHz reduces the impact of the sound, but at the same time the inclusion of content above 16kHz has a positive effect. The two tend to cancel each other out, and this most probably explains my impression that the output suffers from some colouration in the upper midrange.
Any content visible in the spectral view that is not present in the reference material is also easily heard in the output as a reduction in signal clarity.
To sum up, you can tell a good codec by a frequency response that accurately follows the source material to at least 16kHz, no reduction in spectral content below 16kHz, and the inclusion of spectral content above 16kHz.
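The pruning described above can also be checked numerically rather than by eye. Below is a minimal Python sketch of the idea, using numpy: one second of synthetic white noise stands in for real source material, a brick-wall low-pass filter stands in for a codec's high-frequency pruning, and the `estimate_cutoff` helper (with its -40dB threshold) is my own assumption, not part of any codec or analysis tool.

```python
import numpy as np

def estimate_cutoff(signal, rate, threshold_db=-40.0):
    """Estimate the highest frequency whose magnitude stays within
    threshold_db of the spectrum's peak -- a rough numerical proxy
    for the pruning that is visible in a spectral view."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)
    above = np.nonzero(db > threshold_db)[0]
    return freqs[above[-1]] if above.size else 0.0

rate = 44100
rng = np.random.default_rng(0)
noise = rng.standard_normal(rate)  # one second of full-band white noise

# Simulate a codec that prunes everything above 16kHz by zeroing
# those bins in the frequency domain (a brick-wall low-pass).
spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(noise), d=1.0 / rate)
spectrum[freqs > 16000] = 0.0
pruned = np.fft.irfft(spectrum, n=len(noise))

print(estimate_cutoff(noise, rate))   # near the 22050Hz Nyquist limit
print(estimate_cutoff(pruned, rate))  # near the simulated 16kHz cutoff
```

A real comparison would run the same estimate on the decoded output of each codec and on the reference material; a codec whose estimated cutoff falls below 16kHz would be expected, by the criteria above, to fare poorly in listening tests.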