This is similar to the problem of over-reliance on inheritance. Inheritance is a very useful OO concept, but it can become a huge problem if the class hierarchy grows too complex and accumulates too many methods. On a project I worked on recently, code inheritance had turned the entire system into a house of cards: small modifications took days to implement and test because any one change anywhere had far-reaching effects on the rest of the hierarchy. That case, however, was caused by a lack of formal design in the early and intermediate phases of development.
Again, let me state that these aren't inherently bad ideas. Code inheritance can be great as long as your design uses it effectively and avoids gratuitous applications of the concept. The root of these problems, I believe, is that code sharing forces the developer to think in terms of how something works rather than the simpler question of what it does. Abstraction is meant to promote that simpler way of thinking, and OO was supposed to leverage it. For communication between unrelated objects through well-designed interfaces, this is completely true. However, it seems to be largely ignored as soon as we start to think about implementing an object hierarchy instead of just using it.
This is why I believe that interfaces are the most important part of OOD: they focus purely on the external use of the object and pull our minds away from the inheritance dilemma of asking, at every level of the hierarchy, how a given method will be called and whether its behaviour will still make sense.
A concrete example of where this idea should be applied is in avoiding multiple inheritance. MI has several problems, both in the complexity of using it effectively and in the efficiency of implementing it (instance-variable placement in an object affected by the MI diamond problem is an intensely complicated issue with only poor solutions). The design complexity comes from MI forcing the developer to think about multiple code-inheritance paths instead of just one, and even one path can easily become unwieldy. At a higher level there is also the question of what multiple inheritance actually means: the sub-class implements solutions to mutually orthogonal problems (orthogonal in that they grow from independent parent implementations), which seems entirely counter to the principle that an object should do only one thing.
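To make the multiple-path problem concrete, here is a minimal sketch of the MI diamond in Python, with hypothetical class names (A, B, C, D are mine, not from the text above). D inherits from both B and C, yet which implementation it actually gets is dictated by the language's method resolution order rather than by anything D states explicitly:

```python
class A:
    def describe(self):
        return "A"

class B(A):
    def describe(self):
        return "B"

class C(A):
    def describe(self):
        return "C"

class D(B, C):
    # D overrides nothing, yet its behaviour is determined by the
    # linearized inheritance path (B is searched before C here).
    pass

d = D()
print(d.describe())  # "B" -- chosen by the MRO, not by D's design
print([cls.__name__ for cls in D.__mro__])
# ['D', 'B', 'C', 'A', 'object']
```

Even in this tiny example, reasoning about D requires reasoning about two inheritance paths at once; reordering the base list to `class D(C, B)` silently changes D's behaviour.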
With MI, the developer must work out how to cobble together these unrelated solutions to address both original problems plus some new one. It is unlikely that this can be done without rewriting a substantial amount of code, because the original building blocks, by definition of their mutually exclusive heritage, have little in common. By instead writing a fresh class that implements the appropriate interfaces, one can focus on the specific meaning of this implementation, driven purely by the needs of the interfaces being implemented (and thus what the class needs to be able to do), rather than getting bogged down in how inherited code would solve its respective sub-problems in different situations.
Implementing multiple interfaces, however, frequently makes sense. For example, it would usually be reasonable for a data-model abstraction to implement both a generic interface for de-serializing key-value coded data and some situation-specific data-provision interface. This way the underlying loader layer can still depend on the generic data-reading facility, and the higher-level application code can depend on the abstraction provided by the data-provision interface, but the implementation need not worry about how the solution would work in some other situation (as inheritance frequently requires, since it borrows parts of a different solution and tries to adapt it to the current problem). Of course, hybrid solutions are also valuable, but they are special cases rather than a general rule.
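The data-model example could be sketched like this, with all names (the two interfaces, `UserModel`, `load`) being hypothetical stand-ins of my own. The loader layer sees only the key-value interface; application code sees only the data-provision interface:

```python
from abc import ABC, abstractmethod

class KeyValueDeserializable(ABC):
    """Generic interface the loader layer depends on."""
    @abstractmethod
    def apply_key_values(self, data: dict) -> None: ...

class UserProvider(ABC):
    """Situation-specific interface the application depends on."""
    @abstractmethod
    def display_name(self) -> str: ...

class UserModel(KeyValueDeserializable, UserProvider):
    def __init__(self):
        self.first = ""
        self.last = ""

    def apply_key_values(self, data: dict) -> None:
        # Generic loader path: keys map straight onto attributes.
        for key, value in data.items():
            setattr(self, key, value)

    def display_name(self) -> str:
        return f"{self.first} {self.last}"

# The loader depends only on KeyValueDeserializable...
def load(model: KeyValueDeserializable, data: dict) -> None:
    model.apply_key_values(data)

# ...while application code depends only on UserProvider.
user = UserModel()
load(user, {"first": "Ada", "last": "Lovelace"})
print(user.display_name())  # Ada Lovelace
```

Neither consumer knows about the other interface, so `UserModel` never has to reconcile how some borrowed parent implementation would behave in a situation it was never designed for.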
Does that seem reasonable?