I’ve never worked on software where I needed to use design patterns. According to Paul Graham’s Revenge of the Nerds essay, design patterns are a sign of not enough abstraction.
To quote him directly: “For example, in the OO world you hear a good deal about ‘patterns’. I wonder if these patterns are not sometimes evidence of case (c), the human compiler, at work. When I see patterns in my programs, I consider it a sign of trouble. The shape of a program should reflect only the problem it needs to solve. Any other regularity in the code is a sign, to me at least, that I’m using abstractions that aren’t powerful enough – often that I’m generating by hand the expansions of some macro that I need to write.”
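To make Graham’s point concrete, here is a minimal Python sketch of what he means by “generating by hand the expansions” of a missing abstraction. The names (`DiscountStrategy`, `checkout_oo`, etc.) are hypothetical, invented purely for illustration; the Strategy pattern is just one example of a pattern that dissolves when the language has first-class functions:

```python
from abc import ABC, abstractmethod

# The classic Strategy pattern, written out by hand in class-based OO style:
class DiscountStrategy(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price

class HalfOff(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price * 0.5

def checkout_oo(price: float, strategy: DiscountStrategy) -> float:
    return strategy.apply(price)

# With first-class functions, the "pattern" disappears:
# a strategy is just a function from price to price.
def checkout_fn(price, discount=lambda p: p):
    return discount(price)

assert checkout_oo(100.0, HalfOff()) == 50.0
assert checkout_fn(100.0, lambda p: p * 0.5) == 50.0
```

The two versions behave identically; the class hierarchy in the first is exactly the kind of regularity Graham is describing, because the language feature (passing a function) replaces the ceremony entirely.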
I was just wondering whether everyone thinks design patterns are overused and a symptom of not having enough abstraction in your code.
I don’t think the patterns per se are the problem, but rather the fact that developers can learn patterns and then overapply them, or apply them in ways that are wildly inappropriate.
The use of patterns is something that experienced programmers just learn naturally. You’ve solved some problem X many times; you know what approach works, and you use it because your skill and experience tell you it’s appropriate. That’s a pattern, and it’s okay.
But it’s equally possible for a programmer who’s less skilled to find one way to do things and try to cram every problem they come across into that mold, because they don’t know any other way. That’s a pattern too, and it’s evil.