I've been reading up on OOP theory a lot recently, and one question keeps coming up for me.

It seems that some of the concepts, such as abstract classes, interfaces, and to a lesser extent visibility, have more to do with situations where other people will be extending your code...

For example, let's say I was making a simple CRUD app for a small company to store customer information, contact info, orders, etc. The size of the app may warrant an OOP architecture, but I know from the outset that no one else will be extending this code and that it will be a 'closed' project once I've finished it. As such, it seems that setting things up for the future, and creating those sorts of protections and contracts -- via abstract classes and interfaces -- may be overkill? Does that seem accurate?
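
To make that concrete, here's the kind of contract I mean. The CustomerRepository interface and Customer record below are hypothetical, made up just to illustrate the question:

    import java.util.List;
    import java.util.Optional;

    // A contract for storing customers. In a one-person, one-off CRUD app,
    // is it worth writing this layer at all?
    public interface CustomerRepository {
        Optional<Customer> findById(int id);  // look up a single customer
        List<Customer> findAll();             // list every customer
        void save(Customer customer);         // insert or update
        void delete(int id);                  // remove by id
    }

    // Minimal data holder so the sketch is self-contained (Java 16+).
    record Customer(int id, String name, String email) {}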

All that being said, I'm all for doing things correctly and thoroughly at all times. And I also understand that one of the main purposes of OOP is maintainability and extensibility for the future. I just have a hard time wrapping my head around interfaces and abstract classes sometimes, and I have a feeling it's because the smaller projects I do rarely involve other people extending the code, or creating APIs and such.

Comments

It's more for your benefit than anything else. When you have to go back to maintain it, do you want things to be clearly organized and simple?

Written by Rafe Kettler

True, and I get that. But I suppose I'm not quite sure how interfaces and abstract classes may play into that when dealing with a small-scale app.

Written by dtj

Accepted Answer

The size of the app may warrant an OOP architecture, but I know from the outset that no one else will be extending this code and that it will be a 'closed' project once I've finished it. As such, it seems that setting things up for the future, and creating those sorts of protections and contracts -- via abstract classes and interfaces -- may be overkill? Does that seem accurate?

Well, in short, that's not accurate. If you're writing code for a reason, there's a good chance it'll need to be maintained, and following good OOP practices will make it easier to debug problems in the future. But maintenance isn't even the key benefit of OOP.

As you have indicated, the true benefits of the OOP paradigm show up when it comes to future modification. From the 60/60 rule, we know that 60% of a project's cost comes after delivery, and 60% of that cost comes from changes to the specification. That's why you should do "all that work".

Sure, for some projects it will be complete overkill. But the catch is that the only time you can actually know whether it was overkill is well after delivery, once it turns out you never had to change the application. I've seen firsthand many projects that fit both categories: ones we thought would be dead after delivery but weren't, and ones we thought would stay really active but died. Since you can't know for certain until the time comes to make the modifications, it may well be worth it (and, depending on your exact situation, more likely is) to just invest the effort in all projects...
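
As a sketch of why that up-front contract pays off, reusing the hypothetical CustomerRepository and Customer from the question above: when the specification changes, say from in-memory storage to a real database, you add a new implementation, and every caller that depends only on the interface stays untouched:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Optional;

    // Day one: the cheap implementation that ships first.
    class InMemoryCustomerRepository implements CustomerRepository {
        private final Map<Integer, Customer> store = new HashMap<>();

        public Optional<Customer> findById(int id) { return Optional.ofNullable(store.get(id)); }
        public List<Customer> findAll()            { return new ArrayList<>(store.values()); }
        public void save(Customer c)               { store.put(c.id(), c); }
        public void delete(int id)                 { store.remove(id); }
    }

    // A caller that depends only on the contract. If the spec later demands
    // a database, a (hypothetical, not shown) JdbcCustomerRepository can
    // implement the same interface, and this class never changes.
    class CustomerReport {
        private final CustomerRepository repo;

        CustomerReport(CustomerRepository repo) { this.repo = repo; }

        String emailList() {
            StringBuilder sb = new StringBuilder();
            for (Customer c : repo.findAll()) {
                sb.append(c.email()).append('\n');
            }
            return sb.toString();
        }
    }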

Remember, if you don't have time to do it right the first time, when will you have time to do it over?

Written by ircmaxell
The content is written by members of the stackoverflow.com community.
It is licensed under cc-wiki