The reason you specified an interface was to make grading tractable. Moreover, implementing to an interface is a crucial skill in computer science. If their work does not conform to the interface, give them a zero and offer them the opportunity to resubmit within a week, with a working interface, at a letter-grade penalty. Of course, make it clear that this is not their opportunity to fix bugs in the program, only the interface, and that you will be checking the differences between their first submission and their resubmission.
Yes, verifying differences in the second round of grading may be a bit more work than simply fixing their interfaces yourself, but you can have a quick turnaround on the first round and, ideally, you will only have to do it once. Better yet, make it clear that each student gets this opportunity exactly once in the course, and that subsequent failures to conform to the interface will result in a non-negotiable zero.
Yes, it's hardass, but it teaches them that a sine qua non of programming is being able to work within a design, whether it is your own or inflicted upon you.
But something needs to be done, that's for sure. Now to nail down my exact policy in the next few days, before spring term syllabi go out....
Well, you can completely automate the return-and-resubmit part if your interface is solid enough. You write a harness that tests each submission's interface, starting with make. Run the harness on each submission, and kick back the ones that fail. You can even give this out to the students ahead of time. Then you create a harness that actually tests results for your own use in grading.
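A minimal sketch of what that first-round harness might look like, in Python. Everything here is an assumption standing in for your actual spec: that each submission lives in its own directory with a Makefile, that `make` must produce an executable named `assignment`, and that `assignment --version` exiting cleanly stands in for whatever entry points your interface really requires.

```python
#!/usr/bin/env python3
"""Sketch of an interface-checking harness (not a grader).

Hypothetical interface spec assumed here:
  - each submission is a directory containing a Makefile
  - `make` must produce an executable named `assignment`
  - `assignment --version` must exit 0
"""
import subprocess
import sys
from pathlib import Path

REQUIRED_BINARY = "assignment"   # hypothetical name from the spec

def check_submission(subdir: Path) -> list:
    """Return a list of interface violations; empty means it passes."""
    problems = []
    # Step 1: the build itself is part of the interface.
    build = subprocess.run(["make"], cwd=subdir,
                           capture_output=True, timeout=60)
    if build.returncode != 0:
        problems.append("make failed")
        return problems
    exe = subdir / REQUIRED_BINARY
    if not exe.is_file():
        problems.append("make did not produce " + REQUIRED_BINARY)
        return problems
    # Step 2: smoke-test each required entry point.
    try:
        run = subprocess.run([str(exe), "--version"],
                             capture_output=True, timeout=10)
        if run.returncode != 0:
            problems.append("--version did not exit cleanly")
    except OSError:
        problems.append("could not execute " + REQUIRED_BINARY)
    return problems

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: harness.py <dir-of-submissions>; kick back any FAILs.
    failed = False
    for subdir in sorted(Path(sys.argv[1]).iterdir()):
        if subdir.is_dir():
            problems = check_submission(subdir)
            print(subdir.name + ": " +
                  ("OK" if not problems else "; ".join(problems)))
            failed = failed or bool(problems)
    sys.exit(1 if failed else 0)
```

Since the harness only probes the interface and never looks at results, you can hand it to the students verbatim; anyone who runs it before submitting has no excuse.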
By the way, do you sandbox your grading? I mean, suppose a malicious student submits a trojan that downloads a rootkit. Sure, it doesn't sound like a concern, until it happens.
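For runaway programs, something as simple as POSIX resource limits plus a wall-clock timeout goes a long way. To be clear, this sketch is not a real sandbox and the limit values are illustrative; against an actually malicious submission you want the whole grading run inside a throwaway VM or container with no network and no credentials.

```python
"""Sketch of resource-limiting untrusted submissions (POSIX only).

This stops runaway programs (infinite loops, fork bombs, disk
fillers), NOT a determined trojan -- for that, grade inside a
disposable VM/container. Limit values below are arbitrary examples.
"""
import resource
import subprocess

def _limits():
    # Runs in the child process just before exec().
    resource.setrlimit(resource.RLIMIT_CPU, (10, 10))          # 10 s CPU
    resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30)) # 1 GB memory
    resource.setrlimit(resource.RLIMIT_FSIZE, (1 << 20, 1 << 20))  # 1 MB files

def run_untrusted(cmd, workdir):
    """Run a submission under rlimits with a 30 s wall-clock cap.

    Returns the CompletedProcess, or None if it timed out.
    """
    try:
        return subprocess.run(cmd, cwd=workdir, preexec_fn=_limits,
                              capture_output=True, timeout=30)
    except subprocess.TimeoutExpired:
        return None
```

Run the grading harness itself as an unprivileged throwaway user on top of this, and at least a downloaded rootkit has nothing interesting to own.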