I've seen too many cases where a student has completed a programming exercise (without cheating) but can't say, for example, what type of values a variable contains at runtime (in the case of Python programs).
It looks like the ability to solve exercises should not be the only (or even main?) criterion for assessment.
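To make the gap concrete, here is a hypothetical snippet of the kind a student can write and run correctly while still being unable to name the type each variable holds (the variable names are my own illustration):

```python
# Hypothetical example: a student can get this to "work" without being
# able to say what type each name refers to at runtime.
line = "3"        # str - input always arrives as text
n = int(line)     # int - explicit conversion
half = n / 2      # float - "/" in Python 3 always yields a float
pair = [n, half]  # list holding an int and a float
print(pair)       # prints [3, 1.5]
```

Asking "what is the type of `half` on the last line?" is exactly the kind of question such a student often cannot answer.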
Do you know any research papers (or books or posts) that tackle this issue and/or recommend other assessment techniques / exercise types?
If I understand your problem correctly, it's that students can create programs that behave correctly without understanding why they behave correctly. I assume they do this through some combination of brute-force trial and error and Stack Overflow searches.
Using program correctness for grading has the desirable qualities of being objective and automation friendly.
So how could you keep these qualities while testing for actual understanding? While I agree that good code structure, variable naming, and possibly comments can offer proof of understanding, they are all highly subjective.
What I would do instead is construct exam exercises that are relatively small requirements changes to homework assignments that students have completed earlier, ideally at least 2 weeks earlier. Then give each student access to the code (s)he submitted for the homework as a starting point for completing the exam exercises. Tune the original homework size and the exam requirements changes so that a student who understands the code and has well-written code can complete the exam exercise quickly, but one who started from scratch would likely not be able to complete the work. This accomplishes 4 things:
Disclaimer: I work in industry and have no research to support this.
I have seen my share of this: the program produces output, but the programmer has no clue how it got there. It's funny how often that happens.
This is what I have done to at least handle the issue.
Before assessments, I break down the evaluation to include the following.
Obviously, this makes me their least favourite faculty in terms of evaluation and gets me a lot of complaints but I stick with it.
Update: Please note that, during evaluation, I don't expect all of them to be done. Say a student is already using meaningful names everywhere; I would not mark her/him down for not writing comments.
I am wondering how much of this is because they cannot express themselves in natural language (they don't know the terminology), and how much is because of "fiddle until it works" programming.
This is important: it allows them to communicate with a larger team, to look things up in an internet search, and to answer some of the exam questions.
This is an important technique to use some of the time. However, it should then be followed by evaluation.
For example: trying to find the turn angle for a triangle when drawing with turtle graphics.
Table of angles:
| number of sides | angle |
|-----------------|-------|
| 3               |       |
| 4               | 90    |
| 5               |       |
| 6               |       |
| 7               |       |
You can discuss what the best strategy is [binary search]. Once you have a completed table (4 or 5 entries), you can start to look for a pattern. Is there a general formula for a polygon? Also, getting pupils to pretend to be the turtle can help, or the teacher can be the turtle. In either case another pupil is the instructor, who tells the turtle to move and turn. While doing this they gain a deeper understanding of how the code works.
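The general formula the table points toward can be sketched in a few lines of Python (a minimal sketch; the `turn_angle` and `polygon_commands` helpers are my own illustration, not part of the original exercise):

```python
def turn_angle(sides):
    """Exterior angle the turtle turns at each corner of a regular polygon.
    Consistent with the table: 4 sides -> 90 degrees."""
    return 360 / sides

def polygon_commands(sides, length=100):
    """The instruction list a pupil 'playing turtle' would follow:
    alternate 'go forward' and 'turn' once per side."""
    return [("forward", length), ("turn", turn_angle(sides))] * sides

# With the real turtle module, a pupil could then drive the drawing:
#   import turtle
#   for cmd, value in polygon_commands(5):
#       turtle.forward(value) if cmd == "forward" else turtle.right(value)
print(turn_angle(4))  # prints 90.0, matching the filled-in table row
```

Acting the commands out physically and then seeing the same list drive the turtle is what connects the pattern in the table to the code.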
I'm afraid my answer here will suggest that you completely revamp how you teach.
The sort of problems that result in issues like this seem to me to be problems that treat the computer as a fancy calculator. Problems given to students are of the "math-y" type. Some require tricky thinking, of course, but they are unlike the sorts of problems that people in the real world write programs to solve.
My suggestion is, instead, to use simulation as the basis of your teaching. There are many ways to do this, but one of the best and easiest to introduce is the Greenfoot system. It provides both an IDE (for Java) and a simulation framework.
There is also a community of users, the Greenroom, whose members contribute simulation frameworks that you can start with and modify. There are hundreds of such simulations, some with teaching materials, even videos. Note that the Greenroom is a membership site; you will need to join.
Here are a few examples. I have used some of them, but not all.
Fuel Depot Question from APCS-A
Greeps - A Programming Competition One of the originals.
Karel J Robot meets Greenfoot A robot simulation - from the book.
2d Platformer Similar to Mario.
There are hundreds more, both at the main Greenfoot site and the Greenroom. Only the Greenroom requires (free) membership. With these sorts of frameworks and the programming that it involves, the problem you discuss simply won't arise.
It has been noted that I often promote Greenfoot. I'm not affiliated with the site or its developers, nor have I ever been. I know some of the developers and I have produced materials for use with Greenfoot. I also occasionally submit bug reports and feature requests to the site.
But Greenfoot aligns well with my general teaching philosophy and makes Object-Oriented Programming easy to teach to novices in an interesting and engaging way.
It's time for a little "code review." Have a student present his code in front of the class and talk about how he made it work. This happens in the professional world, and there is no time like the present to begin learning this vital skill.
I also encounter this issue (I just encountered it yesterday in a lab exam). This is how I differentiate between someone who has done his work and just isn't able to explain it, and someone who has crammed/cheated:
Usually Point 2 is enough to differentiate. Be sure to let him/her sit calmly so that nerves don't get hold of him/her - it works for me!
Isn't this what lab reports are for? Computer science is a science and should come with some basic rigor. Copying some code and hacking away might get an output, but requiring students to do analysis, flow charts, etc., will show how much they really understand the process. Even simple things like writing a paragraph on the theory and another on the basic algorithm used to solve the problem would go a long way.