[comp.edu] Automatic checking of students' answers

juh@cs.hut.fi (Juha Hyvönen) (10/11/90)

Vision:
------

	Student Jack sends a request by e-mail to a mail server asking
	for question # 1. The server sends him back the following.

		"Show the heap after each insertion of the keys
		 A N E X A M P L E into an initially empty heap."

	When student Jill requests the same question she may get e.g.
	the following.

		"Show the heap after each insertion of the keys
		 D I F F E R E N T into an initially empty heap."

	The questions are generated by a "question generator" and
	there should be some variation between any two students'
	questions.

	When Jack has solved the problem, he sends back the answer.
	Now, the "answer analyzer" checks his answer and gives points
	to him.

	(Later, Jack could ask the server how he has done and possibly
	what was wrong with his answer.)

------
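
For concreteness, here is a small sketch (in Python, purely for
illustration; the function name is invented and the max-heap
convention of the book is assumed) of how an "answer analyzer" could
compute the reference states for the heap question above:

    # Insert keys one at a time into a max-heap kept in an array,
    # recording the array after each insertion -- exactly the states
    # the student is asked to write down.
    def heap_states(keys):
        heap, states = [], []
        for key in keys:
            heap.append(key)
            i = len(heap) - 1
            # sift the new key up while it is larger than its parent
            while i > 0 and heap[(i - 1) // 2] < heap[i]:
                heap[(i - 1) // 2], heap[i] = heap[i], heap[(i - 1) // 2]
                i = (i - 1) // 2
            states.append(list(heap))
        return states

    for state in heap_states("ANEXAMPLE"):
        print(" ".join(state))

Checking a submitted answer against this reference is then a plain
state-by-state equality test.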

I am looking for information about what kind of questions can be
checked automatically. (It should be quite easy to generate variations
of the questions.) The system would be tested (and used) in a computer
science course "Data structures and algorithms". The book used is
Sedgewick, R., Algorithms (2nd ed).

The course includes
	- data structures (list, stack, queue, tree, ...)
	- sorting methods (quicksort, heapsort, ...)
	- searching methods (binary search, B-trees, hashing, ...)
 	- string processing (pattern matching, parsing, compression,
	  cryptology, ...)
	- graphs (spanning trees, shortest path, ...)

If you
	- know about a similar system, or
	- have heard of a similar system, or
	- know about suitable questions or question types, or
	- have suggestions about questions or question types, or
	- know anything else you feel is related to the subject

please e-mail me <juh@hutcs.hut.fi> or post to the net. Thank you.

	   / (.__o		..
	  /_/ __/	Juha Hyvonen
	! /  !
	!/ ) !		juh@hutcs.hut.fi
	 ------

eibo@rzsun3.informatik.uni-hamburg.de (Eibo Thieme) (10/15/90)

juh@cs.hut.fi (Juha Hyvönen) writes:


>Vision:
>------

>	Student Jack sends a request by e-mail to a mail server asking
>	for question # 1. The server sends him back the following.

>		"Show the heap after each insertion of the keys
>		 A N E X A M P L E into an initially empty heap."

>	When student Jill requests the same question she may get e.g.
>	the following.

>		"Show the heap after each insertion of the keys
>		 D I F F E R E N T into an initially empty heap."

>	The questions are generated by a "question generator" and
>	there should be some variation between any two students'
>	questions.

>	When Jack has solved the problem, he sends back the answer.
>	Now, the "answer analyzer" checks his answer and gives points
>	to him.

>	(Later, Jack could ask the server how he has done and possibly
>	what was wrong with his answer.)

>------

>	<Questions about this vision omitted>


As I read this article I felt quite uncomfortable, as I couldn't
appreciate this vision as much as Juha Hyvönen appears to. Being
a student of informatics in Germany, I am far too close to the intended
group to stay calm on such a topic.

Automated testing is IMHO one of the most inappropriate methods for
evaluating people's knowledge. Some points worth mentioning:

   1. The set of questions askable is confined to the area
      of reproducing memorized facts and applying memorized
      rules. (Anybody out there in AI-land daring to
      oppose? :-)

   2. Knowledge is *always* embedded, being part of a person
      acting in the world. Paying attention to this *central*
      aspect of knowledge requires direct interaction,
      which in this context means aural examinations.

   3. People tend to forget the nature of learning, believing
      it to be nothing more than is required. The net effect
      is a very poor standard of education.

   4. Much energy will be spent conforming to system
      requirements; ingenious ideas will be considered false.
      Again, people will see the method, not the contents.

Please note that this is only about education, which is a very reduced
view of learning, i.e. attaining wisdom. And note that my arguments
apply to any automated testing, including multiple-choice tests,
not only computer-aided testing. I am willing to explain myself further
if there is any interest.

eibo
      
--
eibo Thieme                             *    FB Informatik
eibo@fbihh.informatik.uni-hamburg.de    *    Universitaet Hamburg
..!uunet!mcsun!unido!fbihh!eibo        *    Schlueterstr. 70
PHONE: +40 4123-5660                    *    D-2000 Hamburg 13  (FRG)

anw@maths.nott.ac.uk (Dr A. N. Walker) (10/19/90)

In article <eibo.656000376@rzsun3> eibo@rzsun3.informatik.uni-hamburg.de
(Eibo Thieme) writes:
>juh@cs.hut.fi (Juha Hyvönen) writes:

[automated question/mark/answer server description deleted]

>   1. The set of questions askable is confined to the area
>      of reproducing memorized facts and applying memorized
>      rules. (Anybody out there in AI-land daring to
>      oppose? :-)

	I'm not in AI-land, but this is not true.  The difficulty
lies in making the answers easily parsable.  I have been using such
systems since 1970 to set and mark problems in Numerical Analysis.
The questions were askable because they conformed to a simple template
in which certain elements could be randomised.  What facts the students
had memorised, or what rules they used was in no way a factor in
the system, which worked because the computer was at least able to
assess the answers.

	[Initially, the computer was used primarily to generate a
"random" question sheet (for the student) and the corresponding correct
answers (for me);  once the computer power became available, the
whole system became interactive, and the student responses were
assessed by the computer.  As the responses were always numbers
or (occasionally) a menu selection, the computer either understood
the response or could reject it as "ungrammatical".  All interactions
were logged, and students could include comments/queries/complaints
in the log for me to deal with.]

>   2. Knowledge is *always* embedded, being part of a person
>      acting in the world. Paying attention to this *central*
>      aspect of knowledge requires direct interaction,
>      which in this context means aural examinations.

	I wanted to find out whether the students could *do* NA.
The course *also* (naturally) included theoretical knowledge,
assessed in the traditional ways, but *practical* knowledge can
often, with imagination, be assessed mechanically.

>   3. People tend to forget the nature of learning, believing
>      it to be nothing more than is required.

	[I'm sorry, but I don't understand this point.]

>   4. Much energy will be spent conforming to system
>      requirements; ingenious ideas will be considered false.
>      Again, people will see the method, not the contents.

	No.  In a practical test, the method and the ingenuity
are irrelevant.  The computer sees the results.  If I ask you to
solve an equation, or to perform a quadrature, and you get the
[unambiguous!] right answer in a reasonable time, fine, no matter
how you do it.  If the answer is wrong, it's no mitigation that
you used a very ingenious method.

	[NA is not, of course, the only area of knowledge susceptible
to automated assessment.  One to watch out for, on the horizon, is
the driving test.]

-- 
Andy Walker, Maths Dept., Nott'm Univ., UK.
anw@maths.nott.ac.uk

juh@cs.hut.fi (Juha Hyvönen) (10/19/90)

From: eibo@rzsun3.informatik.uni-hamburg.de (Eibo Thieme)
Subject: Re: Automatic checking of students' answers
Date: 15 Oct 90 14:19:36 GMT
+------------------------
! juh@cs.hut.fi (Juha Hyvönen) writes:
! 
! >	Student Jack sends a request by e-mail to a mail server asking
! >	for question # 1. The server sends him back the following.
! 
! >		"Show the heap after each insertion of the keys
! >		 A N E X A M P L E into an initially empty heap."
! 
	[...stuff deleted by juh...]
!
! >	When Jack has solved the problem, he sends back the answer.
! >	Now, the "answer analyzer" checks his answer and gives points
! >	to him.
!
! As I read this article I felt quite uncomfortable, as I couldn't
! appreciate this vision as much as Juha Hyvönen appears to. Being
! a student of informatics in Germany, I am far too close to the intended
! group to stay calm on such a topic.
!
! Automated testing is IMHO one of the most inappropriate methods for
! evaluating people's knowledge. Some points worth mentioning:
!........................

I realize I did not mention anything about the intended use of the
proposed system. We are not trying to evaluate the students'
knowledge. We are trying to make sure they do their homework (and
learn while doing it).

In order to pass the course, you have to do (quite a lot of) homework.
That consists of questions like question #1 above. In addition to
that, you have to pass a final exam.

To make sure everyone is doing their homework, it has to be checked.
Homework is divided into five parts, each covering one major
section of the book (R. Sedgewick, Algorithms). The student must get
enough points from every part. And those who do their homework very
well get a bonus that raises their final exam grade, e.g., from 2 to 3
(but not from 0 -failed- to 1; we use a scale of 0...5).

Presently, checking the homework is done by a person. Last spring,
over 500 students took part in the course. That meant that a total of
500 students x 5 parts x 4 questions = 10,000 answers had to be
checked (and the results registered). At the rate of checking one
answer a minute, it would take over 160 hours to check all the answers.
That is one working month! I think that the time would be better spent
teaching than doing a routine job that could be done by a computer.

+------------------------
!    1. The set of questions askable is confined to the area
!       of reproducing memorized facts and applying memorized
!       rules. (Anybody out there in AI-land daring to
!       oppose? :-)
!........................

We are not asking the students to reproduce memorized facts. But
students must know these facts (and rules) in order to be able to
answer (simple) questions like

	"Which of the following arrays can represent a heap?
		1)	a b f h k o p
		2)	c a g u y z
		3)	a h b k m r t "

To answer that, you have to know AND understand the "heap condition"
(and the rule for representing a heap as an array).
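
Checking such an answer mechanically is straightforward. A sketch in
Python (for illustration only; the question does not say whether the
course convention is a min- or a max-heap, so the ordering is passed
in as a parameter):

    import operator

    # Heap condition for the array representation: with 0-based
    # indexing, the children of element i are at 2*i + 1 and 2*i + 2,
    # and every parent must satisfy the ordering against its children.
    def is_heap(a, parent_ok=operator.ge):
        for i in range(len(a)):
            for child in (2 * i + 1, 2 * i + 2):
                if child < len(a) and not parent_ok(a[i], a[child]):
                    return False
        return True

    # Read as min-heaps, arrays 1 and 3 qualify and array 2 does not:
    for keys in ("abfhkop", "caguyz", "ahbkmrt"):
        print(keys, is_heap(list(keys), operator.le))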

Of course, intelligent questions like "Why" or "Explain" or "Suggest"
cannot be automatically checked. But currently, we are not asking
that kind of question.

Could you give some examples of questions we should ask?

+------------------------
!    2. Knowledge is *always* embedded, being part of a person
!       acting in the world. Paying attention to this *central*
!       aspect of knowledge requires direct interaction,
!       which in this context means aural examinations.
!........................

We do not have the resources for direct interaction other than the
classes (lectures). (Most students skip the classes -- attending is
voluntary.) Here are some other observations:

The students could ask questions. They do not.
The students are asked questions. They do not want to answer.

If the students are made to answer, even more of them tend to skip the
class. (We could start a new thread discussing why. Here is one
answer: they are afraid of being wrong and letting everybody know
it. "It is better to keep quiet and let everybody THINK you do not
know the answer than to open your mouth and make sure they KNOW that."
I believe this shyness (?) among us Finns has been discussed in some
newsgroups -- maybe in soc.culture.nordic.)

Conclusion: direct interaction (with too limited resources) does
            not work (with 500 Finns).

What is "aural examination"? Do you mean "oral"?

+------------------------
!    3. People tend to forget the nature of learning, believing
!       it to be nothing more than is required. The net effect
!       is a very poor standard of education.
!........................

To teach, e.g., quicksort, we do not set the homework question "Learn
quicksort."  We set the question: "Solve this problem using the
quicksort algorithm."

What seems to be required: "Solve this problem".
What actually is required: "Learn quicksort".

So, the student learns an algorithm by doing on a small scale the same
thing that a program is supposed to do on a large scale
(learning-by-doing).
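
On a small scale, that means writing down the array after each
partitioning step. A sketch of how the reference trace could be
computed (Python, for illustration; a simple rightmost-pivot partition
is used here, and whatever variant the course actually teaches would
be substituted):

    # Quicksort that records the array after every partitioning step,
    # i.e. the sequence of states a correct manual answer should show.
    def quicksort_trace(a, lo=0, hi=None, trace=None):
        if hi is None:
            hi, trace = len(a) - 1, []
        if lo < hi:
            pivot, i = a[hi], lo
            for j in range(lo, hi):          # keys < pivot go left
                if a[j] < pivot:
                    a[i], a[j] = a[j], a[i]
                    i += 1
            a[i], a[hi] = a[hi], a[i]        # pivot into its final place
            trace.append(list(a))
            quicksort_trace(a, lo, i - 1, trace)
            quicksort_trace(a, i + 1, hi, trace)
        return trace

    for state in quicksort_trace(list("EXAMPLE")):
        print(" ".join(state))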

+------------------------
!    4. Much energy will be spent conforming to system requirements,
!........................

There are (at least) two different kinds of requirements. The mail
server must be able to

	- determine who sent the message
	- extract the answers from the message

These make up what we call the "external format" of the answers.

Then there are the requirements the analyzer program sets for the
answers. Of course, answers must be given in a simple format (that the
analyzer can understand). The graphical format used in the textbook
cannot be used.  Still, the problems should be solved using that
format. The conversion from the graphical format into the required one
is (or should be) very simple. The format of each answer is called the
"internal format".

The internal format reflects the program's implementation of the data
structure. The result is that students learn one implementation of an
abstract data structure. A disadvantage is that they may think that is
THE implementation.
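
As a sketch of how the two formats might fit together (the ANSWER
marker and the line-per-state layout below are inventions for the sake
of the example, not a description of any existing system):

    # Hypothetical internal format: each answer starts with a line
    # "ANSWER <number>", followed by one line per intermediate state,
    # keys separated by blanks.  Extracting all answers from the body
    # of a mail message is then a simple scan:
    def extract_answers(body):
        answers, current = {}, None
        for line in body.splitlines():
            line = line.strip()
            if line.startswith("ANSWER"):
                current = int(line.split()[1])
                answers[current] = []
            elif current is not None and line:
                answers[current].append(line.split())
        return answers

Determining who sent the message would, correspondingly, be a matter
of reading the From: header.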

+------------------------
!       ingenious ideas will be considered false.
!........................

I can remember only one "ingenious idea" in all my checking of the
final exams & homework. It was solving a problem using a method not
described in the book. By the way, several students came up with it.
Therefore, I do not think that ingenious ideas are a problem.
Besides, when the student complains, the answer is checked by a
person.

+------------------------
!	 Again, people will see the method, not the contents.
!........................

Take the quicksort example. What is the content of a sorting problem?
The result (sorted array)? But everyone knows the result! You cannot
mean that.

I think you mean that we should let the student choose the tool (i.e.,
the sorting method) best suited for the problem? Actually, we would
like to. But we also want to make sure s/he knows all the tools and how
they work. And that is the main purpose of this particular course.

+------------------------
! Please note that this is only about education, which is a very reduced
! view of learning, i.e. attaining wisdom. And note that my arguments
! apply to any automated testing, including multiple-choice tests,
! not only computer-aided testing. I am willing to explain myself further
! if there is any interest.
!........................

I certainly am interested.

           / (.___o             ..
          /_/ ___/      Juha Hyvonen
        ! /  !
        !/ ) !          juh@hutcs.hut.fi
         ------

PS.  I got a warning about some bugs in the algorithms in the book
     (thanks Pat, those were new ones to me). Does anyone have a list
     of other known bugs? If not, does anyone happen to know the
     e-mail address of Robert Sedgewick (the author of the book)?

PS2. The course is valued (by the university) as being a "three-week"
     course, i.e., the amount of work needed to learn and understand
     all issues is about 3 working weeks (120 hours) -- including
     classes, homework, and additional personal effort. The course
     deals with sections 1...23 and 29...32 (Fundamentals, Sorting,
     Searching, String Processing, and Graph Algorithms) of the book

	R. Sedgewick, Algorithms, 2nd ed. 650 p. Addison-Wesley, 1988.
	ISBN 0-201-06673-4

     Do you (the net people) think that 3 weeks is reasonable? All the
     algorithms are supposed to be totally new to the students. Every
     student here seems to think that it is a joke.

eibo@rzsun3.informatik.uni-hamburg.de (Eibo Thieme) (10/22/90)

juh@cs.hut.fi (Juha Hyvönen) writes:

>I realize I did not mention anything about the intended use of the
>proposed system. We are not trying to evaluate the students'
>knowledge. We are trying to make sure they do their homework (and
>learn while doing it).

While it is quite reassuring that you have this intent, I would like
to point out that when introducing new technology you should always
consider how it may develop in the future. I think any such system
would eventually take over the evaluation of students, just by being
such a "versatile" and "convenient" instrument.

>To make sure everyone is doing their homework, it has to be checked.

Here we are on the verge of the philosophy of education. I don't
believe it is sensible to force people to do their homework. Following
my main line of argument, I would say that you are accomplishing next
to nothing by any such scheme, perhaps even making things worse. My
experience and deep conviction is that

     You can only learn if you want to know.
     Luckily, this is a natural trait of humankind.

If you force them to do their homework they will learn as much and as
intensely as required to do it, instead of wanting to know ever more as
they acquire a higher proficiency. There might be some people who would
not lose their interest in learning, even under such hard conditions,
and I am afraid I have to admit that there are quite a lot of those
who are perfectly happy with being pressed to work. But, even if it is
an arrogant position: I don't want anyone to succeed at the university
who needs to be pushed forward all the time. 

>Presently, checking the homework is done by a person. Last spring,
>over 500 students took part in the course. That meant that a total of
>500 students x 5 parts x 4 questions = 10,000 answers had to be
>checked (and the results registered). At the rate of checking one
>answer a minute, it would take over 160 hours to check all the answers.
>That is one working month! I think that the time would be better spent
>teaching than doing a routine job that could be done by a computer.

Whenever you discover that something is not operable, you should first
think of it as wrong, instead of asking a computer to do what you can't.

>Could you give some examples of questions we should ask?

I think your questions are all right, but they should be answerable by
everyone who took the course. You could certainly devise more realistic
questions where the student had to extract the formalized problem first,
but that is not what I am after. Asking questions in the first place is
a hindrance; give your students the opportunity to show what they know
by acting in their environment. Offer interesting projects, give
interesting problems to solve, and give your students the opportunity
to teach, as this makes them aware of their knowledge.

Your observations in the classes might be some evidence of the current
degree of motivation, too. Regrettably, the situation here in Hamburg
is not much different.

>The students could ask questions. They do not.
>The students are asked questions. They do not want to answer.

>If the students are made to answer, even more of them tend to skip the
>class. (We could start a new thread discussing why. Here is one
>answer: they are afraid of being wrong and letting everybody know
>it. "It is better to keep quiet and let everybody THINK you do not
>know the answer than to open your mouth and make sure they KNOW that."
>I believe this shyness (?) among us Finns has been discussed in some
>newsgroups -- maybe in soc.culture.nordic.)

>Conclusion: direct interaction (with too limited resources) does
>            not work (with 500 Finns).

I believe there is much work to be done to get people back to learning,
and I think it is extremely necessary. Our societies need knowledgeable
people, not only in science but also in business and administration.
The current trend of cutting education down to the minimum, with the
promise of further education in the workplace, will leave people
with only very specialised knowledge, global understanding being
thought of as unnecessary ballast. There should be countermeasures.
Limited resources are to be fought against, and the low motivation of
students has to be raised.

eibo

--
eibo Thieme                             *    FB Informatik
eibo@fbihh.informatik.uni-hamburg.de    *    Universitaet Hamburg
..!uunet!mcsun!unido!fbihh!eibo        *    Schlueterstr. 70
PHONE: +40 4123-5660                    *    D-2000 Hamburg 13  (FRG)

juh@cs.hut.fi (Juha Hyvönen) (10/22/90)

From: anw@maths.nott.ac.uk (Dr A. N. Walker)
Subject: Re: Automatic checking of students' answers
Date: 19 Oct 90 10:36:05 GMT
+------------------------
! In article <eibo.656000376@rzsun3> eibo@rzsun3.informatik.uni-hamburg.de
! (Eibo Thieme) writes:
! 
! >   1. The set of questions askable is confined to the area
! >      of reproducing memorized facts and applying
! >      memorized rules.
!
!                         ... but this is not true. The difficulty
! lies in making the answers easily parsable.  I have been using such
! systems since 1970 to set and mark problems in Numerical Analysis.
! The questions were askable because they conformed to a simple template
! in which certain elements could be randomised.  What facts the students
! had memorised, or what rules they used was in no way a factor in
! the system, which worked because the computer was at least able to
! assess the answers.
!........................

In numerical analysis, the (unambiguous) answer to a problem cannot be
known if you do not know any method to solve it. In our case, the
"answers" are trivial (e.g., a sorted array). We are not interested in
the answer. We are interested in the method (e.g. quicksort). That
makes it more difficult because the representation of the *method*
must be parsable (and as natural for the student as possible).

+------------------------
! 	[Initially, the computer was used primarily to generate a
! "random" question sheet (for the student) and the corresponding correct
! answers (for me);  once the computer power became available, the
! whole system became interactive, and the student responses were
! assessed by the computer.  As the responses were always numbers
! or (occasionally) a menu selection, the computer either understood
! the response or could reject it as "ungrammatical".  All interactions
! were logged, and students could include comments/queries/complaints
! in the log for me to deal with.]
!........................

Your system seems to be similar to those described in papers that I
have found. They all seem to deal with cases where generating
tests is easy and checking only the answer is sufficient (mainly in
mathematics). I have found no references to automated interactive or
*non-interactive* testing that could deal with the method used in
getting the (already known) answer.

I believe that setting up a non-interactive testing system in
numerical analysis would be rather simple because there is (almost) no
syntax within the answers.

I have found references to papers (and read some of them) about
	automated test generation
	automated program marking
	interactive tests (with automated marking) 
	interactive programming tutors (self-paced learning)

(I am looking for more...)

By the way, I do not see much difference between interactive testing
and interactive tutoring in general. Interactive testing tools can be
used in (simple) tutoring if they provide instant feedback. And with
little effort, it should be possible to include automatic marking in
interactive tutoring tools.

+------------------------
! 	I wanted to find out whether the students could *do* NA.
! The course *also* (naturally) included theoretical knowledge,
! assessed in the traditional ways, but *practical* knowledge can
! often, with imagination, be assessed mechanically.
!........................

And we want to know whether the students can solve a problem using a
particular *method*. That is also practical knowledge. And you hit the
point with "imagination". That was one reason for me to post my
question [about automatic marking]: to use your imagination to help me
find good and suitable questions for automatic marking, and their
answer formats. Nobody has suggested a single one yet :-(

+------------------------
! 	     In a practical test, the method and the ingenuity
! are irrelevant.  The computer sees the results.  If I ask you to
! solve an equation, or to perform a quadrature, and you get the
! [unambiguous!] right answer in a reasonable time, fine, no matter
! how you do it.  If the answer is wrong, it's no mitigation that
! you used a very ingenious method.
!........................

In our case (a practical test), the results are irrelevant. We want
the computer to see the method. The syntax of representing the method
is the key issue.  And the answer is not necessarily unambiguous: one
(trivial) mistake does not mean that it is completely wrong.
Intelligently dealing with partly correct answers is perhaps the
hardest requirement to meet. The answer should therefore consist of
distinct steps. If the result of one intermediate step is wrong, it
should be taken as the basis of the next step, rather than the correct
result (of that step).
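
As a sketch of this "follow-through" rule (Python, for illustration;
'step' stands for one step of whatever algorithm the question is
about, and the one-point-per-step scoring is an invented placeholder):

    # Mark a list of submitted intermediate states.  Each state is
    # compared with the state the algorithm would produce from the
    # *student's own previous* state, so one early mistake costs one
    # point instead of invalidating everything that follows.
    def mark(submitted, inputs, step, start):
        points, previous = 0, start
        for state, x in zip(submitted, inputs):
            if state == step(previous, x):
                points += 1
            previous = state      # follow through from what they wrote
        return points

    # For the heap question, 'step' would insert one key into a heap
    # and 'inputs' would be the keys A N E X A M P L E.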

(Could your equations be solved in a single step?)

           / (.___o             ..
          /_/ ___/      Juha Hyvonen
        ! /  !
        !/ ) !          juh@hutcs.hut.fi
         ------

anw@maths.nott.ac.uk (Dr A. N. Walker) (10/26/90)

In article <JUH.90Oct22161834@hutcs.hut.fi> juh@cs.hut.fi
(Juha Hyvönen) writes:
>In numerical analysis, the (unambiguous) answer to a problem cannot be
>known if you do not know any method to solve it. In our case, the
>"answers" are trivial (e.g., a sorted array). We are not interested in
>the answer. We are interested in the method (e.g. quicksort). That
>makes it more difficult because the representation of the *method*
>must be parsable (and as natural for the student as possible).

	Then you must ask a different question.  If you want your students
to know about sorting, get them to write a sorting program, and test it.
Your automatic marker will put it through a collection of tests, time it,
check the output for correctness, and mark accordingly.  If you want them
to know *very* *specifically* about quicksort, then (a) you can get them
to write their answers in (eg) Pascal, and parse the results;  (b) you
can ask them for the state of the array after so-many partitions using
such-and-such pivots.
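
In outline, such a marker could look like this (a minimal sketch in
Python; the name student_sort, the number of test cases and the
pass criterion are all placeholders):

    import random, time

    # Run a submitted sorting routine over a batch of random test
    # cases, check each result against a trusted sort, and time it.
    def mark_sort(student_sort, cases=20, size=1000):
        passed, elapsed = 0, 0.0
        for _ in range(cases):
            data = [random.randrange(10**6) for _ in range(size)]
            t0 = time.perf_counter()
            result = student_sort(list(data))
            elapsed += time.perf_counter() - t0
            if result == sorted(data):
                passed += 1
        return passed, elapsed    # to be combined into a mark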

	In my NA system, some of the questions were of this type;  for
example, one might ask what approximation to a root was obtained by
three cycles of Newton-Raphson starting from x = 1.
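
For instance (a sketch in Python; the example equation x**2 - 2 = 0 is
made up here, since the real f(x) would come from the randomised
question):

    # Three cycles of Newton-Raphson, x <- x - f(x)/f'(x), from x = 1.
    def newton(f, fprime, x, cycles=3):
        for _ in range(cycles):
            x = x - f(x) / fprime(x)
        return x

    reference = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
    print(reference)    # 1.4142156..., to be compared with the
                        # student's answer within a tolerance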

>In our case (a practical test), the results are irrelevant. We want
>the computer to see the method. The syntax of representing the method
>is the key issue.

	Computer scientists, of all people, should know about syntax
of representing methods!  What else is a programming language for?

>Intelligently dealing with partly correct answers is perhaps the
>hardest requirement to meet.

	OK.  For example, my NA system looked to see if the student's
solution differed from the correct one in various standard ways --
wrong sign, out by a power of ten, out by an integer factor, out by
using degrees instead of radians, etc.  If you can guess at plausible
incorrect answers, you can look for them.
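
In outline (Python, for illustration; the list of suspect
transformations and the tolerance rule are assumptions in the spirit
of the description above):

    import math

    # Compare a numeric answer against the correct value transformed
    # in the standard broken ways, to report *how* it is likely wrong.
    def diagnose(student, correct, tol=1e-6):
        def close(a, b):
            return abs(a - b) <= tol * max(1.0, abs(b))
        if close(student, correct):
            return "correct"
        suspects = [
            ("wrong sign", -correct),
            ("out by a factor of 10", correct * 10),
            ("out by a factor of 1/10", correct / 10),
            ("degrees instead of radians", correct * 180.0 / math.pi),
        ]
        # (out by an integer factor could be tested similarly)
        for label, value in suspects:
            if close(student, value):
                return label
        return "wrong, cause not recognised"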

>			      The answer should therefore consist of
>distinct steps. If the result of one intermediate step is wrong, it
>should be taken as the basis of the next step, rather than the correct
>result (of that step).

	This is sometimes, but not always, possible.  For example, if
a sort step is followed by a merge step in which the input is assumed
to be sorted, then a sort failure is potentially disastrous.  There is
(perhaps) no way of properly assessing the merge unless the sort is
correct.

>(Could your equations be solved in a single step?)

	No.  (Can they ever?)

-- 
Andy Walker, Maths Dept., Nott'm Univ., UK.
anw@maths.nott.ac.uk

juh@cs.hut.fi (Juha Hyvönen) (10/31/90)

      [	This discussion is turning into one about the nature of
	learning and education as a process. Eibo seems to argue that
	the educational system (its practices) should be changed. My
	original question (about automatic generation of
	individualized homework questions and the checking of the
	answers) could be regarded as an attempt, on my part, to
	"patch" the existing practice where I have a chance to do so.

	I believe Eibo and I are talking about different things. My
	arguments would not be valid if the purpose were to change the
	system. They might be regarded as an attempt to defend the
	system. And I think Eibo's arguments are not valid in the
	context in which this discussion started (using the computer
	to help in checking the homework).

	Maybe we should discuss how computers should be used in
	education and how they should not be. Anyone want to start
	that? Eibo? Or is that old news to everyone (but me)?
      ]


From: eibo@rzsun3.informatik.uni-hamburg.de (Eibo Thieme)
+------------------------
! >To make sure everyone is doing their homework, it has to be checked.
! 
! Here we are on the verge of the philosophy of education. I don't
! believe it is sensible to force people to do their homework.
!........................

We are making them do their homework because we want them to pass the
course the first time they take the final exam. If doing homework were
voluntary, then most students would not do it, but everyone *will*
pass the exam, eventually -- even if they are not interested --
because they need to (in order to graduate).

Do you think it would be sensible to give some incentive to do their
homework? The points from homework could be added to the final exam
points. (Now the only incentive is that if you do not do your
homework, you do not pass the course. And those who do homework very
well get a bonus.) Do you think the final exam (in its traditional
form) is a sensible way to measure the students' skills?

+------------------------
!      You can only learn if you want to know.
!........................

You can (only) learn if you want to know *more*.

+------------------------
! If you force them to do their homework they will learn as much and as
! intensely as required to do it, instead of wanting to know ever more as
! they acquire a higher proficiency.
!........................

First, the student knows "nothing". How can he want to learn more?
Homework helps the student to learn something to start with. Then, if
he finds it interesting, he can study more. And how is the student to
know when he has reached even the required level? The time the
student can devote to one course is limited.

+------------------------
! There might be some people who would
! not lose their interest in learning, even under such hard conditions,
!........................

Hard? It is easier for the students to understand if they learn by
doing.

Are you saying that those who would *otherwise* be interested might
lose their interest? I do not agree. Helping the students to
understand should *increase* their interest in the subject because
they find out it was not as difficult as it first sounded.

+------------------------
! I don't want anyone to succeed at the university
! who needs to be pushed forward all the time.
!........................

We are not pushing them. We are guiding them to help them learn (by
doing). We provide them with hints about the minimal level everyone
is supposed to reach.

+------------------------
! >Presently, checking the homework is done by a person. Last spring,
! >over 500 students took part in the course. That meant that a total of
! >500 students x 5 parts x 4 questions = 10,000 answers had to be
! >checked (and the results registered). At the rate of checking one
! >answer a minute, it would take over 160 hours to check all the answers.
! >That is one working month! I think that the time would be better spent
! >teaching than doing a routine job that could be done by a computer.
!
! Whenever you discover that something is not operable, you should first
! think of it as wrong, instead of asking a computer to do what you can't.
!........................

From the students' point of view, the (current) homework system looks
like an improvement over the previous system. Last spring was
the first time we used homework questions. Before, we had programming
assignments. Some statistics on final exam results:

final exam
results		year 1989	year 1990
		-----------	-----------
 5 (excellent)	 24 (~17%)	 81 (~20%)
 4		 20 (~14%)	 72 (~17%)
 3		 23 (~16%)	 85 (~20%)
 2		 17 (~12%)	 64 (~15%)
 1		 18 (~13%)	 45 (~11%)
 0 (failed)	 38 (~27%)	 68 (~16%)
		-----------	-----------
total		140		415

The final exam is held four times a year. The statistics are from the
first exam held right after the course was over. (The students have
three attempts to pass the exam; actually four, because nobody is
counting.)

There are two things to note (year 1990):

	The number of students who took the exam right after the
	course was over was very high. Before, the number of students
	was more evenly distributed across the four exams (with the
	first two being the largest). Therefore, the students seemed
	to be more motivated than before.

	Students got better grades overall and especially the
	percentage of zeros dropped. (I do not have statistics from
	other exams, but the percentage of zeros has been somewhere
	between 25 and 35 %.)

In addition, there was a strong correlation between the homework
results and the final exam result.

Did the students really learn more or were they simply better prepared
to answer the final exam questions?

+------------------------
! I think your questions are all right, but they should be answerable by
! everyone who took the course. You could certainly devise more realistic
! questions where the student had to extract the formalized problem first,
! but that is not what I am after. Asking questions in the first place is
! a hindrance; give your students the opportunity to show what they know
! by acting in their environment. Offer interesting projects, give
! interesting problems to solve, and give your students the opportunity
! to teach, as this makes them aware of their knowledge.
!........................

Aren't the questions answerable?

[Begin the provocation...]

Interesting projects and problems, and teaching, require too much time
*from the students*, because everyone would have to (e.g.) teach every
algorithm in order to learn it. (If they do not have to teach every
algorithm, they are interested only until they have completed their
own part. I *know*.) And how do we give those 500 inexperienced and
unconfident students the opportunity to teach (= time and place)? How
do we evaluate those *10,000* lessons given by the students? (The
figure 10,000 is the number of answers to homework questions last
spring.) Let's say it takes 15 minutes to teach an algorithm. So, it
takes 15 working months to grade the students!!! And that has to be
done in one semester (3.5 months, 4 hours a week). Now we have 1.5
people teaching the course. That would have to be increased to 40!!!
Do you call that operable?

Yes, we could have the students grade the lessons themselves.

[...now back to less provocative discussion, I hope.]

We simply want to know whether the students can solve the problem
using a specific method or not. Do you think that is unreasonable?
The students are required to understand how all the different
algorithms work. It is *not* sufficient to know the properties of the
algorithms. The properties are *memorized facts* used to select the
right algorithm to solve an "interesting" (real) problem.

	To be able to select the right algorithm, you have to know its
	properties.

	To know the properties of an algorithm, you have to memorize
	them. (You have to remember them.)

	To understand a property of an algorithm, you have to
	understand how the algorithm works.

	To understand how an algorithm works, you either have to
	implement and test it or do it manually a few times.
	(Implementing requires more effort.)

	To understand why an algorithm has a certain property, you
	have to be able to do it manually. (To understand how the
	property was derived.)

	To be able to do it manually, you have to do it manually.
	(Confusing?)

	If you understand why an algorithm has certain properties, you
	can safely apply it in situations where it was not intended to
	be used. (Or nobody ever thought of using it there before.)

	If you understand how an algorithm works you can modify it to
	be used in situations where it was not intended to be used.

The students need to understand "why" in order not to make a wrong
decision (when selecting an algorithm in real life). If you know how
something is done it is easier to figure out why.

And even if you do not understand the "why" you will not forget the
"how" completely because you have done it at least once. If you later
need to understand the "why", you can probably figure it out because
you know the "how".

BTW, doing an algorithm manually is in effect the same as teaching the
algorithm to others! In both cases, you (should) show how the
algorithm works!

           / (.___o             ..
          /_/ ___/      Juha Hyvonen
        ! /  !
        !/ ) !          juh@hutcs.hut.fi
         ------