More content: http://mapleta.can.nl/classes/kamminga/
Tested via: experiences from earlier workshops (Heck, 2004)
Screenshots
Digital Mathematical Environment
Name: Digital Mathematical Environment
Date: 20080229
Version: Unknown
Key words:
This evaluation is based on (past) experiences from the Galois and Sage projects. Several tests were made in a special environment for secondary and higher education topics (http://www.fi.uu.nl/dwo/voho). After registering, the tests become available.
School login: vo-ho
Student password: passw_leerling_vo
Screenshots
Activemath
Name: Activemath
Date: 20080229
Version: 1.0
Key words:
http://www.activemath.org/
Activemath is a comprehensive tool that offers much more than the focus of our research (a tutorial component, a mathematical learner model).
Screenshots
STACK
Name: STACK
Date: 20080229
Version: 2.0
Key words:
http://www.stack.bham.ac.uk/
The new version 2.0 of STACK requires Moodle. Installing STACK is quite difficult. The possibilities of STACK were also evaluated using our knowledge of the earlier version 1.0 and the Moodle quiz module. A lot of higher-education material is available.
Screenshots
WIMS
Name: WIMS
Date: 20080217
Version: 3.57 (3.62 released)
Key words:
http://wims.unice.fr/
Exercises can be made using Createxo. Power modules are also possible, but require more administrator rights. For Dutch content: http://wims.math.leidenuniv.nl/wims -> studentenbereik -> Gestructureerde aanpak algebraische vaardigheden ("structured approach to algebraic skills"; administrator: Relinde Jurrius).
Username: test, password: gaav
Screenshots
Appendix E
A more detailed rationale for every score can be found in the worksheet matrix_tools_290208.xls.
Legend: Wm = WIMS, S = STACK, M = Maple TA, D = DME, Wr = Wiris, A = Activemath; Wgt = weight.
A key aspect of our research is formatively assessing symbol sense by using an ICT tool. Therefore, even if an item is valuable in itself, the weights were determined with this focus clearly in mind.

Item                                  Wm   S   M   D  Wr   A  Wgt
1   Representational                   3   3   3   5   4   3    5
    Here we mean both that the mathematics behind the tool should be sound and that content can be displayed in several representations (Van Streun, Janvier).
2   Randomization                      5   5   5   5   1   3    4
    One strong point of computer tools is that they can randomly generate exercises (see the sketch after the table).
3a  Multisteps: within question        4   2   2   5   1   3    5
    Asking for one answer is one thing; being able to find your 'own way' towards an answer is considered a strong point.
3b  Multisteps: between questions      2   5   2   2   1   2    3
    The ideal tool should be able to connect several exercises into one, making it possible to create more complex (type C) questions.
4   Integrated CAS                     4   5   5   3   5   5    3
    An integrated CAS is a plus, but we should keep in mind that our research focuses on relatively simple mathematics.
5   Teacher, authoring                 2   3   3   4   1   2    5
    As Sangwin stated, teachers are 'neglected learners'. Ease of authoring should be an important variable.
6   Student, usage                     4   2   3   5   5   3    5
    Of course, ease of use for students should also be taken into account.
7   Registration: answers              3   5   4   4   1   2    5
    We want a tool to store all answers a student enters; storing wrong answers as well is a plus.
8   Registration: process              1   2   2   5   1   2    5
    To gain insight into WHAT a student has done right or wrong, one has to know the 'way to an answer'.
9   Possibility of modes               5   5   5   5   1   1    5
    In a didactical scenario, both practicing and testing play a role. Providing these modes is part of formative assessment.
10a Feedback: process                  1   1   1   3   1   2    5
    Providing feedback on the way a student answers a problem could be part of formative assessment. Here we distinguish general remarks (which we see as hints) from actually saying something about the process.
10b Feedback: answer                   3   5   3   4   3   5    5
    The possibility of giving more feedback than right/wrong is deemed valuable. We score whether it is possible and how easy such questions are to author. Feedback on 'in-between' answers is seen as a plus.
10c Feedback: global                   2   2   2   2   1   4    5
    An overall view of one's mastery of a subject is seen as global feedback. It is valued slightly lower than the other feedback types.
11  Hints                              4   4   4   2   1   5    3
    Feedback can also be provided in the form of hints.
12  Review mode                        1   3   4   5   1   1    5
    Essential for formative assessment, as a student can examine his/her answers and mistakes, and subsequently learn from them.
13  Question types                     3   2   5   1   1   3    2
    Providing a variety of question types makes a tool more versatile. As an item it is not as important as the others.
14  Scoring                            3   4   3   3   1   1    2
    We see scoring as a way to motivate students; this runs contrary to the literature on formative assessment.
15  Question management                3   4   5   3   1   2    3
    Being able to copy, delete, and recycle questions is an important aspect of practical tool use.
16  Use of standards                   2   3   4   4   3   4    3
    Sharing content (formulas, questions, and whole packages) is easier when standards are used.
17  Available content                  4   2   5   4   3   2    3
    The availability of existing content.
18  Cost                               5   5   1   3   2   5    2
    The pricing of a tool.
19  License and modifiability          5   5   1   3   2   4    2
    We value open source higher than closed source, because open code is easier to adapt.
20  Technical requirements             1   1   3   4   4   2    2
    We score the technical requirements of a tool, drawing on our experiences installing it as well as previous experiences with the technologies used.
21  Continuity                         2   1   4   3   4   3    4
    It is very important that teachers can rely on a tool being supported for some time to come. This score reflects that.
22  Languages                          5   2   2   3   4   4    1
    Is the tool available in more than one language?
23  Stability and performance          4   2   3   4   4   2    2
    Being able to use the tool in classroom practice, with a fair number of students, is important. This score is also based on earlier experiments with the tools.
24  Looks                              2   1   3   4   4   3    1
    By this we also mean the structure a tool provides.
25  Integration with VLE               3   5   5   5   3   3    2
    3 = exists but very experimental, 5 = exists and works.

SCORE (weighted total,               285 309 318 374 204 271
sum of weight x score)
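To make two of the scored items concrete, the sketch below illustrates randomization (item 2) and CAS-backed answer checking (items 4 and 10b) in Python with SymPy. This is purely our own illustration of the underlying ideas: none of the evaluated tools is implemented this way, and all function names in it are invented.

```python
# Minimal sketch of two ideas scored above: random generation of exercises
# (item 2) and CAS-backed answer checking (items 4/10b). Illustration only;
# none of the evaluated tools works like this internally.
import random

import sympy as sym

x = sym.Symbol('x')

def make_exercise():
    """Randomly generate an 'expand the brackets' item with its model answer."""
    a = random.randint(2, 9)
    b = random.randint(2, 9)
    question = (a * x + b) ** 2        # e.g. (3*x + 5)**2
    answer = sym.expand(question)      # e.g. 9*x**2 + 30*x + 25
    return question, answer

def check(student_input, answer):
    """Accept any algebraically equivalent form, as an integrated CAS allows."""
    try:
        student = sym.sympify(student_input)
    except sym.SympifyError:
        return False                   # unparsable input counts as wrong
    return sym.simplify(student - answer) == 0

question, answer = make_exercise()
print(f"Expand: {question}")
print(check(str(answer), answer))      # True: the model answer itself
print(check("x**2", answer))           # False: not equivalent
```

The randomization makes every student's item different while the equivalence check still accepts any correct form, which is exactly why these two items were weighted as they were.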
Appendix F
Note: these screenshots are from the DWO, the Dutch equivalent of the DME.
Here we see the difficulty of ‘open’ questions. Just providing the answer is easy enough, but how can we make sure that partially correct answers are graded as such?
The solution in this case was to add this possibility to the answer space. However, this somewhat limits the ‘open’ character of the question, as the student has to provide at least that answer to get full marks. Rewriting both sides without brackets is possible. The question remains whether one should give a hint.
This implementation of the second question relies on the expression being interpreted as an equation. It remains to be seen whether other expressions with v= are graded as correct (see the sketch at the end of this appendix).
In question 3, substitution is possible. Circularity (in the form of rewriting the left side of the equation) is possible, and can also be awarded points.
Here we encounter some difficulties modelling the open character of the question. We could force an expression with n to be used, but the whole point is that a student should be able to choose his/her own variable(s).
Using hints and a more closed instruction could also help.
Integrals can be computed using Mathematica in the background. However, no specific function is given here, as the characteristic holds for all functions.
Using the DME/DWO in a more limited way, asking for only one answer instead of intermediate steps, is also an option.
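The grading issue raised above (accepting any correct expression with v=, and awarding points for circular rewritings) can be made concrete with a small sketch. The grading logic below is hypothetical and written in Python with SymPy; it is not the DME's implementation, and the model answer v = 5t + 3 is an invented example.

```python
# Hypothetical grading logic for the 'v =' question discussed above.
# Not the DME's implementation; the model answer is an invented example.
import sympy as sym

t, v = sym.symbols('t v')
MODEL = sym.Eq(v, 5 * t + 3)

def grade(student_input):
    """Full credit for any equation equivalent to the model and solved for v;
    partial credit for an equivalent but circular/unsolved rewriting."""
    lhs_text, sep, rhs_text = student_input.partition('=')
    if not sep:
        return 0.0                     # not an equation at all
    try:
        lhs = sym.sympify(lhs_text)
        rhs = sym.sympify(rhs_text)
    except sym.SympifyError:
        return 0.0
    # Same solution for v as the model answer?
    if sym.solve(sym.Eq(lhs, rhs), v) != sym.solve(MODEL, v):
        return 0.0
    return 1.0 if lhs == v else 0.5

print(grade("v = 5*t + 3"))   # 1.0
print(grade("v = 3 + 5*t"))   # 1.0  (equivalent form, normalized by the CAS)
print(grade("v - 3 = 5*t"))   # 0.5  (correct, but not solved for v)
print(grade("v = 5*t"))       # 0.0  (wrong)
```

A real tool would also have to decide how strictly 'solved for v' is interpreted; the sketch only shows the equivalence check and the partial-credit decision.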
References
1 Information and Communication Technology. When I use "tools" I mean "ICT tools".
2 Digital Mathematical Environment, http://www.fi.uu.nl/dwo/en/
3 Wiskunde En Les Praktijk ("mathematics and teaching practice"), a Dutch project aimed at using applets in an algebra curriculum