Negotiation spaces in human-computer collaborative learning




MEMOLAB


The task. The goal of MEMOLAB [Dillenbourg et al. 94] is for psychology students to acquire basic skills in the methodology of experimentation. The agents are assigned a goal, for instance “study the interaction between the list length effect and the semantic similarity effect”. The learners build an experiment to study this effect. An experiment involves different groups of subjects, each encoding a different list of words. An experiment is described by assembling graphical objects on a workbench. Each object associates a group of subjects, a task (e.g. read, listen, count down, free or serial recall, ...) and a material (a list of words that the fictitious subjects have to study). The system simulates the experiment. The learners can visualize the results and perform an analysis of variance.
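To make the workbench description concrete, here is a minimal sketch of how such an experiment could be represented. All names and fields are hypothetical; [Dillenbourg et al. 94] does not describe MEMOLAB's internal data structures.

```python
from dataclasses import dataclass, field

# Hypothetical representation of the workbench objects described above;
# MEMOLAB's actual data structures are not given in the paper.

@dataclass
class Material:
    words: list          # the list of words the fictitious subjects study
    homogeneity: float   # e.g. degree of semantic similarity of the words

@dataclass
class Event:
    group: str           # a group of subjects, e.g. "group-1"
    task: str            # e.g. "read", "listen", "count down", "free recall"
    material: Material

@dataclass
class Experiment:
    events: list = field(default_factory=list)

# A two-group design contrasting list length:
short_list = Material(words=["cat", "sun", "map"], homogeneity=0.2)
long_list = Material(words=["cat", "sun", "map", "ink", "oak", "cup"],
                     homogeneity=0.2)
design = Experiment(events=[Event("group-1", "read", short_list),
                            Event("group-2", "read", long_list)])
```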

The agents. The human agent interacts with a rule-based system which plays the role of an expert. The inference engine was modified in order to make the expert sensitive to the user's actions on the task interface (the graphical problem representation). The specificity of the design also concerns knowledge engineering. When a rule-based system reasons alone, the set of possible states is defined by the combinations of its own rules (and the problem data). In the case of collaboration with a human, the set of possible problem states covers the combinations of all commands of both agents. Hence, the rule base must include any intermediate step that the expert would normally skip but that the user could reach; the designer has to decompose rules into smaller chunks. Moreover, a learner action may transform the problem into a state which is not covered by the expert, because it does not belong to any sensible solution path. We therefore had to add sub-optimal rules and repair rules which enable the system to cope with these peculiar problem states [Dillenbourg et al. 95a].
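The following sketch illustrates this knowledge-engineering point in a generic production-rule style. It is not MEMOLAB's actual rule language; all rule names and predicates are invented.

```python
# Each rule advances the solution by a single interface command, so that
# every intermediate state a human partner can produce is also a state
# the expert can recognize. Names and predicates are hypothetical.

def needs_event(state):
    return len(state["events"]) < state["groups_needed"]

def dangling_event(state):
    return any(e["material"] is None for e in state["events"])

def odd_material(state):
    # A state off every sensible solution path, e.g. the learner attached
    # a material whose features fit no planned condition.
    return any(e["material"] == "unplanned" for e in state["events"])

RULES = [
    ("create-event", needs_event, "create_event"),          # decomposed step
    ("attach-material", dangling_event, "assign_material"), # decomposed step
    ("repair-material", odd_material, "replace_material"),  # repair rule
]

def next_command(state):
    for name, applies, command in RULES:
        if applies(state):
            return name, command
    return None

state = {"groups_needed": 2,
         "events": [{"material": None}, {"material": "unplanned"}]}
print(next_command(state))  # -> ('attach-material', 'assign_material')
```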

Mode: Action.

Designing an experiment requires several dozen commands. Some of these commands are available to both agents: the commands to create, delete or modify an event. Modifying an event means changing its group of subjects, its material or its task. Negotiation does not occur below that level. For instance, if the system disagrees with the features of a material used in an event, it can create another material with different features and substitute it for the one it rejected. However, the machine agent cannot intervene while the human agent is creating this material (selecting the words, analysing its properties, typing the material name). In the discussion mode, there is no real negotiation: the expert provides an explanation on request, but the learners cannot express disagreement.

Directness: Low.

Negotiation is mainly indirect on the user's side, since any action the user performs on the task interface can implicitly initiate a negotiation of that action. Negotiation is more direct on the machine agent's side, since the machine explicitly expresses its agreement or disagreement to the user.

Table 2: Description of the negotiation space in MEMOLAB
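As a sketch of the action mode summarized in Table 2, the fragment below shows negotiation triggered only by a shared command, with the expert answering a disputed command by a counter-action at the same level. The diagnose policy and all names are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of action-mode negotiation: only commands shared
# by both agents can trigger it, and the expert answers a disputed
# command with a counter-action at the same level.

SHARED_COMMANDS = {"create_event", "delete_event", "modify_event"}

@dataclass
class Verdict:
    agrees: bool
    counter_action: str | None = None

def diagnose(command, args):
    # Stand-in for the expert's rule-based diagnosis of the user action.
    if command == "modify_event" and args.get("material") == "heterogeneous":
        return Verdict(False, "create and substitute an alternative material")
    return Verdict(True)

def on_user_command(command, args):
    if command not in SHARED_COMMANDS:
        return "no negotiation below this level"   # e.g. typing a word
    verdict = diagnose(command, args)
    if verdict.agrees:
        return "expert explicitly agrees"
    return "expert disagrees; " + verdict.counter_action

print(on_user_command("select_word", {}))
print(on_user_command("modify_event", {"material": "heterogeneous"}))
```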

Evaluation. The experiments revealed several problems connected with the action mode. The learner's actions are interpreted each time a significant command is performed. A difficult design question is precisely what should be treated as significant. For instance, when an agent creates a list of words, the actual commands for building this list are irrelevant; what matters are the intrinsic features of this list (e.g. its degree of semantic homogeneity) and which experimental group will study it. The issue is therefore: at what granularity do we compare the actions of the two agents, at every interface command (any click, any character, even inside a word?) or only at major interface commands? In MEMOLAB, the mapping concerns about 20 commands: for the expert, these are the commands which appear as conclusions of its rules. The answer is in the question: the mapping must be done at the level at which the designer wants the agents to negotiate. If the mapped commands are too elementary, there is nothing to negotiate, because individual commands do not carry enough meaning. Conversely, if the mapped commands are too large, two divergent commands may imply various types of disagreement. In MEMOLAB, some meaningful operations took the learner several steps, and when diagnosis occurred between the steps, the subsequent interactions led to misunderstandings between the agents.
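A minimal illustration of this granularity choice: diagnosis fires only on the commands that appear as conclusions of the expert's rules, while lower-level commands accumulate silently. The command names below are invented; the paper says only that MEMOLAB maps about 20 such commands.

```python
# Only "significant" commands, i.e. those appearing as conclusions of
# expert rules, trigger a comparison between the two agents' actions.
# Command names are hypothetical.

RULE_CONCLUSIONS = {"create_event", "delete_event",
                    "assign_group", "assign_task", "assign_material"}

def is_significant(command):
    return command in RULE_CONCLUSIONS

def replay(commands):
    buffered = []
    for command in commands:
        buffered.append(command)
        if is_significant(command):
            # Diagnose only here: the buffered low-level commands (how the
            # list was typed) carry no negotiable meaning by themselves.
            yield command, list(buffered)
            buffered.clear()

for significant, raw in replay(["click", "type_char", "type_char",
                                "select_word", "assign_material"]):
    print(significant, "diagnosed after", len(raw), "raw commands")
```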

The second main problem was that the expert was not flexible enough, because its problem-solving strategy was not sufficiently opportunistic (and in the action mode, negotiation is not separated from task-level reasoning). We re-designed the expert and made it more opportunistic: there were fewer cases where the expert disagreed not because the user's decision was unacceptable, but because it had itself previously taken another decision. Although collaboration was easier after the redesign, subjects complained about the lack of negotiation regarding decisions taken by the expert. The problem was due to the absence of action-mode negotiation of the expert's decisions. The most 'strategic' parts of the expert's reasoning (choosing the independent variables, choosing the modalities for each independent variable) were not negotiable, since they were not represented on the screen (in action mode, only displayed objects can be negotiated). However, these 'hidden' decisions inevitably came into focus when the agents disagreed. We should either include these decisions in the problem space (extending action-mode negotiation) or develop an interface for discussing them (extending the discussion mode); either way, the attempt is to fill the gap between the action mode and the discussion mode. Negotiation appeared to be too much centred on action and not enough on the underlying knowledge. Once again, a key design issue is deciding which subset of the task commands is available to both agents.
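The lesson can be restated in a few lines: since in action mode only displayed objects are negotiable, strategic decisions escape negotiation unless they are reified on the screen. The sketch below is hypothetical, not part of MEMOLAB.

```python
from dataclasses import dataclass

# Hypothetical illustration: in action mode, only displayed objects can
# be negotiated, so 'hidden' strategic decisions escape negotiation.

@dataclass
class Decision:
    label: str
    displayed: bool   # reified as an object on the workbench?

plan = [
    Decision("independent variable: list length", displayed=False),
    Decision("modalities: 3 vs 6 words", displayed=False),
    Decision("event: group-1 reads the short list", displayed=True),
]

negotiable = [d.label for d in plan if d.displayed]
hidden = [d.label for d in plan if not d.displayed]
print("negotiable:", negotiable)
print("escape negotiation:", hidden)
```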

We also experimented with MEMOLAB with two human agents working only through the task interface (two people in front of a single machine). It appears that this type of negotiation should not be modelled at the rule level, but below the rule level, i.e. as a negotiation of the scope of the variables in a rule [Dillenbourg, to appear]. We should develop inference engines in which the instantiation of variables is not performed internally, but negotiated with the user.
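A speculative sketch of what such below-the-rule negotiation could look like: before firing a rule, the engine exposes the candidate bindings of a variable and lets the human agent narrow or override the choice. All names are hypothetical.

```python
# Instead of instantiating a rule variable internally, the engine
# presents the candidate bindings and negotiates the choice with the
# user. Everything here is a hypothetical sketch.

def candidate_bindings(state):
    # e.g. ?group may bind to any experimental group not yet assigned
    return [g for g in state["groups"] if g not in state["assigned"]]

def negotiate_binding(variable, candidates, ask_user):
    proposal = candidates[0]            # the engine's own preference
    choice = ask_user(variable, candidates, proposal)
    return choice if choice in candidates else proposal

state = {"groups": ["group-1", "group-2", "group-3"],
         "assigned": ["group-1"]}

binding = negotiate_binding(
    "?group",
    candidate_bindings(state),
    ask_user=lambda var, cands, prop: "group-3",  # stands in for a dialogue
)
print(binding)  # the user narrowed the variable's scope to group-3
```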

