ReCALL · Volume 10 · Number 1 · May 1998
CONTENTS
Where research and practice meet
Françoise Blin and June Thompson
3
Address by the Minister for Education and Science
M. Martin T.D.
5
KEYNOTES
Where do research and practice meet? Developing a discipline
N. Garrett
7
Technology and universities: context, cost and culture
C. Curran
13
Puissance du binaire, créativité du synaptique
M. P. Perrin
21
SPECIAL TRIBUTE
The language learner and the software designer
S. Myles
38
SELECTED PAPERS
Does computer-mediated conferencing really have a reduced social dimension?
T. Coverdale-Jones
46
Virtual language learning: potential and practice
U. Felix
53
Breaking down the distance barriers: perceptions and practice in technology-mediated distance language acquisition
M. Fox
59
Learning to learn a language – at home and on the Web
R. Goodfellow and M.-N. Lamy
68
Les outils de TALN dans SAFRAN
M.-J. Hamel
79
Two conceptions of learning and their implications for CALL at the tertiary level
M. Levy
86
Designing, implementing and evaluating a project in tandem language learning via e-mail
D. Little and E. Ushioda
95
Wintegrate? Reactions to Télé-Textes Author 2, a CALL multimedia package
L. Murray
102
Using the Internet to teach English for academic purposes
H. Nesi
109
The ‘third place’ – virtual reality applications for second language learning
K. Schwienhorst
118
Seminar on research in CALL
D. Little
127
President’s Report
G. Davies
129
CILT Research Forum
A. Jamieson
133
Software Review
PROF (Practical Revision Of French)
136
Diary
142
ReCALL
CTI Centre for Modern Languages
Director: Professor Graham Chesters
Centre Manager: June Thompson
Information Officer: Jenny Parsons
CTI Centre for Modern Languages, The Language Institute, The University of Hull, Hull HU6 7RX, UK.
Tel: +44 (01)482 466373/465872
Fax: +44 (01)482 473816
Email: [email protected]
Internet: http://www.hull.ac.uk/cti
SPECIAL ISSUE
Where Research and Practice Meet
Selected papers from EUROCALL 97
Dublin City University, Dublin, Ireland, 11–13 September 1997
Edited by Françoise Blin and June Thompson

EUROCALL
Subscription rates for 1998
Individual: £30.00
Corporate: £80.00
Commercial: £300.00
Further details from the above address.
ReCALL Journal
Advertisement rates*
Full page: £150
Half page: £100
Quarter page: £70
Inserts: £150 (per thousand, single A4 sheet)
(*reductions for EUROCALL members)
© The CTI Centre for
Modern Languages,
University of Hull, UK
ISSN 0958-3440
Journal Production Management:
Troubador Publishing Ltd
PO Box 31, Market Harborough, Leics LE16 9RQ, UK
Tel: +44 (01)858 469898
Fax: +44 (01)858 431649
Email: [email protected]
Printed by: Selwood Printing Ltd, West Sussex, UK
Following the special issue CALL – The Challenge of Grammar (Vol 9 No 2 November 1997), this extended issue also
constitutes a new departure for ReCALL. By devoting the May
issue each year to selected papers from EUROCALL conferences, it is hoped that such papers, representing the latest
developments in the field, will thus be made available to a
wider audience.
EUROCALL 97 was held in Dublin, Ireland, at Dublin City
University and was attended by over 320 participants from all
continents. The format and programme of the conference were
designed to facilitate the meeting of research and practice, of
researchers, developers and teachers. Plenary and parallel
paper sessions, poster and show-and-tell sessions and a series
of seminars contributed to stimulating exchanges which
showed the continuing search for excellence in research standards, in software development and in teaching and learning
practices.
In preparing this publication, the academic panel was faced
with the extremely difficult task of selecting one third of the
papers submitted. Those that have been retained were considered representative of the issues debated during the
conference. Others will be published in later issues of
ReCALL.
This issue opens with the address given by Mr Micheál Martin, T.D., Minister for Education and Science, at the magnificent state reception which he hosted for the conference participants at Dublin Castle on 11 September 1997. The keynotes are then presented in the order they were given, followed by selected papers in alphabetical order, David Little's report on a seminar on Research in CALL held on the last day of the conference, and the President's Report to the AGM.
Finally, we would like to thank all those who contributed to
the success of EUROCALL 97: Mr Micheál Martin, T.D.,
Minister for Education and Science; Dr Daniel O’Hare, President of Dublin City University; Mr John Hayden, Chief Executive/ Secretary of the Higher Education Authority; Dublin City
University staff who helped us with the logistics; the organising committee, especially Jane Fahy, Dwain Kelly and Eileen
Colgan; and last but not least, Michael O'Sullivan and his
team of student helpers.
Françoise Blin and June Thompson
ReCALL 10:1 (1998) 5–6
Address by the
Minister for Education and Science
Mr Micheál Martin, T.D.
At Dublin Castle, 11 September 1997
Ladies and Gentlemen,
It gives me pleasure to welcome this evening
so many distinguished international scholars
to this state reception in honour of EUROCALL. In particular I am very pleased to be
able to acknowledge the role of Dublin City
University in this conference. DCU is one of
our most innovative and dynamic institutions
and I think it is very fitting that they should be
involved in an endeavour such as this.
I was very interested to read details of your
conference programme and I would like to
stress how I believe that the issues which you
are discussing are very important. Languages
were once solidly placed in the domain of liberal arts education, and treated mainly as a
door to the philosophy and literature of other
nations. The study of languages is increasingly
approached as a tool of communication and
learning.
The main aims of EUROCALL very much
coincide with the educational objectives of
this government. These include the promotion
of the use of foreign languages at all stages of
the education system and increasing the use of
technology in the learning environment.
Two of the aspects of education that I have been trying strongly to promote during my very short period as Minister are language learning and technology.
I consider that language-learning is of vital
importance for our young people, many of
whom deal on a daily basis with people who
do not speak English. An example of this is
the tele-services industry which provides
employment for large numbers of linguistically competent school-leavers. This industry
will continue to grow rapidly in the future. In
recognition of this I have recently announced
a tele-services initiative which has put in place
for the current school year a major expansion
in courses available throughout the country.
I am determined that our record of expertise in languages will continue to improve.
Like most countries whose principal language
is English, we have suffered in the past from a
degree of insularity. One of my aims is to
increase the development of oral skills in language learning in schools. This year, for the
first time, up to 40% of marks in our terminal
school exam were awarded for oral and aural
competency. At present, I am initiating a pilot
project for the teaching of modern languages
at Primary School level. I intend that the new
technologies will be employed in this project,
where appropriate, in order to enable pupils to
engage easily in transnational communication.
In the area of computer expertise, Ireland,
although the home of many international computer companies, has not been to the forefront
in the use of technology in our schools. This
government is determined to tackle this situation. We intend to ensure that all pupils in our
schools will shortly have access to computers
and have the opportunity to become skilled in
their many uses. Major advances in software
development have already started to make a
contribution to the classroom learning environment. I believe that Ireland can, with serious commitment, become a leader in this area
and it is my intention to ensure that support is
available to encourage and promote innovation in computer learning aids.
Your programme over the next few days is
fascinating to read. Not only will research and
practice meet, but you will be bridging the gap
between the formerly polarised worlds of liberal and vocational education.
I am delighted that Ireland is so well represented in EUROCALL and on its executive
committee. We are a small country which is on
the periphery of Europe and we need all the
contacts possible with our fellow Europeans
and with the greater world beyond Europe. It
is a source of great satisfaction to me as Minister that Dublin City University, your host
institution, is in the vanguard of innovation in
language-teaching approaches and in the use
of technology for the teaching of languages. I
wish you success in your programme and the
sharing of your research with your colleagues
from other institutions. I note that this is one
of the largest EUROCALL conferences to
date. I hope that your work will go from
strength to strength.
ReCALL 10:1 (1998) 7–12
Where do research and practice
meet? Developing a discipline
Nina Garrett
CTW Mellon Project for Language Learning and Technology, CT, USA
The topic of EUROCALL ’97, ‘Where
research and practice meet’, is perhaps the
most provocative and complex one in all of
higher education. It would be interesting to
recast the assertion that EUROCALL ’97 is
where research and practice meet as a more
probing question: Where do research and practice meet? Unarguably, yes, at this conference:
our program – the variety of presentations on
practice, and on the research done on practice,
and on theoretical research and the nature of
research – makes that clear. But it is not only at
this yearly gathering; research and practice
also meet in the organization of EUROCALL,
and for many of us in our daily professional
lives. But we could push the question further
and ask: do research and practice meet regularly and inherently in the profession of
CALL? In the multimedia classroom or language center? In our materials? In cyberspace?
In the mind of the learner? To most of those
questions we would probably have to give,
reluctantly, a negative answer. The integration
of research and practice is not yet common in
our language technology centers, which exist
almost entirely to deliver instruction. Nor yet
in most of the technology-based materials we
use, very few of which are set up to collect any
data on what students do with the materials.
Certainly not in the minds of our learners,
almost none of whom are able to introspect or
ask questions about (i.e. do research on) their
own learning.
We might talk at length, another time,
about how to develop positive answers to any
of those questions.
My hope today is to argue that we as
CALLers, and even more so, we as language
professionals, need to develop a substantive
research agenda as an integral and constant
part of our work, and that that need is
extremely urgent, indeed critical, for us and
for language teaching generally. But we know
that there are several barriers to the development of a serious research agenda for CALL.
The first problem is that the first kind of research that springs to people's minds when they connect 'research' with 'computer-assisted' is efficacy research, studies of the efficacy of using technology in language teaching. We are by now aware that it is
extremely difficult to do sound large-scale
efficacy studies that attempt to ask whether
using computers is good for language learning
– because there are so many uncontrollable
variables in methods studies, because the use
of technology is not itself a method, and
because the efficacy of the outcome depends
much more on the content of the pedagogy
than on its delivery platform. Parenthetically, I
worry that much of our efficacy research has
been rather defensive in tone: “Let us show
you that technology really does work, that students really can learn this way just as well as
in the conventional classroom setting.” CALL
studies often collect data to confirm that learning (as we have traditionally defined it) does
happen, that technology can fit into already
accepted pedagogical paradigms – rather than
setting out to demonstrate convincingly that
students learn differently. This is of course not
to say that efficacy studies are not possible or
valid or worthwhile, when they are appropriately constrained; I note several very interesting presentations on our program.
A second barrier to development of a solid
research agenda is the growing sentiment
(again based on the equation of 'research' with
‘efficacy studies’) that it doesn’t matter
whether or not we can show that ‘it works’
because the expansion of technology into our
curricula is inevitable, whether for good reasons or bad. And the third barrier is that – as
many of us know to our cost – research in
CALL ‘doesn’t count’ towards promotion or
tenure, because it’s seen as merely pedagogical research, part of one’s teaching, not real
research. Even in small elite liberal arts colleges, where the quality of teaching is taken
very seriously, teaching counts less than
research, teaching with technology is seldom
counted much, and research on learning with
technology is barely understood.
Why, then, do we urgently need to develop
a research agenda for CALL? It’s not only
because we need to get more respect, as individual professionals and collectively as a profession, and because doing more research is the
surest way to get that respect. I’m arguing
something much more presumptuous and
grandiose. I believe that technology is going to
define language teaching, not the other way
around – and by this I mean not just that the
technology will inform language learning and
teaching but that it will actively shape it. I
know that some of you will be thinking, “Sure,
sure, we’ve been hearing that since the first
uses of computers in language education, but it
hasn’t happened.” But I’m not making a claim
here for the wonderfully transformative potential of technology to lift language study into
dynamic life and compelling cultural authenticity and individually tailored experience; this
isn’t another bit of hyperbole about the
promise. This is a warning. Many non-CALL
language professionals will take it as a threat,
though I don’t see it that way myself, and I
hope that you won’t either. I believe that technology is likely to dominate language teaching,
and very soon, and I’m arguing that a substantive CALL research agenda is the only factor
that will allow us to shape that domination for
the good of the profession. Without the
research that only we can undertake, the effect
of technology on language teaching and learning is all too likely to be disastrous.
As we all know, there are good reasons and
bad arguments for the rapid expansion of technology use in education generally and in language education in particular. The strongest
bad argument is that institutions will save lots
of money because technology use will allow
them to cut faculty lines.
I won’t waste your time by refuting that
argument; you know its falsity as well as I do.
There are others: we have to use technology
because all the other institutions are doing it
and we don’t want to look old-fashioned or
cheap; students (or their parents) demand it as
proof that the education we offer is up-to-date.
There are plenty of good reasons too, especially in language education: making language
learning more dynamic and interactive, creating more opportunities for communication,
allowing more access to culturally authentic
materials, making students more responsible
for their own learning, etc. Some of the arguments that we think of as good are based on as
little solid evidence as the bad ones – and
many of both the good reasons and the bad
ones are predicated on limited traditional
notions of how adult classroom second language acquisition takes place and thus of what
technology can do to support it. But my point
is that the goodness or badness of the reasons for technology use is by now almost irrelevant. The pressures for the use of technology
in education are quite simply irresistible, and
there’s not much point in arguing against it, in
our field or in any other. It’s growing, and it
will inevitably continue to grow exponentially.
On the face of it, that growth might seem
to be good for us as CALL specialists. We’re
the ones who are already in the know; we’re
prepared to make a living in this brave new
world, and to show those not yet involved how
it works. Can we happily ride the rising wave
of increasing demand for our expertise and our
materials, continuing to expand on the practice we have been building? I think that would
be dangerous. If we continue to pursue our
practice as we have done up till now, with
research a relatively peripheral concern for the
field of CALL as a whole, economic pressures
may push that wave so high and so fast that it
will crash down on us, and we’ll drown and be
swept out to sea in the undertow. As technology is forced onto those who don't understand
it and don’t welcome it, we are likely to experience a backlash: we may be increasingly
resented and our work will be increasingly
attacked as lacking in pedagogical validity....
unless we move rapidly to pre-empt that criticism by developing the research that will not
only justify our current practice but will also
open up new approaches to language teaching.
We are the only ones who can control and
shape technology use, so that when the tail of
technology starts wagging the dog of conventional language pedagogy it will be to our students’ advantage, and that of language education as a field.
We have to use our practice – our day-to-day integration of technology, our understanding
pedagogical goals and technological implementation – to drive the redefinition of language teaching as a whole in ways that are
both valid and acceptable to teachers.
CALL practice, software design and implementation in pedagogy, has taught us how to
translate teaching or tutoring behaviours into
learning activities, how differences in learning
styles affect the ways students approach these
activities, how students actually understand
and misunderstand what and how we teach,
how students’ misconceptions about language
learning make it difficult for them to learn
what we want them to learn. We’ve had to
work out exactly what we tell the computer to
do in order to get the students to do what we
intend; to the extent that we’ve used technology not to replicate conventional pedagogy, at
least, we’ve had to work out how we relate
materials design to pedagogical goals, though
we haven’t made much progress in how to
assess learner behaviors to gauge whether
those goals have been met. To design software, or computer use of on-line materials,
requires that we make conscious structured
decisions about all of these issues, which most
non-CALL language teachers seldom have to
do consciously.
In conventional classroom practice, good
teachers make such judgments intuitively and
well, but they are seldom able to correlate their
students’ successful or unsuccessful learning
with those ad hoc decisions except in anecdotal or experiential ways. This means that few
teachers who do not use technology have much
research basis for defending their practice in
the face of, for example, cuts in class contact
hours, pressures to increase class size – or the
threat of distance learning replacing them.
In the early days of CALL many teachers
resisted computers because they were so limited that their pedagogical capabilities seemed
trivial. Now, conversely, the interactive potential of multimedia and of the Web rouses the
fear that the technology can do too much, so
that the threat of its being used by budget-fixated administrators to replace teachers is in
fact a real one. CALL practitioners know that
technology does not, cannot, replace teachers,
though it can certainly change the role of the
teacher. If we want to prevent administrators
from using technology wrongly, we have to do
the research on which we base our objections.
Our non-technology-using colleagues only
sound defensive when they try to argue against
technological incursions into their practice.
The basic research capability on which this
kind of research agenda depends is tracking
software. The computer’s ability to collect
data on what students do with technology-based language learning materials while they
are in the act of working with them (be they
pedagogical software, web use, or network-based communication) gives us for the first
time an instrument that will track the learning
process rather than assigning a score to the
outcome of that process in a test. In this way
research on learning – whether that is pedagogical research or second language acquisition research – is fused with, inseparable from,
the practice itself. The data we can gather this
way can be used in a wide variety of different
efforts. (1) They will allow us to do formative
evaluations on the structure of our software:
do students make use of the options we provide? Do they seem to be learning what we
intend them to learn from this activity? Do
students of different ability levels or different
learning styles use our materials differently,
and if so do we need to recognize those differences in design options? (2) They allow us to
investigate our non-technology-based teaching
as well, to do straightforward pedagogical
research. Does the way students carry out their
homework assignments show that they have
absorbed what we intended them to in class?
Do our methodological assumptions hold
water? (Amanda Brooks, who did her Ph.D. in
French at Vanderbilt University, used the data-collection built into Système-D to show that
students who were strongly encouraged and
taught to ‘think in French’ did little if any better in compositions than those who were
explicitly encouraged to ‘think in English and
translate into French.’ You might be taken
aback at her conclusions, but the design of the
research opens up great possibilities in many
directions.) From this perspective we can see
how ‘computerizing’ a conventional teaching
activity or pedagogical technique doesn’t necessarily change it but it does give us a way to
do research on it.
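The tracking capability described here can be sketched in a few lines of code. This is a minimal illustrative sketch, not the design of any actual CALL package mentioned in this paper; the class, event names, and fields are all hypothetical:

```python
import json
import time

class InteractionTracker:
    """Hypothetical sketch of 'tracking software': it records timestamped
    learner events so that the learning process itself, not just a final
    test score, is available for research."""

    def __init__(self, student_id):
        self.student_id = student_id
        self.events = []

    def log(self, activity, action, detail=""):
        # One record per learner action: which activity, what was done, when.
        self.events.append({
            "student": self.student_id,
            "activity": activity,
            "action": action,
            "detail": detail,
            "time": time.time(),
        })

    def export(self):
        # The raw event stream a researcher would later analyse.
        return json.dumps(self.events)

# A student consults a gloss, then answers an exercise item.
tracker = InteractionTracker("s001")
tracker.log("video-gloss", "opened_gloss", "word=prendre")
tracker.log("exercise-3", "answered", "item=2; correct=no")
print(len(tracker.events))  # 2
```

Even so small a log supports the formative questions raised below: which options students actually use, and whether learners of different styles take different paths through the same materials.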
But (3), we also need to collect data on
what students do in the new kinds of learning
environments that technology offers them,
data on how sophisticated technology use
changes pedagogical practice and changes
learning. We as CALL enthusiasts are convinced that multimedia enormously enriches
language learning, but we still define and
assess that learning (if at all) with old paradigms. It's one thing to claim that multimedia
extends and enhances the learning of culture,
even though we don’t have very good ways of
testing that assertion. More significantly, it
seems to me, multimedia – the fusion of text,
audio, and video – offers the possibility of
exploring the relative effect on language learning of reading text, listening to the speech
stream, and viewing language in its full social
context, including body language. What does
multimedia do for the learning of language?
We have never before been able to investigate
how these abilities support each other, because
we have never before had the capability of
selectively manipulating our materials to
emphasize one or the other source of input and
measure the effect. This means that novel
unprecedented teaching practice can lead to
the development of quite new areas of second
language acquisition theory. Charles Ferguson
once emphasized how important it is that both
theorists and practitioners understand that the
flow of wisdom is not unidirectional: it is not
always theory that informs practice.
We must go further. By tracking what students do in the whole range of language learning environments and materials, traditional
and new, we can leap out in front of current
practice to do what I spoke of earlier – take
charge of the ways in which technology
shapes language education. When distance
learning is brought up as a way to ‘extend’
language learning opportunities, most language teachers display a kind of knee-jerk
reaction: “That’s not appropriate for language
learning.” We stress the need for personal contact with real live speakers of the language, for
the personal specifics of body language and
facial expression, for full interactivity at the
speed of real conversation, etc., and the ways
in which distance learning diminishes these
make it unacceptable to us. Don’t get me
wrong, I would agree with all those objections
if we were faced with a flat either-or decision:
conventional classroom-based teaching with a
teacher, or language pedagogy delivered via
videoconferencing with no actual contact
hours. But in fact we could insist on a rational
combination of classroom and distance teaching. After all, we’ve operated for years with a
combination of two environments: students
spend some time in class with teachers and
some time working on their own, doing homework. In recent decades we have been able to
enhance some of the working-alone time with
audio. Now we can not only enhance the
working-alone time with video as well, and
with fully integrated pedagogically glossed
video, audio, and text; we can also offer outside-of-class communicative opportunities
with e-mail, listserves, and chat. We don’t
object to those as additions to our on-campus
classroom practice, and surely those constitute
‘distance learning’ just as much as does videoconferencing, even when the distance is less
than a mile, as from the dorm room to our
office. The reason we object to the idea of
these forms of communication replacing any
part of our classroom contact hours, it seems
to me, is that we really have no idea (a) what
kind of language learning actually happens in
face-to-face communication with a teacher,
(b) what kind of language learning happens in
spontaneous real-time meaningful personal
communication that’s network-based instead
of face-to-face, with other learners or native
speakers instead of teachers, (c) what kind of
language learning happens in meaningful
interpersonal communication that’s not in real
time (as in e-mail and listserves), and (d) what
kind of language learning happens when students work alone, doing homework or even
working in self-instructional situations.
That’s the research agenda that we need to
undertake now, so as to have solid evidence on
which to base principled arguments for and
against whatever form of distance learning is
suggested to us in our particular institutions.
This is not just a question of regulating different kinds of pedagogical input, though; it
includes some very basic issues in second language acquisition theory. Some strong proponents of the communicative approach insist
that virtually all language learning happens in
the act of communication, where the particulars of language use are shaped by the particulars of the discourse situation and of the participants in it. Bill VanPatten, one of the
strongest methodologists of this school, said
in a teleconference last year that those who
teach by the grammar-translation method
could probably make good use of technology
– could even be replaced by it – but that those
who teach a truly communicative syllabus
wouldn’t have much use for it at all. However,
others who espouse communicative teaching
just as strongly have seen network-based communication as an enormously powerful support for it. Still, we don’t yet know much
about what kind of language use results
(among native speakers or among learners)
when, in e-mail and chat, the personal and situational particulars of the participants and the
communicative act are not known. Furthermore, we have never undertaken a comprehensive or systematic study of what kinds of
language learning can happen when learners
are not actively communicating. What about
practice, which we generally assume leads to
internalization and automatization of material
learned in class or in communication? What
kind of practice? What kind of feedback on
the practice? Score-keeping? Forced review of
missed items? Explanations? What role does
memorization play?
It’s out of fashion now, but some of us still
believe that memorization can be not only a
very useful shortcut to retention but can also
furnish a wealth of immediately accessible
examples of language use that can be used as
reference points in spontaneous language use.
What about problem-solving or cognitive
approaches to grasping the relationship
between meaning and form? What about cognitive and metacognitive strategies and learning? What role do those play in relation to
classroom learning, and what role might they
play in relation to distance learning? Even if
we accept sociolinguistic and discourse arguments that the functions of language are best
learned in the act of communicating, perhaps
the notions of language (how it gives formal
structure to the semantics of time, space,
causality, case roles, foregrounding and backgrounding of information, etc.) are best
learned outside of the communicative act, in
reflection and problem-solving mode. To complicate this issue still further, though, we’ll
also have to explore these distinctions in terms
of different kinds of learners. Highly analytic
and introspective learners may well want to
learn a lot not only about grammar but even
how functions work before engaging in communication, whereas context-sensitive, extroverted, and inductive learners may prefer to
extract even grammatical form and notions out
of communicative input.
One way to develop such a research agenda
would be to establish experimental language
education projects that combine face-to-face
teaching, networked communication, and
homework (preferably homework using good
multimedia), and to explore in depth what kind
of learning happens for what kinds of students
in each of these environments. At CTW (Connecticut, Trinity, and Wesleyan) we are planning a project that will provide a structure for
offering several less commonly taught languages which these three small schools cannot
regularly staff (perhaps Korean, Portuguese,
Swahili, Hindi or Arabic) by hiring one teacher
per language for the three campuses. The
teacher would spend one day per week at each
campus giving a long class and meeting with
students; students would have regularly scheduled network interactions both with the teacher
on days when s/he is at another campus and
with the other students in the class from the
other campuses. The teacher would also spend
considerable time in materials development for
the students to use in working on their own. A
researcher working with the teacher would collect a wide variety of data from students in the
classes to track all the variables that could be
controlled for. If it is possible for CTW to
acquire videoconferencing equipment so that
the teacher can teach distance classes or so that
students can meet that way, we’ll include
research on that capability.
I’m sure that amongst the CALLers assembled here we could come up with a good collection of other research designs that could
yield important data to feed into the future of
language education, both technology-supported and classroom-based. Obviously, such
ambitious projects are time-consuming to
carry out, but technology supports collaboration amongst researchers just as well as
amongst students and teachers. Such research
is expensive too, but considering the significance of the results, I cannot help but be optimistic about the availability of funding.
I don’t want to gloss over the potential dangers of doing such research. Given the budget
pressures on higher education, administrators
will face increasing temptation to cut faculty
lines in the hope that technology can substitute
for teachers. We might find that the best technology-based materials really could be used
responsibly to deliver a significant part of language instruction for carefully limited purposes under carefully constrained conditions,
only to find our research quoted as recommending something far more radical than we’d
intended. I worry about this a great deal, and in
my more melodramatic moments I think of the
plight of those physicists before World War II
who wanted to explore the potential of nuclear
energy for peaceful purposes only to see their
work used to create weapons. But we are on
the horns of a dilemma here: if we hold back
from doing this research because we fear that
the results will be used against us, we will have
no data on which to build the case for how it
should be used for us. This is an issue that we
need to talk about very seriously indeed.
In my more hopeful moments I see our
work leading to a splendid integration of
research and practice not only for CALL but
for language education across the board.
Undertaking research may not be necessary
for career advancement of all language teachers – nor is it an endeavor that attracts us all
personally – and it’s not absolutely necessary
for the development of excellent materials and
excellent technology-based teaching practice.
But it is necessary for the advancement of our
collective agenda and for the discipline as a
whole. To accomplish that goal we need to use
our practice as the basis for developing a
much broader range of research, both pedagogically and theoretically motivated, that will
open up a new paradigm for our field. Higher
education is full of rhetoric about integrating
research and teaching practice, but the rhetoric
often seems idealistic. We need to accept, now,
that it is not only possible but urgently necessary – and that we in CALL are in the best
possible position to make it happen. More
power to us!
ReCALL
ReCALL 10:1 (1998) 13–20
Technology and universities:
context, cost and culture
Chris Curran
Director, National Distance Education Centre, Dublin City University
This paper considers the projected role of the new technologies in university teaching in the light of considerations of cost, context and culture. The author is currently engaged in a study of the successful
application of the new technologies in university distance teaching, in North America and countries of
the European Union.
Introduction
Radical change in higher education, induced
by the advent of the new technologies, is predicted with increasing frequency. More and
more one hears such terms as ‘electronic university’, ‘virtual university’ or, somewhat disparagingly perhaps, the ‘cyber university’ proposed as paradigms of higher education in the
new millennium. Are we at a watershed in the
historical evolution of universities? Are we
witnessing the inception of a seminal change
in the traditional process of teaching and
learning in higher education? What credence
should we give to these harbingers of change?
Change in Universities
Change in universities, of itself, is hardly
news. Indeed, contrary to popular myth, many,
perhaps most, universities in EU countries
have been subject to a gradual but fairly continuous process of change over the last two or
three decades. True, the factors inducing
change have been diverse. Dominant among
them has been the substantial increase in student enrolment which, in many European
countries, has seen the number of post-secondary students quadruple in thirty years
(Gellert, 1993: 10). In some countries this
expansion has been achieved against a background of severe constraints on public funding
for higher education, with a consequent
decline in the resource input per student.
The increasing contact between the university and the wider world outside the walls is a
second factor inducing change. The former
perception of the university as an ivory tower,
if indeed ever true, has given way to the
imperative to respond to industry’s needs (real
or imagined), and to the insistent demand by
governments for a greater measure of public
accountability on the part of universities, not
least with respect to the quality of their teaching and research. Even the role of the university as
generator of new knowledge is under attack
from powerful and often affluent centres of
knowledge-generation outside the traditional
locus of the university.
New Information Technologies
These and similar changes, profound and often
painful as they are, pale into insignificance
beside that predicted as a consequence of the
new information technologies. What makes the
predictions relating to the effects of technology
different, is the fundamental character of the
change forecast, and its implications for the traditional modes of education (Ravitch, 1993: 45)
and in particular for teaching and learning. A
change, some observers predict, in the basic
technology of teaching and learning, as fundamental as that arising from the development of
the printing press (Massy and Zemsky, 1995: 1),
and one with significant implications for universities, their students and staff (Cox, 1995: 5–7).
Predictions of this kind would, in the normal course, tend to be viewed by educators as
overly optimistic (or pessimistic, depending
on one’s point of view). Seen in the context of
recent advances in telematics, however, they at
least give cause for reflection. Certainly the
advances in technology are impressive. The
speed with which computers can process data;
the facility to convert audio, video and text to
digital format; to compress it for compact storage and high speed transmission; the growth
of wide-band telecommunication networks
and high-capacity satellite systems; all have
greatly enhanced our capacity to process information. Since information, in one form or
another, is an essential input to teaching and
learning, it is hardly surprising that education
is widely seen as a sector well suited to the
exploitation of these new technologies.
Computers, of course, are already widely
used in the administration of universities
where they would seem to have had a significant and generally positive effect on productivity. This is largely true also of library services. Within the academic sphere too the
developments in telematics have stimulated
significant change, in curricula (computer science, electronic engineering, communications
and design, for example) and in the process of
research and academic administration. Most
researchers now routinely use computers as
tools (for writing, analysis, and simulation) and
telematic networks (for easier information
access and communication with peers).
Nonetheless, administration, even academic
administration, is one thing; what of the impact
of these technologies on university teaching?
Potential
The potential is clear. Indeed there is now
abundant evidence to show that the new technologies can provide a powerful resource for
teaching and learning. At a minimum they provide easy access to bibliographic and other
reference materials; to software to enhance
classroom teaching; to digitalised data for
independent learning, together with tools for
simulation, analysis and synthesis. They additionally, and perhaps most significantly, facilitate communication, group discussion and collaborative problem solving. This much is clear
and generally accepted. But will these new
technologies radically change the traditional
approach to teaching and learning in universities and so bring about the paradigm shift so
widely heralded?
Cost
The potential cost is one cause for concern,
and indeed, the application of the new technologies can be costly. The required investment in computers, video production facilities,
virtual libraries, central servers and data networks, can be considerable, especially where a
common access standard has to be supported
across a total system. To take one example, I
have recently had an opportunity of visiting a
number of universities using video conferencing for teaching students, both on and off campus. In most cases the capital investment was
considerable, ranging from about 150K ECU
to more than 1M ECU. There was in addition
a significant annual cost for technician support, maintenance and telecommunications
charges. In a number of the centres visited, the
cost hardly seemed justified by the level of
teaching delivered over the systems – a
significant factor in achieving cost efficiency
(Bacsich et al., 1993: II). The often short and
unpredictable life of these facilities, and the
need to provide on-going technical support for
their effective operation and maintenance, are
additional factors which need to be carefully
considered.
Of course the investment could be justified
on other grounds. The use of the network for
administration and meetings of university staff
was an important source of cost saving in a
few cases, although it is questionable
whether even extensive use of the networks
for this purpose justified the capital outlay.
Other reasons for investing in video conferencing related to factors such as the need to
make the most of scarce teaching skills, or to
fulfil the university’s mission of reaching out
to disadvantaged or remote communities.
Indeed there are many cases where technology is used to good effect: meeting needs
which could not otherwise be met; enhancing
the quality of courses previously taught exclusively in the traditional manner; or even providing programmes at a lower unit cost than
would be possible through more traditional
modes of teaching. Even in such cases, however, one finds, all too often, that the positive
results relate to pilot programmes often carried out in circumstances difficult to replicate
in day-to-day operational teaching; or that the technology is used to support activities marginal to mainstream teaching; or that the justification is based on a partial analysis of costs.
As Green and Gilbert note
“We have yet to hear of an instance where the
total costs (including all realistically amortized
capital investments and development expenses,
plus reasonable estimates for faculty and support staff time) associated with teaching some
unit to some group of students actually decline
while maintaining the quality of learning.” (Green
and Gilbert, 1995)
Experience of such cases has led some
observers to argue that examples of successful
application do not, by and large, address the
core, campus-based instructional activities of
most faculty at most institutions (ibid).
Productivity
The general argument in favour of using technology is clear and, by and large, convincing.
Productivity in conventional education, so the
argument goes, is effectively static, being
based on a student-teacher ratio fixed within a
relatively narrow band. An increase in student
numbers, therefore, effectively triggers a concomitant increase in staff. Staff costs in universities are a high proportion of teaching
costs (typically some sixty to eighty percent)
and tend to rise in tandem with (if not actually
faster than) the rate of inflation. It is hardly
surprising then that the potential for increased
productivity in traditional teaching is generally viewed as rather low.
This assumption, of course, is only partly
correct. In practice, many stratagems are
adopted to get around this apparent rigidity:
increasing student/teacher ratios (Bache,
1993: 176); abandoning “...the Humboldtian
ideal of teaching by active researchers” by
employing staff exclusively for teaching, with
no research responsibilities (Stronhölm, 1996:
10); recruiting a higher proportion of part-time
staff (at a lower unit cost); building larger lecture theatres, to name just a few. Nonetheless,
the essential argument that the approach to
teaching (the production process in economic
terms) has hardly changed is broadly correct.
When, additionally, one takes account of the
capital cost of providing more student places,
and the relative decline in the cost of technology (compared to staff salaries), the argument
appears all the more convincing.
Nevertheless, a key issue is the reluctance
to substitute technology for teachers; indeed
the difficulty of displacing teachers in conventional education is arguably one major reason
why technology has, as yet, had such a limited
impact in that sector. Technology, initially in
the form of printed texts, has been used for
more than a century to deliver the content of
courses to students in distance teaching programmes, as indeed have later technologies –
audio and video tapes and of course broadcast
media. Satellite television is an interesting
example of the current use of technology to
disseminate lectures, often by leading academics and research scientists, to literally thousands of students dispersed across continents.
The National Technological University in the
United States is an example of one such institution, which for more than a decade has been
delivering short courses, workshops, research seminars and masters degrees to engineers and managers. The 1000 or so reception
sites for the NTU programmes are located in
high-tech companies, government agencies
and other universities in North America and in
the countries of the Pacific Rim (Curran,
1997: 341).
Economies of Scale
These dissemination or broadcast technologies, which of course can take many forms
(from printed texts to computer networks) are
at the heart of most distance education systems. Almost all require some investment in development, and most a substantial one, prior to
the delivery of courses to students and so are
subject to economies of scale to a greater or
lesser degree. This poses no problem for cost-effective operation, provided sufficient students enrol on the programme.
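The scale economies described above can be made concrete with a simple illustrative cost function (the symbols F, v and n are my own notation for illustration, not drawn from the paper): if a course requires a fixed development investment F and a variable delivery cost v per student, the average cost per student at an enrolment of n is

```latex
% Illustrative only: F, v and n are assumed symbols, not the author's.
% F = fixed development cost, v = variable delivery cost per student,
% n = number of students enrolled.
\[
  AC(n) = \frac{F}{n} + v
\]
% As n grows, AC(n) falls towards v: the fixed cost is spread ever more
% thinly, which is why sufficient enrolment makes such broadcast-style
% systems cost-effective.
```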
Similar economies of scale, however, are
much more difficult to achieve in relation to
ancillary, but often essential, student support
activities. Such support, of course, can take
many forms: contiguous tutorials, mediated
group discussion, and assignment monitoring
are among the more common. One of the most
promising advantages of the new technologies
is their potential to support communication,
interactive discussion and collaborative project work in a manner, and at a level, which
was previously largely restricted to contiguous
discourse in traditional (and even distance)
teaching.
Few of these interactive technologies,
however, seem to offer scope for economies
of scale in the provision of student support.
Initial evidence, albeit still somewhat tentative, seems to suggest that unrestricted student access to interactive technologies
increases, rather than decreases, the demands
on tutor time. Rumble, in an early study of
the additional costs incurred in introducing
computer mediated communications into an
existing course at the UK Open University,
noted
“...nobody knows at present how much time
tutors spent off-line preparing and reading messages, whether value for money was achieved,
or whether tutors were grossly underpaid for the
hours they actually spent on the course.”
(Rumble, 1989: 158).
While uncertainty with regard to the productivity of the new technologies is not confined
to applications in higher education (Landauer,
1995), the uncertain outcome may reflect the
still early phase of such initiatives and the,
perhaps inevitable, time lag involved in finding the most appropriate ways in which to
apply technology – the ‘car as horseless carriage’ syndrome!
Independent Learning
Considerations of this kind have led some
observers to call for a radical change in the
approach to teaching and learning. To move
from a teaching-centred approach based on a
pre-determined number of time slots, to a student-centred, independent, self-paced mode of
‘mastery learning’. Such changes in implementation, it is claimed, could reduce the time
required to master curricula (Fisher, 1987: 43).
As Johnstone, a former Chancellor of the State
University of New York, notes
“Technology does not guarantee productivity: but
coupled with changes in pedagogy, economies
of scale, and a paradigm shift to individualized,
self-paced mastery learning, technology can
make greater learning productivity possible.”
(Johnstone, 1992: 8).
Independent learning, however, imposes its
own demands on the system, not least on students and on their teachers. Requirements
vary from one case to another, as might be
expected, but the demands on academic staff
time in designing and developing courses and
supporting students can be high. Similarly, the
demands on the system for effective and well
resourced library and other support services
and, in campus-based programmes, for quiet
study space, can be significant. All too often
independent learning can mean under-resourced learning, where unit costs are contained but at a price in terms of the quality of
the learning environment – clearly the opposite of the objective the advocates have in
mind. This of course is yet another example of
the intrinsic difficulty of balancing quality and
productivity in higher education. As Blaug
noted some three decades ago
“The measurement of educational quality is... at
the bottom of all controversies over university
productivity.” (Blaug, 1969: 317).
Moreover, independent learning can impose
demands on students which not all students
are equipped to meet. Indeed the imposition of
independence on students who have neither
the skills nor ability to take advantage of it is
perceived by some educators as a gross distortion of the educational process (Garrison,
1989: 23–40).
Context
Such views are important. The attitude of educators, and especially teachers, can be critical
in influencing the use of technology in universities. A recent study by CRE (the Association
of European Universities) sponsored by the
European Commission under the Socrates
Programme, identifies a number of constraints
on universities trying to implement new technologies (CRE, 1997). These include legal
constraints in particular relating to intellectual
property rights and copyright; linguistic constraints relating to the problem of minority
languages; and technological constraints.
(This last is many-sided and the report notes in
particular the variations in telecommunications infrastructure from one country to
another; the long time required to develop
teaching materials and the need for regular
updating). A fourth constraint identified in the
study related to economic aspects. Issues considered embraced the cost of the necessary
physical infrastructure and its continuing
upgrading and the related investment in software, training, supplies and personnel. The
high cost of multimedia course development
was noted, as was the potential for economies
of scale with attendant low marginal cost.
Finally, the report, having identified external pressures on universities to change, considered an aspect singled out by the universities at the seminars (conducted by CRE) as
“...an internal brake on their efforts to bring
about change through using the new technologies: resistance from people.”
Under this aspect they highlight teachers’ attitudes as a major obstacle to the introduction of
change. With respect to designing course
packages, they note that there is little motivation for an academic to get involved in a
process for which there is little reward; and
note that the negative attitudes of university
administrators were also mentioned. The
report concludes that
“While the external pressures on universities to
change appear to be an almost irresistible force,
the traditions and systems of governance of universities create serious resistance to change.”
Culture of Universities
No doubt much of this resistance is based on
real, perhaps well-founded, concern at the perceived negative impact of technology on the
best and ancient traditions of university teaching. Nonetheless, such concerns may seriously
underestimate the potential of the new technologies to provide for the effective delivery
of course content and for the essential concomitant communication and interaction
between students on the one hand, and their
tutors, mentors and peers on the other hand.
This capacity has long been demonstrated in
university distance teaching tutorials, week-end
and summer schools, audio and computer-mediated tutoring and the like. Indeed, it sometimes seems that technology-based teaching
suffers from being compared, not to conventional teaching as routinely practised in traditional universities, but to some ideal which, if it
ever existed, would now seem all too rare.
In marked contrast to the early nineteen
seventies, when the recently established open
universities were regarded with some scepticism, few academics who are familiar with
good quality distance teaching would question
its capacity to deliver pedagogically efficient
and academically sound teaching, or indeed, to
support it effectively given adequate
resources. To be fair to traditional educators,
however, their concern more often relates to
learning rather than teaching per se. Kelly, a
former university registrar, states the case well
“For most students, informal discussions in corridors, or with lecturers outside class times, are
remembered as times when their intellectual
curiosity and academic creativity was aroused,
when they really began to formulate original
thoughts and ideas, when self-confidence, both
academic and social, began to grow...The video
screen of the cyberuniversity will never replace
this dynamic of the campus” (Kelly, 1997: 8).
This perception of the university as a place is,
of course, very much in the tradition of Newman who, positing the choice between a ‘so-called’ university which dispensed with residence and tutorial superintendence and one
which merely brought young men (sic) together
for three or four years and then sent them away,
had no hesitation in giving preference
“to that University which did nothing, over that
which exacted of its members an acquaintance
with every science under the sun” (Newman,
1976: 129).
Newman’s views were influenced by his experience at Oxford where, during the first half of
the nineteenth century, the acquisition of social
skills, rather than intellectual improvement,
was “...the essential point of an undergraduate
education” (McMackin Garland, 1996: 264–
268) and non-utilitarian learning “...opened the
higher echelons of British society more rapidly
than professional or useful knowledge” (Turner,
1996: 296–297). Viewed from the perspective
of the role of technology, Newman’s assumption that the University had to be a place
(Landow, 1996: 349) is particularly interesting.
Rashdall notes that the word university in medieval usage meant “...merely a number, a plurality, an aggregate of persons”, and goes on to note
“It is particularly important to notice that the term
was generally in the Middle Ages used distinctly
of the scholastic body whether of teachers or
scholars, not of the place in which such a body
was established, or even of its collective
schools” (Rashdall, 1936: 5).
The appropriate term, he notes, “...is not universitas, but studium generale” (ibid). Nyiri
rejects the view that proper forms of higher
education necessarily presuppose some form
of traditional university setting as the framework for protracted personal communication
between teachers and students
“...my own student years had no formative effect
on me; anything I have ever learnt I have learnt
by reading books of my own choice, by attending
conferences, and generally by belonging to an
informal network of colleagues having similar or
related interests. As a university teacher I have
not been invariably unsuccessful; but I am perfectly aware of the fact that, during all these
decades, only a fraction of my professional energies was spent on students; and practically none
on fellow faculty members” (Nyiri, 1997).
Community
The key issue would seem to be one of community. The traditional view of the university
as a community of scholars dedicated to the
pursuit of research, the generation of knowledge, and the teaching of students, is still a
powerful ideal. The appropriate application of
the new technologies, by facilitating communication, peer discourse and collaborative working, could support the emergence of real communities, so allowing the university to
maintain the best of its traditions, but with less
exclusivity than in the past. This surely is a
challenge appropriate to a new millennium.
Conclusions
Are we witnessing the inception of a seminal
change in the traditional process of teaching
and learning in higher education? It is, I think,
still too early to say. Clearly, the new technologies are widening access to higher education, especially in the form of distance teaching. A recent survey of distance education
courses offered by higher education institutions in the United States showed that an estimated 25,760 courses were offered in
1994/95, for more than 750,000 students.
Some 57% of the institutions offering the
courses used two way interactive video. However, while there were some 690 degrees
offered, which students could take exclusively
at a distance, only an estimated 3430 students
received degrees (Greene and Meek, 1998).
When compared to the numbers graduating
from some of the larger open universities, this
seems quite modest.
Moreover, even now, distance-taught university programmes in Europe are primarily text-based, with some provision for face-to-face tutorial support. Telematic media are
increasingly used in some member states. In
Spain and Scandinavia, for example, the use
of video conferencing is well established, as is
computer conferencing in Norway and the
United Kingdom. Many institutions are using
the Internet both as a source of course materials and as a means of communication. Much
of the use however is still of a pilot or experimental nature and even where the applications
have developed to an operational phase their
use is often marginal, in many cases providing
optional additional support, rather than an
essential core facility.
A key challenge for technology-based
teaching is not just to provide the necessary
course materials and disseminate them to students, but to provide an effective and cost-efficient substitute for traditional forms of student support. The declining cost of technology
relative to the cost of academic time will no
doubt encourage more and more universities
to become involved in technology-based
teaching. The growth in non-traditional student populations, in post-graduate and continuing professional education students, and the
need to respond to demands for lifelong learning, will inevitably reinforce demands for a
more flexible approach to course delivery.
It is hardly surprising therefore that the
newer technologies are increasingly being
used in university teaching, especially in distance teaching programmes; albeit often in a
supportive or enhancing role, rather than as
central to the teaching process. While this situation is changing, and rapidly so in some
countries, the jury is still out on the long term
consequences for higher education as a whole.
References
Bache P. (1993) ‘Reform and Differentiation in the
Danish System of Higher Education’. In Gellert
C. (ed.), Higher Education in Europe, London:
Jessica Kingsley 9–20.
Bacsich P., Curran C., Fox S., Hogg V., Mason R.
and Rawlings A. (1993) Telematic Networks for
Open and Distance Learning in the Tertiary
Sector. Vol 1, Heerlen: European Association of
Distance Teaching Universities.
Blaug M. (1969) ‘The Productivity of Universities’
(Conference paper). Reprinted in Blaug M.
(ed.), Economics of Education 2, Middlesex:
Penguin.
Cox K. R. (1995) Technology and the Structure of
Tertiary Education Institutions. On-line:
http://kcox.cityu.edu.hk/papers/ct95.htm
CRE (Association of European Universities) (1997)
Universities and the Challenge of the New
Technologies, Geneva: CRE.
Curran C. (1997) ‘ODL and Traditional Universities: Dichotomy or Convergence?’ European
Journal of Education 32(4), 335–346.
Fisher F. D. (1987) ‘Higher Education Circa 2005:
More Higher Learning, But Less College’,
Change.
Garrison D. R. (1989) Understanding Distance
Education: A Framework for the Future, London: Routledge.
Gellert C. (1993) ‘Changing Patterns of European
Higher Education’. In Gellert C. (ed.), Higher
Education in Europe, London: Jessica Kingsley
9–20.
Green K. C. and Gilbert S. W. (1995) ‘Great Expectations: Content, Communications, Productivity, and the Role of Information Technology in
Higher Education’, Change, March–April, 221–
231.
Greene B. and Meek A. (1998) Distance Education
in Higher Education Institutions: Incidence,
Audiences, and Plans to Expand. National Center for Education Statistics. On-line:
http://nces.ed.gov/pubs98/distance/980621.html
Johnstone D. B. (1992) Learning Productivity: A
New Imperative for American Higher Education. National Learning Infrastructure Initiative. (Edited version of a monograph originally
published by the State University of New York
as part of its series, Studies in Public Higher
Education.) On-line: http://www.educom.edu/
program/nlii/articles/johnstone.html
Kelly J. (1997) ‘Cyber campus can’t beat the real
thing’, Irish Times: Education and Living, May
13.
Landauer T. K. (1995) The Trouble with Computers,
Cambridge: MIT Press.
Landow G. P. (1996) ‘Newman and an Electronic
University’. In Turner F. M. (ed.), The Idea of a
University: John Henry Newman, New Haven,
CT: Yale University Press 339–361.
McMackin Garland M. (1996) ‘Newman in His
Own Day’. In Turner F. M. (ed.), The Idea of a
University: John Henry Newman, New Haven,
CT: Yale University Press 265–281.
Massy W. F. and Zemsky R. (1995) Using Information Technology to Enhance Academic Productivity. On-line: http://www.educom.edu/program/nlii/keydocs/massy.html
Newman J. H. (1976) The Idea of a University,
Oxford: Clarendon Press.
Nyiri J. C. (1997) ‘Open and Distance Learning in
the Information Society’, Keynote address,
Eden Conference, Budapest. (Monograph).
Rashdall H. (1936) The Universities of Europe in
the Middle Ages, Oxford: Clarendon Press.
Ravitch D. (1993) ‘When School Comes to You: The
Coming Transformation of Education and its
Underside’, The Economist 328(7828), 45–46.
Rumble G. (1989) ‘On-line Costs: Interactivity at a
Price’. In Mason R. and Kaye A. (eds.),
Mindweave: Communication, Computers and
Distance Education, Oxford: Pergamon Press
146–165.
Stronhölm S. (1996) ‘From Humboldt to 1984 –
Where are We Now?’ In Burgen A. (ed.), Goals
and Purposes of Higher Education in the 21st
Century, London: Jessica Kingsley 3–12.
Turner F. M. (1996) ‘Newman’s University and
Ours’. In Turner F. M. (ed.), The Idea of a University: John Henry Newman, New Haven, CT:
Yale University Press 282–301.
ReCALL 10:1 (1998) 21–37
Puissance du binaire, créativité du
synaptique
Michel P Perrin
Université Victor-Segalen Bordeaux 2
So the machine won!
At the nineteenth move of the sixth game of his match against Deep(er) Blue, on 13 May 1997, grandmaster Garry Kasparov resigned, to avoid the dishonour of a formal checkmate.
Before the match, the specialist press, and the rest of the press too, were saying in essence: if the machine wins, it is the end of the ‘modern’ world as we have known it since the Renaissance; it is the beginning of an unknown era.
“If Deep Blue triumphs, it will be an emphatic
indicator that artificial intelligence need not
attempt to emulate the brain in order to surpass
it.” (Levy, 1997a: 44)
Jusqu’à présent, je dirais qu’il n’y a rien là que
de familier, rationnel, acceptable, même si
c’est un peu mortifiant pour le genre humain.
Mais, quelques lignes plus bas dans le même
article, voici ce qu’écrit le même Levy, et nous
franchissons déjà une frontière:
“How well Kasparov does in outwitting IBM’s
monster might be an early indication of how well
our species might maintain its identity, let alone
its superiority, in the years and centuries to
come.” (id.:45)
Rien que cela! Le ton était donné. L’enjeu
était bien celui de la technique et de l’humain:
avec, pour ou contre l’humain.
Et la machine gagna le match: d’une
manière qui frappait l’imagination populaire,
puisque ce n’était pas, cette fois, dans le secret
de quelque laboratoire universitaire, mais au
vu et au su du village planétaire tout entier, car
on pouvait suivre le match en direct sur Internet, à l’adresse sans bavures ni déguisement:
www.chess.ibm.com. On en arrivait au point
de non retour, où le meilleur de notre
biologique se voyait mis en infériorité par le
meilleur du technologique – pourtant inventé
par nous.
Pour dire les choses schématiquement, la
puissance du binaire l’emportait, ô combien
publiquement, sur la créativité du synaptique.
Ou, de manière peut-être encore plus parlante,
le monde du pur quantitatif se mettait à créer
du qualitatif: c’était la première à tomber des nombreuses frontières que, nous allons le voir, les nouvelles technologies abolissent en effet.
C’est ainsi que Kasparov, fatigué sans
doute, vexé aussi, humain trop humain, commença par accuser l’équipe des programmeurs
d’IBM (tiens, il y avait donc derrière Deep
Blue une équipe humaine, avec à sa tête le
grand maître Joel Benjamin, au nom prédestiné?) de tricherie. Pour ensuite déclarer:
«Suddenly [Deep Blue] played like a god for one moment.» Et le même Stephen Levy d’expliquer, une semaine après son premier article
déjà cité:
“What really shook Garry Kasparov... was a
move that the computer didn’t make. On Move
36, Blue had an opportunity to shift its queen to
a devastating position – clearly the smart
choice. Instead it took a subtler but superior tack
that wound up to be near decisive in defeating
Kasparov...” (Levy, 1997b: 4)
La machine n’avait pas tant joué comme un
dieu que comme un homme, refusant de se
conformer à l’évidence de l’avantage à court
terme. Mais pour en arriver là, quel rapport de
forces? D’un côté le champion d’échecs, frêle
«roseau pensant» comme disait le grand
Blaise, qui comme tout un chacun d’entre
nous, peut dans le meilleur des cas, passer
simultanément en revue dans sa tête deux états
de positions sur l’échiquier: 2 états. En face de
ses 75 kilos bien vivants, le monstre en
principe inerte d’IBM, un super assemblage de
puces de silicium RS/6000/SP pesant 1,4
tonne et capable, lui, d’examiner et comparer
200 millions de positions à la seconde (certains disent même 400 millions par moments):
200 millions contre 2. Pour son coup «divin»
Deep Blue a «computé» pendant deux minutes
(c’est le terme propre, meilleur ici en l’occurrence qu’ordinateur: «computer» existe en
français depuis 1584, et désigne entre autres le
mode de calcul des fêtes religieuses dans l’année liturgique...), soit 24 milliards d’analyses
de position, contre 240 pour l’homme.
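Les ordres de grandeur qui précèdent se vérifient en quelques lignes (simple esquisse reprenant les chiffres cités dans le texte):

```python
# Les ordres de grandeur du match (chiffres cités dans le texte).
positions_par_seconde = 200_000_000   # Deep Blue : 200 millions de positions/s
secondes = 2 * 60                     # deux minutes de « computation »

analyses_machine = positions_par_seconde * secondes
analyses_homme = 2 * secondes         # l'homme : 2 états de position à la fois

print(analyses_machine)                    # 24000000000, les « 24 milliards »
print(analyses_homme)                      # 240
print(analyses_machine // analyses_homme)  # 100000000, le ratio de 1 à 10^8
```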
Il faut donc une machine 100 millions de
fois plus performante que le cerveau humain
pour égaler ce dernier, voire le battre à son
propre jeu. On serait tenté de dire CQFD. Malgré l’énormité de l’écart quantitatif, la balance
reste à peu près égale. C’est au prix exorbitant
d’un ratio de 1/10⁸ seulement, donc quelque part rassurant, que le quantitatif produit du qualitatif. Et puis, si l’homme trouve la situation
par trop vexatoire, n’oublions pas qu’il lui suffit de débrancher la machine pour que tout
s’arrête.
Pour que tout s’arrête? N’y a-t-il pas dans 2001, l’Odyssée de l’espace un ordinateur nommé HAL dont il me semble me souvenir qu’il est capable de se
reconnecter tout seul au circuit électrique? Ou
si ce n’est lui c’est donc son frère, dans une
autre histoire de science fiction.
Eh oui, à travers les nouvelles technologies
que nous manions quotidiennement, la science, de plus en plus, rencontre la fiction.
Mais restons sur terre: et comme il est bien
connu qu’on «n’explique que ce qu’on ne
comprend pas» (Barbey d’Aurevilly) je me
contenterai de questions, et de prises de position, sans prétendre conclure à quoi que ce
soit: nous sommes sur un terrain extraordinairement mouvant, et très rapidement
évolutif.
L’extraordinaire disproportion de 1 à 100
millions ne s’explique que d’une seule façon:
alors que le cerveau humain procède sélectivement et synthétiquement (synaptiquement si
l’on préfère) la machine, elle, ne peut, à
chaque fois que calculer séquentiellement: à
chaque coup, il lui faut être linéairement
exhaustive, épuiser toutes les possibilités:
seule la vitesse incroyable de ses couplages de
processeurs le lui permet. Le cerveau est massivement parallèle. L’ordinateur est formidablement séquentiel.
Du point de vue humain cette exhaustivité
répétée à chaque coup est un gaspillage inouï:
et c’est cela qu’on nomme l’intelligence artificielle. Intelligence artificielle? Plutôt incommensurable stupidité! comme l’exprime l’auteur de Mind, Brain and the Quantum,
Michael Lockwood:
“not so much artificial intelligence as incredibly
rapid artificial stupidity, where exhaustive and
undiscriminating searches produce results we
would achieve, if at all, only by highly selective
searches guided by insight.” (1997: 14)
Cette stupidité incroyablement rapide, de
plus en plus rapide, c’est elle nonobstant qui
permet le progrès pédagogique dû aux nouvelles
technologies. À titre d’exemple, puisqu’on n’est
jamais si bien servi que par soi-même, deux
courts aperçus de réalisations produites dans
notre Département à huit ans d’intervalle, pour
souligner l’évolution en tendance:
1. Le didacticiel MEDICAN, essai de vraie
valeur ajoutée, en un temps où l’EAO se
contentait le plus souvent de numériser des
«drills» existants. MEDICAN, dès 1989,
mettait en oeuvre trois principes:
• d’abord le détournement pédagogique
d’un ensemble de QCM imprimés, prévus
pour vérifier les connaissances médicales
des futurs internes en médecine britanniques. MEDICAN commence par oraliser
ces QCM, afin de les rendre efficaces pour
l’apprentissage de l’anglais médical par
des étudiants français de médecine. L’utilisateur ne fait qu’entendre les questions –
les ordinateurs de l’époque commençaient
à numériser le son. C’était un début de
démarche multimédia, la règle d’or étant
qu’en langue de spécialité, la matière première doit venir des spécialistes. Il ne
saurait être question, pour un linguiste, de
créer un questionnaire médical, que ce soit
en français ou en anglais.
• ensuite le recours à l’hypertexte: en
créant des liens, pour chaque question du
QCM, avec l’article correspondant d’un
traité complet de médecine qu’on a
numérisé pour l’occasion, on procure à l’utilisateur la possibilité d’appeler à l’écran
un texte de référence qu’il ne serait,
autrement, jamais allé chercher en bibliothèque. C’était le moyen naturel de faire
lire une bonne dose d’anglais de sa spécialité par tout futur médecin francophone.
• enfin, parce que dans la terminologie
médicale, les nombreux polysyllabiques
anglais sont autant de pièges pour le locuteur francophone, MEDICAN fournissait
un dispositif de prévision et vérification de
la place des accents toniques, avant de
donner l’occasion à l’utilisateur d’enregistrer et réécouter sa propre voix: l’ordinateur devenait un laboratoire audio-actif
comparatif ad hoc.
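Les deux premiers principes de MEDICAN peuvent se schématiser ainsi (esquisse purement illustrative: question, article et valeurs d’accent sont inventés pour l’exemple):

```python
# Esquisse hypothétique du principe hypertexte de MEDICAN :
# noms, question et données sont inventés pour l'exemple.
articles = {"asthme": "Article 12 - Asthma: diagnosis and management"}
qcm = [{"question": "Which drug is first-line in acute asthma?",
        "lien": "asthme"}]

def texte_de_reference(item):
    # L'hyperlien : de la question du QCM vers l'article du traité numérisé.
    return articles[item["lien"]]

# Prévision puis vérification de la place de l'accent tonique
# (indice, à partir de 0, de la syllabe accentuée ; valeur d'exemple).
accents = {"hypersensitivity": 4}   # hy-per-sen-si-TIV-i-ty

def verifier_accent(mot, syllabe_prevue):
    return syllabe_prevue == accents[mot]
```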
2. Huit ans plus tard, en 1997, le prototype
HYPERLINKS, mis au point avec Supercard, démontre la nouvelle valeur ajoutée
que procurent les cartes d’acquisition
vidéo: en numérisant une séquence de
télévision, on se donne le moyen d’ensuite
la séquencer et segmenter ad libitum. A
partir de là, tout devient pédagogiquement
possible: on peut créer tous les écarts possibles, et bien sûr tous les liens, entre
image, son et texte.
On voit que, pour l’essentiel, ce qu’a permis
l’évolution de la technique, en quelques
années, c’est le traitement par DECONSTRUCTION de chacune des composantes,
texte, image et son, du document complet multimédia, afin de mieux permettre à l’apprenant
d’en RECONSTRUIRE lui-même et le sens et
la structure tant de surface que profonde: ce
faisant, de véritablement construire sa compétence de compréhension de l’écrit comme de
l’oral, et de pouvoir vérifier à son gré s’il a ou
non compris – préalable logique à tout
maniement de la langue en production. C’est
simple comme bonjour, tout le reste n’est que
technique. C’est simple, mais cela rencontre le
(et rend compte du) fonctionnement incroyablement complexe de nos 1300 grammes de
matière grise organisée en myriades de connexions. Les possibilités ainsi ouvertes par le
numérique corroborent les intuitions remarquables de Vygotsky, et sans doute celles de
Piaget, ces deux grands psychologues de la
cognition. Pour L.S. Vygotsky, il existe un
“effet structurant de l’utilisation d’artefacts sur
l’activité du sujet: il nomme cela le concept
d’acte instrumental pour caractériser la recomposition d’ensemble de l’activité qui en est la
conséquence.” (Schneuwly, 1985: 67)
Et c’est bien l’effacement de la frontière
entre traitement quantitatif des données et
exploitation qualitative de ce traitement qui
permet le changement de paradigme pédagogique auquel nous assistons. On passe de
l’ère de l’enseignement conçu comme transmission directe et univoque du savoir-connaissance à une médiation, qui conduit à la construction des compétences fondée sur la découverte – démarche heuristique – du savoir nécessaire. A partir de ce fait avéré, voyons quelles
autres frontières conceptuelles ou matérielles la
puissance du binaire aujourd’hui efface.
La question centrale, celle de la frontière
homme-machine, nous en dirons un mot plus
tard: elle est en fait à la fois cruciale et futile,
donc pas traitée (pas vraiment traitable) pour
l’instant tant s’y mêlent le savoir et l’irrationnel, le réaliste et l’imaginaire, l’émotionnel, voire le millénarisme. En termes simples
et concrets, y aura-t-il, y a-t-il non seulement
adéquation, mais aussi équation entre le
mode opératoire de la machine et celui de
l’homme? Nous choisirons, ici, de tirer un trait
de partition net: l’adéquation est fortement
souhaitable et d’ailleurs inéluctable: il s’agira
de s’en servir au mieux. L’équation, si elle
n’est pas chimère, ne saurait qu’être brouillage
de notre identité. S’il nous fallait ne plus
apprendre les langues que pour converser avec
une machine, où serait le progrès? Nous serions dans l’ordre d’un clonage non biologique,
qui fait certes partie depuis les temps immémoriaux des fantasmes de l’humanité: il est
important que cela reste un fantasme.
Nous ferons donc abstraction, en tout cas
moratoire, de l’abolition possible de cette
frontière-là. Il reste que d’autres sont d’ores et
déjà effacées. D’abord la distinction qui déjà
s’estompe, on l’a vu pour commencer, entre
quantité et qualité: elle découle de la numérisation même.
Puis la frontière entre canaux/véhicules de
l’information, qui deviennent «transparents»,
ou identiques. Pour un ordinateur, le texte,
l’image, le son, c’est du pareil au même dès
lors qu’il s’agit de les convertir en une succession de 0 et de 1, suite en base 2 de «bits»,
binary digits, ou encore d’ouvert-fermé: le
courant passe ou non, point final. Sur le plan
électrico-mécanique, un ordinateur n’est rien
d’autre qu’un super interrupteur. La machine
ne «sait» pas que le son est du son et l’image
de l’image, pas plus que l’oiseau ne «sait»
sans doute qu’il vole, ou le poisson qu’il nage
– la conscience de ce qu’il est et de ce qu’il
fait, jusqu’à plus ample informé, est l’apanage
de l’homme.
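Une illustration minimale de cette indifférence au canal: texte et échantillon sonore se réduisent, pour la machine, à la même suite de 0 et de 1.

```python
# Texte ou échantillon sonore : pour la machine, la même suite de 0 et de 1.
texte = "langue".encode("utf-8")         # six octets de texte...
echantillon = (1024).to_bytes(2, "big")  # ...ou un échantillon sonore 16 bits

def en_bits(octets):
    return "".join(f"{o:08b}" for o in octets)

print(en_bits(texte))        # 011011000110000101101110... : des bits
print(en_bits(echantillon))  # 0000010000000000 : encore des bits
```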
C’est la nature même du multimédia que
d’abolir cette frontière-là. A ce propos, on évoquera la remarque pertinente de Patrick Loriot:
«L’image a aidé l’homme à vivre, comme la
magie, dont elle est le parfait anagramme»
(Nouvel Observateur, suppl. TV, 31/8/97).
Certains commerciaux l’ont bien compris, qui
fondent leur publicité sur des slogans du genre
«Faites appel à tous vos sens pour apprendre
plus vite». Déjà Aristote disait qu’on «ne
pense jamais sans images». L’esprit a besoin
des sens pour fonctionner: donc le multimédia
permet à l’esprit de mieux fonctionner!
En corollaire se trouve également abolie la
relation taille/puissance: la miniaturisation
croissante des composants, l’accélération constante de leur fonctionnement, caractérisent
l’industrie informatique. Nicolas Negroponte
nous en rappelle la croissance exponentielle –
elle n’est pas sans rappeler l’histoire de cet
empereur chinois qui avait eu l’imprudence de
promettre de doubler sa mise initiale d’un
grain de riz à chaque case du jeu d’échecs: à la
62ème tous les greniers de Chine se trouvèrent
vidés, à la 63ème tous ceux d’Asie, à la 64ème
ceux du monde entier... Nous y sommes: le fil
de cuivre téléphonique banal a une bande passante de 64 000 bps (bits par seconde), ce
qui convient pour transmettre la voix; le son
stéréo haute fidélité exige 1,2 millions de bps;
et la vidéo 45 millions – ce qui est évidemment beaucoup: il faudrait un très gros fil de
cuivre. Mais les progrès fulgurants des modes
et moyens de compression ramènent ce
dernier chiffre à 1,2 millions. A partir de là
tout est possible car la fibre optique, nouveau
véhicule de l’information numérisée, n’est pas
plus grosse qu’un cheveu, fait circuler, elle,
100 milliards d’impulsions par seconde,
autrement dit 1 million de chaînes de télévision en simultané: autant dire l’infini. Plus
modestement, dès aujourd’hui, nos réseaux
Ethernet en base 100 mégabits (et non plus 10,
comme le plus souvent encore) permettent la
vidéo en direct sur écrans d’ordinateurs, donc
la didactisation «en ligne» du document multimédia.
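Pour fixer les idées, la légende du grain doublé et le facteur de compression évoqué se calculent ainsi (esquisse reprenant les chiffres du texte):

```python
# Le grain doublé à chaque case, et le facteur de compression vidéo cité.
grains_case_64 = 2 ** 63          # doublements successifs jusqu'à la 64e case
print(grains_case_64)             # 9223372036854775808 grains

facteur = 45_000_000 / 1_200_000  # vidéo brute -> vidéo compressée
print(facteur)                    # 37.5 : la compression rend le débit abordable
```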
Considérons encore notre relation au
temps et à l’espace, au couple espace-temps.
Une autre révolution copernicienne s’est
opérée: c’est Negroponte, toujours, qui fait
état du passage de la société de l’atome
physique, tributaire des moyens mécaniques de transport, à celle du bit virtuel.
La meilleure illustration en est la bascule en
temps réel et permanent du fonctionnement
des places boursières, qui ignorent maintenant
les fuseaux horaires. De ce fait, du point de
vue pédagogique, tombe la frontière entre
Chronos, le temps objectif qui fuit (ah! le fugit
tempus de nos jeunesses latinisées) sans
relâche; et Kairos le temps subjectif, «le temps
opportun de l’apprendre», celui qui permet les
«pauses structurantes» (H. Trocmé-Fabre)
dont l’hypermédia maîtrisé permet l’émergence et l’utilisation.
Se trouve par là annulée notre soumission
habituelle au séquentiel: ici nous sommes
dans le passage du multimédia, dispositif technique, à l’hypermédia, mode cognitif: pour
préparer cette conférence, j’ai fureté sur la
Toile et rassemblé en sautant par hyperlien
d’un site à un autre, d’un auteur à l’autre, une
masse impressionnante de documentation en
un temps record sans bouger de mon bureau.
S’il m’avait fallu faire la même chose à la
façon d’il y a vingt ou trente ans, lorsque nous
préparions encore nos thèses sur des fiches de
papier A4 plié en quatre avec intercalaires de
carbone, vous m’attendriez encore! Nous
avons compressé le temps de bibliothèque,
l’espace-structure du livre. L’hypernavigation,
la recherche booléenne ont véritablement
changé non la nature, mais certainement les
modalités, du travail intellectuel: bien utilisée
pour les parcours de découverte, la machine
décuple notre potentiel, fait de chacun de nous
un chercheur, bâtisseur de son propre parcours
d’apprentissage.
Bien entendu, la frontière réel/virtuel
tombe elle aussi de ce fait. Le site WEB de la
revue Regards de juillet 1997 signale par
exemple ceci: Thomson multimédia et la
société américaine Infinity Media développent
ensemble un projet d’images en trois dimensions, visibles sans lunettes spéciales, ni
casque. L’écran d’arcade permettrait à un
spectateur de voir ce qui se trouve derrière un
arbre au premier plan de l’image, en se
déplaçant dans la salle. Cela va bouleverser
notre regard. 1 Autre exemple frappant récent:
on se souvient du retentissement qu’avait
connu il y a deux ans la reconstitution en
images de synthèse de l’abbaye de Cluny.
Cette fois-ci, dans un «film sans caméra» sur
Clovis, Jacques Barsac recrée la réalité historique disparue en partant de bribes
archéologiques, notamment remparts du
Mans. Quelques rares documents dénichés par
Michel Rouche, historien spécialiste de
l’époque, lui permettent de reconstituer, à partir d’un ivoire Barberini, un portrait plausible. Une image de guerrier sur une cruche en
or du 4ème siècle donne par «clonage» à la
palette graphique une armée complète. Il aura
fallu pour cela huit mois de travail sur trois
ordinateurs Macintosh haut de gamme, soit 41
heures de «travail-machine» pour 10 secondes
de film – la quantité, encore, qui génère de la
qualité. Au final, nous héritons d’un document
unique de 49 minutes, pour 2 millions de
francs. Ces images virtuelles à partir de documents réels permettent de donner vie à l’Histoire. (Télérama 2473 du 4 juin 1997).
Et si l’on veut abattre encore une frontière,
considérons l’évaluation: il n’y a plus de cloison
étanche entre formation et évaluation, ni
entre évaluations formative et sommative:
puisqu’en tâche de fond l’ordinateur peut
computer en secret et livrer toutes crues, ou
cuites comme on préfère, les données statistiques les plus complètes sur le parcours de
chacun. Espion peut-être, mais précieux auxiliaire pour tout ce qui est mesurable dans un
parcours d’apprentissage: à charge pour le
conducteur des apprentissages d’interpréter
les données, de faire sens avec.
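Un tel suivi en tâche de fond peut se schématiser ainsi (esquisse hypothétique: noms de fonctions et données inventés):

```python
# Esquisse illustrative du « suivi en tâche de fond » : l'ordinateur
# enregistre chaque réponse et livre des statistiques sur le parcours.
from statistics import mean

parcours = []  # journal des interactions de l'apprenant

def enregistrer(exercice, reussi, duree_s):
    parcours.append({"exercice": exercice, "reussi": reussi, "duree": duree_s})

def bilan():
    # Données brutes (« toutes crues ») : au formateur d'en faire sens.
    return {"exercices": len(parcours),
            "taux_reussite": mean(1 if p["reussi"] else 0 for p in parcours),
            "duree_moyenne": mean(p["duree"] for p in parcours)}

enregistrer("QCM 1", True, 40)
enregistrer("QCM 2", False, 75)
print(bilan())  # {'exercices': 2, 'taux_reussite': 0.5, 'duree_moyenne': 57.5}
```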
Gommée encore la frontière ludique/
sérieux: apprendre redevient un acte-plaisir de
faire, de découvrir, de résoudre: les NTE nous
redonnent la maîtrise du faire. Observons les
enfants: par essence, l’activité est jeu; si l’apprentissage est actif, on apprend en s’amusant,
on s’amuse en apprenant.
Occultée encore la frontière entre directif
et non directif: un même programme peut
permettre l’un et l’autre: le parcours obligé et
la libre exploration. Et JE peux obliger l’ordinateur à suivre un parcours donné, afin de ME
donner plus de liberté et de richesse de choix:
des moteurs «customisés» de recherche liront
bientôt toute la presse pour moi, et feront
automatiquement la revue thématique de
presse que j’aurai commandée: la chose est
déjà en cours à partir de certaines publications
scientifiques en ligne.
Bien sûr aussi, du point de vue de la
dichotomie saussurienne langue-parole qui
traverse notre siècle et culmine après bien des
égarements structuralistes dans l’école énonciative la plus féconde, s’estompe la frontière
linguistique/pragmatique: quoi de mieux en
effet que le multimédia pour fournir la contextualisation la plus large, permettre les mises en
situation, susciter la réflexion sur le fonctionnement de la langue. Le «poids du contexte
dans la gestion du sens», comme dit notre
collègue J. P. Narcy dans le dernier livre
d’A. Ginet (1997).
Toujours dans l’ordre du linguistique, constatons qu’il y a affranchissement par rapport
aux formes fermées du texte imprimé: par ses
possibilités de lemmatisation, la richesse de la
troncature, la mise à contribution de la logique
booléenne, l’outil informatique permet tous les
rapprochements, tous les prolongements.
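À titre d’illustration (corpus et requête inventés), troncature et logique booléenne se combinent par exemple ainsi:

```python
import re

# Corpus et requête inventés : « appren* ET langue ».
corpus = ["l'apprenant construit sa compétence en langue",
          "l'apprentissage actif de la langue étrangère",
          "le laboratoire de langue traditionnel"]

def tronque(prefixe):
    # appren* : toutes les formes commençant par « appren »
    motif = re.compile(r"\b" + re.escape(prefixe) + r"\w*")
    return {phrase for phrase in corpus if motif.search(phrase)}

# Le ET booléen est une simple intersection d'ensembles de résultats.
resultat = tronque("appren") & tronque("langue")
print(sorted(resultat))  # les deux phrases contenant à la fois appren* et langue
```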
D’où résulte l’effacement de la frontière,
encore trop souvent traduite dans nos systèmes
et programmes, entre cerveau droit sensoriel
et synthétique d’une part et cerveau gauche
rationnel et analytique d’autre part (en schématisant beaucoup, évidemment). Physiquement réunis par le corps calleux, nos «deux
cerveaux pour apprendre», certes, mais dissociés par deux cents ans de mise en pratique
d’une pédagogie du verbal/graphique uniquement. On supprime ainsi les cloisons et les
hiérarchies entre styles d’apprentissage:
visuels, auditifs, graphiques, holistes ou sérialistes, tous peuvent trouver de quoi satisfaire
leur mode de fonctionnement cognitif.
Les nouvelles technologies sont donc bien
facteur de progrès pédagogique. A la condition
toutefois que les concepteurs de didacticiels et
tutoriels privilégient, dans leur programmation, ce qui stimule et renforce ce fonctionnement cognitif de l’apprenant: autrement dit
qu’on passe vraiment de l’EAO, l’enseignement assisté par ordinateur d’il y a dix ans
(mille ans à l’échelle de l’accélération du progrès technique!) à CALL (Computer-Assisted
Language Learning), voire TELL (Technology-Enhanced Language Learning, a very fortunate tell-tale acronym) – en tout cas Learning. Il n’est pas possible ici d’entrer dans le détail descriptif d’aucun des systèmes-auteurs pour les langues récemment mis au
point. Je ne dirai strictement rien des produits,
certains excellents, d’autres inutiles, principalement sur CDRom, qui sont destinés par
leurs auteurs à l’utilisation directe par l’apprenant. Je citerai simplement le nom de certains systèmes-auteurs, destinés à faciliter la
fabrication de supports de travail multi- et
hyper- media par les professeurs pour leurs
élèves. Il s’agit de systèmes conçus par des
collègues linguistes en France, et commercialisés pour certains. Rien de chauvin ou d’exclusif à cela bien sûr, mais tout simplement la
mention de ce que je connais le mieux dans le
genre:
LAVAC – Tony TOMA, Université Toulouse 3
HYPERLAB – Pascal JABLONKA, IUFM de Paris
LEARNING SPACE – J.-Claude BERTIN, Université du Havre
EMATECH – Lynton HERBERT, Ecole des Mines d’Alès
HELPYOURSELF – Alain CAZADE, Université Paris 9
SMARTALEX – Tony STENTON, Université Grenoble 3
PROGLOSS – J.-Claude BARBARON, CRIFEL, Bordeaux
Tous ces systèmes permettent de respecter les
principes essentiels d’une véritable mise en
oeuvre à des fins pédagogiques de l’outil
informatique. Ils autorisent, mieux incitent à,
la didactisation du document authentique, l’un
de mes chevaux de bataille préférés, en
lien avec le concept de langue de spécialité:
je crois fermement que le seul moyen
d’intéresser vraiment, donc de faire progresser
des apprenants non spécialistes de langues,
souvent nos collègues enseignants-chercheurs,
ceux dont nous disons maintenant qu’ils
relèvent du secteur LANSAD (Langues pour
spécialistes d’autres disciplines) c’est de les
faire travailler sur des documents riches et
fortement marqués culturellement, actuels,
renouvelés, parfois fournis par eux et didactisés par nous. Bref de faire que le cours de
langue devienne le lieu d’une réflexion fondée
sur la spécialité disciplinaire de nos
apprenants, mais qui les amène à un point de
vue «sociétal» sur leur discipline. Professeurs
de culture générale à partir de la langue de
spécialité: à ces conditions nous restons
maîtres, légitimement, du terrain. Autrement
dit, l’outil multimédia nous donne le moyen
de dépasser largement le strict linguistique,
pour aller au langagier et au culturel, pour
intégrer le pragmatique.
Ceci implique, entre parenthèses, que
finisse par tomber une autre frontière, celle-là
bien présente encore, du moins en droit sinon
en fait: enfreinte constamment de manière
plus ou moins flagrante par tout un chacun,
elle est en décalage absurde et obsolète par
rapport aux nouveaux modes de travail que
rend possible la technique. Il s’agit, vous l’aurez compris, de la frontière du copyright.
D’un verrou plutôt, qu’il faudra bien rendre
plus souple, sinon faire sauter ! Il y a là une
mine pour les juristes: on se reportera, sur ce
sujet, au document de GEMME, le Groupement pour les Enseignements sur Mesure
Médiatisés. A l’évidence, seul conviendrait un
système de dédouanement a priori par forfait,
dans le style de ce que permet pour l’écrit le
CFC (Centre français du Droit de Copie, rue
d’Hautefeuille à Paris), bien trop confidentiel,
et splendidement ignoré par l’institution...
C’est la seule solution, qui permet aux professeurs (et aux élèves!) de faire leur métier
sans léser les auteurs dans leurs droits
légitimes. Sinon nous sommes, ridiculement,
en infraction dès l’instant, pour ainsi dire, que
nous mettons en marche un magnétoscope. Or
si l’outil existe, on s’en servira, on s’en sert. A
cet égard, le téléchargement à partir d’Internet
ouvre des brèches nouvelles. Même si on peut
souhaiter émettre des réserves par rapport à
certains aspects de cette «mondialisation»
dont on nous rebat les oreilles, il faudra bien
que les législations nationales prennent en
compte l’évolution des pratiques mondiales!
Mais fermons cette parenthèse. Les systèmes-auteurs dont il était plus haut question
ont pour caractéristique commune de rester
ouverts, et, pour les meilleurs, évolutifs. On
les voit se bonifier, se complexifier: pas trop –
il importe qu’ils restent utilisables par des
semi-profanes. L’étape suivante, par exemple,
sera une reconnaissance efficace de la parole,
encore au stade expérimental (voir IBM
VoiceType sous OS2Warp).
Bref, il s’agit, en se tenant informé des
avancées techniques, de suivre sans crainte
tous les chemins d’exploration, de se servir de
l’outil pour rendre l’apprenant le plus actif
possible, sans répéter les erreurs de l’ère
«idiot-visuelle» qui tua les remarquables possibilités du laboratoire de langue en le transformant en instrument du psittacisme passif.
Au fil de la Toile, on trouve de petites merveilles de ce point de vue de l’exploration,
sans prétention, mais efficaces: enthousiasme
de jeunes enfants de CM1 sur le site de Classe
de CM1 de Piquecos (82): enfants en liaison
avec observatoires et auteurs: journal sur le
Web, même «pendant la récré»; Classe CM2
de Villard de Lans (38), réseau de 150 classes
dans le Vercors: forum de discussion et journal.2
Le meilleur du pédagogique multimédia est
tout en résolution de problèmes, exercices
lacunaires, tâches cognitives de reconstruction, déduction, induction, top down et bottom
up, interaction: du danger de passivité accrue
que la télévision a représenté et toujours
représente, on vient à l’hypermédia/hypertexte, lequel ouvre au contraire grand la porte
à l’apprentissage tout actif, autorise le toutchoisir. C’est cela l’intelligence d’Internet,
non pas centralisée, univoque, totalitaire
(comme le serait on ne sait quel Big Brother)
mais fondée sur la multiplication et le parallélisme des mises en relation: Mitchel Resnick
donne quelque part à ce propos la métaphore
du vol en triangle des grands oiseaux migrateurs: l’oie de tête semble tout mener ; en fait
il n’en est rien; il s’agit, d’une «harmonie sans
conducteur, d’un ensemble très sensible de
processeurs hautement interactifs». On constate le même phénomène dans la synchronisation spontanée des applaudissements d’une
salle. Ainsi, autour d’un même événement, on
variera les angles d’approche en associant les
médias (Ginet, 1997: 43). Des études sont en
cours quant au mode d’activation du cerveau
par l’hypermédia: cf. Séminaire de recherche
Hypermédias, éducation et formation, université Paris 6, IUFM de Créteil et INRP, c/o Eric
Bruillard, [email protected] et André Tricot,
CREPCO, université de Provence.
Aussi bien, les NTE mettent tout le monde
à égalité car «le savoir va à l’apprenant»
(Michel Serres). La machine est capable
d’analyse de certaines erreurs, donc de signalement et d’interrogation: êtes-vous sûr de
votre réponse? Chacun est amené à découvrir
contenu et règles: on n’apprend jamais rien
qu’on ne découvre un tant soit peu par soimême. L’ordinateur multimédia est l’outil
heuristique par excellence. Il favorise la
démarche inductive, onomasiologique, du sens
(compris par la richesse de la contextualisation
et la pertinence des tâches: cf. le dispositif
VIFAX, diffusé par l’université de Bordeaux
2) à la forme/règle (sémasiologie plus traditionnelle): si déjà la perception monosensorielle est construction, alors la pluri-sensorielle du multimédia équivaut à de la
cognition au carré: ce que Rudolf Arnheim
appelle l’exploit cognitif.
Si la machine est instrument de découverte,
elle est aussi outil d’échange et de partage:
certes le danger d’isolement pathologique
existe, mais les chances sont plus grandes de
voir l’outil servir à la mise en relation, comme
par exemple à travers le système TANDEM,
qui a ses promoteurs ici à Dublin. De Michel
Serres encore, on citera la réflexion: «Tout
apprentissage consiste en un métissage»
(1991: 86). Sa métaphore filée du manteau
d’Arlequin s’applique à merveille à l’hypernavigation.
Et l’outil multimédia est incitatif, motivant:
au sein du programme européen LINGUANET,
qui développe un système de communication
avec traduction automatique partielle entre
polices et services de sécurité de part et
d’autre de la Manche, il a été observé que les
policiers tiennent à écrire dans la langue de
l’autre: de 2 ou 3 coups de téléphone pénibles
par jour, on est passé à plusieurs dizaines de
messages électroniques. On se reportera, à ce
sujet, aux dires de Jesse Kornbluth, le rédacteur en chef d’AOL, America Online, qui
compte 8,5 millions d’abonnés: ex-journaliste
star de Vanity Fair, Kornbluth a quitté l’Establishment du Media-Marketing pour laisser,
nous dit-il, s’épanouir la pensée critique: AOL
représente pour lui une communauté de
170 000 critiques amateurs volontaires qui
parlent des livres qu’ils aiment, un peu comme
les lecteurs du jury du Livre Inter: pour lutter,
nous dit-il en substance, contre une culture en
voie d’abêtissement rapide aux USA sous l’effet des médias de masse. «Chaque fois que
quelqu’un se connecte à un service en ligne, il
s’améliore du même coup en tant que lecteur
et que rédacteur. Surtout, il pratique une pensée libre, individuelle... il faut rendre possible
l’expression directe, l’échange, la solidarité et
l’entraide».
Nous voici donc confrontés, avec le multimédia, à une interpellation plus large des
capacités cérébrales individuelles et collectives. Le cerveau, on peut l’espérer, fonctionne
à un peu plus que les 2 à 10 % habituellement
sollicités! On assiste à des symbioses inédites.
Nous entrons peut-être dans l’ère d’une nouvelle «synapsie». Voudra-t-on parler, alors, de
CIRCUMMEDIA?
Yes, no doubt, to name at once the concomitant, balanced activation of the two cerebral hemispheres and the synergy between the individual and the planetary. On condition, naturally, that teachers agree to draw every possible benefit from it: for them, training is an imperative necessity. Learning to design sets of tasks that allow each person to learn at their own pace, by their own strategies: recall Pit Corder's truism (in a 1974 article on ESP): “we teach a group, but it is the individual who learns.” In this respect one may mention the useful DESS degrees in trainer training, notably Alain Ginet's in Grenoble. One must know how to get past the common objection: it takes time! Yes, and method too, but this is kairos, and a collective matter: teamwork in language centres is becoming a necessity; we are moving towards the pooling of resources and of teaching materials. At that price, the obstacles inherent in the fear of change will fade:
“Already we have the technology to develop the
educational models of the future, but we often
shy away from the unknown, the imaginative
and the creative, and give excuses to ourselves
and others for our own inadequacies and failures in accepting responsibility for paving the
way and setting the pace for those to come.”
(O’Donoghue, 1993: 638)
Overall, from everything just sketched, the NTE (the new educational technologies) should allow us to stop murdering Mozart inside our closed systems.
After this very schematic, somewhat idyllic picture, is everything then for the best in the best of all possible worlds? Is everything in the NTE nothing but a space of freedom for the mind, without anarchy or authoritarianism?

Potentially yes, if man masters his invention. But faced with the prodigious possibilities the computer offers for extending the capacities of the human mind, it is no less urgent to recall:
1. that technical progress is not in itself endowed with moral sense. It serves only the ends it is made to serve: for the better, we said, certainly. But the worse is not excluded. It is the same tool that allows, on the one hand, the children of the Vercors to collaborate with well-known writers in writing their class journal; and, on the other, the organisers of the Toro Bravo paedophile network to exploit and debase children of the same age in Colombia, or revisionists to spread the idea that the Shoah is only an insignificant detail of history...
2. that, for all that, scientific and technical progress can never be stopped. The moratoriums of the biologist Jacques Testart, his plea against human cloning, are no doubt highly laudable, but prohibition only incites (consider American Prohibition). Better, then, to raise awareness: we need a strong ethics owned by the whole community. Perhaps I may be allowed to suggest that France, on this point, notably with its Commission Nationale d'Ethique, is perhaps a little less behind than others. Even if politicians often hesitate: in a report nobly entitled “De l'élève au citoyen” (“From pupil to citizen”), a senator on special assignment wrote in 1997:
“Amid the media din, political leaders are almost the only ones who do not speak up. Could they be indifferent to this digital medium, as others were towards printing? Or do they believe they can protect their cultural superiority by underestimating the risk of seeing it supplanted by the ‘masters of the modern forms of communication’?” (Sérusclat, 1997)
Philippe Quéau cites, as an example of such hesitation, the figures for the TGB at Paris-Tolbiac, the Très Grande Bibliothèque de France: 5 billion francs for construction, against a mere 100 million for digitising its collections. The administration is in no hurry to leave its “concrete-bunker-secrecy” culture behind. It would nevertheless be important, as UNESCO advocates with its “Memory of the World” programme, to outpace the commercial digitisers of heritage collections, who will make the virtual visit a paying one. Alongside these large-scale timidities, there are remarkable civic successes in integrating the NTIC (the new information and communication technologies): the small laboratory town of Parthenay, in the Vendée, is a good example of the social appropriation of technology.
Despite this, it is society as a whole that wavers: between the cultural enrichment of all and the material enrichment of a few. Jean-Baptiste de Foucauld makes this his profession of faith (Nouvel Observateur, 15 May 97, 24): in harmony and by analogy with the most advanced theories of quantum physics, which holds, as we know, that to a butterfly's wingbeat here there corresponds at the antipodes some movement of similar or greater amplitude, he believes that my progress here necessarily has repercussions over there. Hervé Bourges, president of the CSA, says that the Internet leaves him (France Inter, 26/8/97) torn between “ecstasy and dread”. Catherine Trautmann, Minister of Culture, is submitting a bill to prevent commercial interests from taking over the Internet, since that would verge on “mental fascism” (Le Monde Diplomatique, May 1997). The Ministère de l'Education Nationale has just added “Technologie” to its title. And Prime Minister Jospin adds: “to demonise technology would amount to an admission of impotence” (France Inter, Carcans-Maubuisson, Université d'été de la communication, 25 August 97). In short, the question is in the air, somewhat in the style of “go on ahead and follow me”. Hard indeed, between dread and ecstasy, to keep a cool head. And yet!
And yet the dangers must be faced squarely, so as not to risk forgetting the defence against them. Less than ever should we mistake our enemies, or our friends. Thus the editor-in-chief of Le Monde Diplomatique, Ignacio Ramonet, recalls in his August 95 issue the possible, and unfortunately all too real, ravages of confusing the real and the virtual: several American teenagers were run over on motorways in the autumn of 1993 after acting like their counterparts in the aptly titled film The Program, who lay down for fun (and unharmed) on the asphalt in the middle of the traffic. Certainly the virtual allows a trainee pilot to take the controls of an Airbus without risk in a flight simulator. It also allows a retreat into the kind of solipsism symbolised by the craze for Tamagotchis, the Japanese virtual chicks; all of which may belong to the “culture of death” that certain voices denounce. Or, less gravely, to a somewhat feeble-minded infantilism: the Tamagotchi has imitators. SONY has released the Post Pet, designed by Kazuhiko Hachiya. It is a virtual postman that looks after its owner's email. You watch it, as bear, tortoise or rabbit, take the letter in hand and carry it to its addressee: the functions of Eudora, with animations. It has to be fed and pampered. It keeps a secret diary and can run away if mistreated, taking refuge in a secret forest; you will then have to buy another one. A society starved of mothering? A culture of retreat into cocooning? Already real pets sometimes replace the child in DINK couples (Double Income No Kids); what, then, of the virtual pet?
But the virtual is not to be condemned in itself: where there is distancing through play, and human relationship through a communication that is itself real, clear advantages appear. Take the virtual country invented by “Queen Liz”, Liz Sterling, a 36-year-old Australian: her kingdom, Lizbekistan, will disappear on 9 September 1999. In the meantime it will have enjoyed, with its 400 citizen-subjects, great success, marked by the motto “Liberté, égalité, virtualité”, with no pursuit of material profit for anyone: this is purely about the pleasure of virtual construction (www.lizbekistan.com).
As for language applications, and by analogy: one can easily see how the fundamental adequation of “rei et intellectus” gets forgotten in the manufacture of exercises cut off from any “true” communicative situation. This deviation is possible with the book too, but it is so much easier to generate automatically: having a meaning but making no sense. The grinder of automatic applications is easy to set in motion from a built-in dictionary. Yet it is the situation of utterance that creates signification. “Most of language begins where abstract universals leave off”, as Dell Hymes said long ago, against Chomsky. If we lose the “sense of sense” we soon fall back into a magical mentality: think of a recent event, the Heaven's Gate sect and its 39 “suicides” of young people, behind the respectable façade of the computing firm Higher Source, the whole affair linked to the appearance of comet Hale-Bopp. A more or less New Age jumble, served to perfection by the NTIC and the way propaganda can spread through them. To the point that for Sherry Turkle, professor of sociology at MIT, “the Internet is a metaphor for God, indeed God in person” (Patrick Sabatier, Libération, 28 March 97).
By that measure, we soon arrive at the steamrollering of minority cultures, the hegemony of a single language, the standardisation of ways of living and working. This danger did not escape Jacques Delors (1994):

“As for the spectacular extension and refinement of the means of communication, they bring individuals and peoples closer together within the ‘global village’, but they also tend, if we are not careful, to trivialise culture and to flatten the diversity of cultures.”
Indeed, there is a real risk that a tool which ought to benefit everyone will be captured by and for financial profit. Here opinions are divided and sharply drawn: there are those who see in the NTIC and the Internet a new Eldorado. Starting with the American Administration: the Magaziner report declares its intention to make the Internet a vast free-trade zone (commercial, let us understand); one thinks of the advertising inserts already invading so many Web pages. Adopting this logic, the CEO of the Compagnie Générale des Eaux, Jean-Marie Messier, who has just secured exclusive rights to the SNCF's telecommunications network, does not hesitate to write: “We must control the whole chain of multimedia communication: content, production, distribution and the link with the subscriber.” (Le Monde, 8 February 1997). Against attitudes of this kind, associations such as VECAM have been founded, whose stated objective is to fight the commercialisation of cyberspace.
On one side, then, the libertarian utopia of netizens without law or master, who preach free social connection in all its forms; on the other, the liberal design of appropriating the content exchanged over the networks for financial profit. Between the two, no doubt, through a minimum of regulation, lie culture and wisdom.
For we did indeed come very close to the reign of single-track thinking (the Gulf War) and of the systematic falsification of truth (Timisoara), under the omnipotence of television channels which, thanks to technical progress in satellite transmission, keep viewers the world over passively riveted to the image and to its commentary, manufactured for ends that look very much like propaganda. It is digitisation, the binary, an even greater technical advance, which paradoxically rights the balance: networking millions of personal computers gives “breathing space” back to local particularities; it allows choice and comparison, interactivity, the confrontation of points of view, plurality, and hence reflection. This is what Tony Toma believes and says in his book Les enseignants face au multimédia (1997: 29): “Far from homogenising cultures for the benefit of a sanitised globalist culture, the Internet allows a multitude of individual particularities to assert themselves.”
We shall therefore choose, and wager on, the optimistic paradox: the more powerful the sequential binary becomes, the more it fosters the development in us of the holistic synaptic. If it is true, as T. Toma also says, that the right brain comes first (an animal does not need to know how to count in order to live; a global estimate of quantity suffices; hieroglyphs and ideograms predate writing), then the alphanumeric represents, from this point of view, a recent advance of the left brain at the expense of the right. With the development of hypermedia in all its potential, by contrast, here is our chance to become at once more analytical and better at synthesis.
In the question we raised at the outset of the relations between man and machine, shall we then side with the pessimistic utopia, which predicts and dreads, as noted above, “the loss of identity of the human species” and so can only reject the advances of technology wholesale? Or with the optimistic utopia (the same one, inverted), which envisages a world of robots so smooth and perfect that man will no longer need to be man, that is, by definition, imperfect? Between the two, as often, as always no doubt, lies the median and uncomfortable position of practical reason: the position that accepts the discoveries of science and the progress of technology, yet does not resign from its human condition.
What, indeed, do the specialists in neuroscience tell us today?

They tell us that, in the current state of the art, computers, however powerful, because they work algorithmically, cannot recognise what can only easily be named in English: the “patterns” that the human brain perceives and records globally. And the reason computers cannot be programmed to achieve this same result is that man does not know how to describe HOW this “pattern recognition” operates within him. We can see at work the “massively parallel” interconnection of our neurons, each one (there are billions, remember) capable of 11,000 synaptic links at once; we know better and better the chemical composition of the neurotransmitters; but we cannot say why or how a holistically perceived combination is perceived as similar or different; therefore we cannot model this functioning; therefore we cannot program an artificial brain. So much the better, we might be tempted to say. But let us not celebrate naively.
Of course the naive, real or feigned, are quick to point out the zones of ridiculous stupidity in today's machines: everyone knows that “the subtleties of human language still escape the binary logic of silicon brains” (headline of a Le Monde article on machine translation, 10 May 1997, p. 22). Those who love language and languages, as by definition we do, would like to be able to say WILL ALWAYS ESCAPE. But the best outcome is never guaranteed; we shall see. The anecdotes, each juicier than the last, that abound in this field cannot mask the real progress of computer-assisted translation, even if the rather reckless term “machine translation” is tending to be abandoned: it works only for a predictable, repetitive, and therefore codable mode of communication. And what of the inability of Deep Blue and its kind to solve, as one of the most enthusiastic champions of the all-digital, Nicholas Negroponte himself (1993), reminds us, a very simple problem within reach of any child endowed with a minimum of lateral thinking (E. de Bono): given the sequence UDTQC, how should it be continued?
So: the machine calculates a hundred million times faster than man; in that sense it displays an artificial form of intelligence; but so far it is an intelligence that does not allow us to say that the machine “thinks”. So far, computer science has remained a branch of mathematics.
But the theory of the mathematician Turing, the true inventor of the computer (dating from 1935), and the test of the imaginary machine that bears his name, are increasingly being challenged by the proponents of quantum physics, particularly the biophysicists. These scientists of a new type are turning instead to neural networks, through which the configuration of machines would come much closer to the functioning of the brain. David Deutsch, a physicist at Oxford and a specialist in quantum computing, has conceived a machine, for the moment just as theoretical as Turing's, that would operate no longer on the binary succession of 0s and 1s but on real numbers, whole or decimal: “a theoretical quantum computer and DNA computer”. Of the same nature is the other theoretical machine, the BSS machine, developed at Berkeley by Steven Smale and Lenore Blum (1997) (“Crossing the quantum frontier”, New Scientist, 26 April 1997: 38). What remained in the domain of the non-programmable (for example the Penrose tiling, “A non repeating, or aperiodic pattern illustrating something which is not supposed to exist in nature: five-fold symmetry”) could then become digitisable.
What, then, is a neural network? Nothing other than a vast assembly of microprocessors, possibly distant from one another, analogous to the network of neurons in the brain, interconnected in such a way that the whole can not only execute certain predefined tasks by endless repetition, but actually learn to perform them. Everything, admittedly, still runs on the basic binary mode of IF ... THEN: a chosen output in response to the stimulus of a given input (this is how the Boolean searching of INTERNET search engines works). But the “extra” of the neural network is that feedback on the relevance of the answers the machine gives to these stimuli is, as it were, reinjected into it so that it improves and regulates itself, and thus learns not to repeat the same mistakes: the microprocessors adjust their own mode of operation by themselves. Little by little, stupid exhaustiveness becomes less necessary. In principle this is a mere simulation, purely electronic: “it normally exists only as a simulation on a conventional computer” (Michael Lockwood, 1989: 56). A delicious and terrible normally, from the man who spoke earlier of incredible stupidity.
In reality, hybrid grafts already exist. Living neurons are connected to an electronic circuit that behaves like a neuron. A hybrid network is thus built, comprising the artificial neuron and two biological neurons. So far the whole thing has been tested only on inoffensive crustaceans, on neurons from lobster ganglia. But the scientific imagination is limitless, and our collective imagination has not forgotten A Clockwork Orange and its chimera. The fact remains that the graft onto these good lobsters was indeed carried out, before our very eyes so to speak, by researchers in Bordeaux, who conclude: “The electronic neurons fool the biological ones” (Laboratoire de neurobiologie et physiologie comparée, université de Bordeaux 2, 1997). Similarly, in Japan a team has developed an “emotive” robot: capable of ‘reading’ human anger, pleasure or fear (via a CCD camera in its left “eye”) and of responding with an appropriate facial expression. The machine operates by pattern recognition and a fuzzy-logic program. Its recognition rate is reportedly 87% accurate (that is, as good as humans, who for their part recognise the robot's expressions at 83%, except for fear). Soon combined with speech recognition, one can see what applications this could yield, at best in home automation for the motor-impaired, at worst in every Frankenstein-style fantasy. (Fumio Hara's team, Science University of Tokyo, www.r6.kagu.sut.ac.jp/~mecchara/e3.html; La Recherche, no 301, September 1997.) For his part Rodney Brooks, professor at MIT, has developed an “artificial baby” that explores its real, no longer virtual, environment, modifying its programming in the light of its “experiences”. Still very rudimentary, this robot “looks” with its cameras and “touches” with its “arms”; and it learns to interact with the real world by trial and error. To achieve this, its inventor designed not a single central program but eight specialised autonomous processors: a set of small self-programming modules, on the premise that this will prove more effective than exhaustive pre-programming like Deep Blue's. The Sojourner robot on Mars, which “learns” the terrain as it goes, works the same way: even though it remains partly piloted from Earth, it has to react immediately, whereas transmission from Earth takes several minutes.
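The combination of pattern recognition and fuzzy logic attributed above to the “emotive” robot can be suggested in miniature. Everything in this sketch is invented for illustration (the single mouth-curvature feature, the membership ranges, the three labels); the real system's design is of course far richer and is not described in this article.

```python
# Toy fuzzy-logic classifier: a measured feature belongs to each fuzzy
# set to some degree between 0 and 1; the strongest membership wins.
def triangular(x, lo, peak, hi):
    """Membership degree of x in a triangular fuzzy set (0.0 to 1.0)."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def classify_expression(mouth_curve):
    """mouth_curve: -1.0 (strong frown) .. +1.0 (broad smile)."""
    degrees = {
        "anger":    triangular(mouth_curve, -1.2, -0.8, -0.2),
        "fear":     triangular(mouth_curve, -0.6,  0.0,  0.6),
        "pleasure": triangular(mouth_curve,  0.2,  0.8,  1.2),
    }
    return max(degrees, key=degrees.get)
```

Unlike a crisp rule, a reading of -0.5 here is simultaneously a little “anger” and a little “fear”; the classifier merely reports the dominant one. That graded, overlapping judgement is exactly what a purely binary IF ... THEN program cannot express.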
In short, more and more, and better and better, with neural networks the machine “thinks”. So says Marvin Minsky, number two at MIT, in The Society of Mind. At the same time, it is becoming increasingly possible to localise the activity of thought in the brain: to the point that many scientists declare that we no longer have any use for the mind-brain distinction. This is the case, for example, of Jean-Pierre Changeux in L'Homme neuronal, and of Jean-Didier Vincent in his Biologie des passions. On the neuro-dependence of thought in all its forms, subjective, cerebral, somatic, behavioural and communicational, one may profitably read Julien Barry's austere treatise Neurobiologie de la pensée, from which here is an edifying extract:

“One may suppose that the rapid transformations of sensory messages into conscious perceptions result from synchronised multiplexings of reticulated reverberating circuits, each presenting specific oscillating chaotic co-resonances. These multiplexings of adaptive configurations could allow the formation of colligations governed, as the case may be, by classical logics or by fuzzy logics.” (1995: 318)
At this point a layman like me, like all of us here no doubt, is inclined to think that the very vocabulary of today's science, indeed the names of its concepts, CHAOS, FUZZINESS, tend to sweep away omniscience, scientism and positivism. Might there not be a glimmer of hope here for the humanists, if not for the humanities? The answer, in the affirmative, comes from the same Julien Barry who, while pushing the all-matter logic very far indeed, writes a little further on:

“...the processes of understanding and intellectual operations ... computers merely simulate (them) by different means, but do not perform (them) in their own right.” (id.: 321)

And he continues, executing in extremis a spectacular recovery (forgive the length of the quotation: I cannot not give it):
“Neural prostheses incorporating mechanical and neurochemical interfaces have already found the beginnings of diverse substitutive biomedical applications ... in the correction of deafness (cf. E. Gros, L'ingénierie du vivant, Paris: Odile Jacob, 1990). The fact remains that in every case it is the human Ego that is the maker and the user of artificial intelligence, of which it must remain the master, as of its derivatives, and in the end the true consciousness and the true judge of the universe, because (beyond all the prior cortical processing of its brain) it accedes, by conscious mental and noetic teleduction, to all the Universes of thought, with their openings onto every possibility and every imaginary, with their own demands and their own values, which confer on the existential precariousness of the Ego a kind of ‘transcendence’ without equivalent, but singularly uncomfortable.” (ibid.)
Consciousness, transcendence, discomfort: watchwords, key words. More and more, even materialists of strict obedience, scientists and philosophers (such as Quiniou, for example), as of course Sartre already did before him, admit the acts of freedom of a material Ego. It is an interesting convergence among thinkers of every school that they all end the presentation of their work and discoveries with questions, with THE question. Gone is the peremptoriness of Malebranche and Descartes. All conclude on the “mystery” of the human person, of consciousness:
“Should we then conclude that computers have
inner lives comparable to our own? I think not.
Consciousness is a great mystery.” (Lockwood,
1997)
So I shall say (with and after John Searle, among others) that machines may, after all, think, but no matter: for they do not know that they think; they have no moral conscience, and never will. Mental and cognitive processes are no doubt somewhere reducible to cerebral functions, all of them in time programmable: again, no matter. Machines remain no less enclosed for ever in the finitude of the technician's world (Jacques Ellul will be pleased). That, at any rate, is the wager I should like to make, supported by the striking convergence of opinion, with many nuances naturally, in today's scientific community. Biologists in particular, faced with the fantastic advances of genetic engineering, are unanimous. François Jacob, Nobel laureate in medicine, speaks for them all when, in March 1997, he ends his book La souris, la mouche et l'homme thus:

“We are a formidable mixture of nucleic acids and memories, of desires and proteins. The century now ending has been much occupied with nucleic acids and proteins. The next will concentrate on memories and desires. Will it be able to resolve such questions?”
Transposing from the biological to the electronic and the digital, we shall say that the century now ending has been much occupied with electrons and bytes. The next will concentrate on the cognitive and the poetic. Another version, no doubt, of the remark, apocryphal or not, by Malraux on the 21st century, which will be spiritual or will not be. There is still unknown ahead of us: and how indeed could we live “without the unknown before us”, in the words of René Char, also quoted by F. Jacob?
It is Penrose again, the very man at work on the design of the quantum machine capable of making all our current computers obsolete, who writes in his turn:
“The notion that the human mind can ever fully
comprehend the human mind could well be folly
– it may be that scientists will eventually have to
acknowledge the existence of something that
might be described as the soul.” (1995: 40)
To finish, then, a little soul music? Faced with the impressive unanimity of this questioning (which could be documented ad libitum), we shall advance not a conclusion, in the dogmatic sense of the term, but rather a question, perhaps a hypothesis. That, no doubt, of the philosopher Thomas Nagel, who strives to show both the fantastic discoveries and the no less immeasurable limits of the neurosciences today (and puts it simply by saying that we cannot know what it is like for the bat to be a bat). More scientifically, wisdom seems to lie in the current “double aspect” theory, in which:

“consciousness represents the subjective face, and the central nervous system the objective face, of one and the same entity christened the Mind-Brain.” (Missa, 1994)
This allows even the fiercest anti-spiritualist to accommodate the notion of inner life, of interiority, the seat of our free will, which no positron emission tomography will ever manage to “read”: the most sophisticated scanners can only show which area of the brain is consuming more oxygen than others at a given instant. The observer can say that such-and-such a zone of the cortex is activated; it is impossible, for all that, to state what the subject is “thinking” about, what his mental images are: lie detectors cannot say which lie is involved... If the dualism that regards mind and brain as two totally distinct entities (John C. Eccles) has been abandoned, at the opposite pole the identity theory (the mind is the brain, the brain is the mind) of Changeux (“Man no longer has any use for the Mind; it is enough for him to be Neuronal Man”) or of the American Patricia Churchland, an advocate of pure naturalism, seems equally out of the running. In place of this alternative, a neurobiologist like Pierre Karli, refusing any explanation that would leave no room for subjectivity and freedom, stresses that the human person cannot be reduced to his biological identity alone. Alongside the biological individual there is the social actor, and the subject in search of meaning and inner freedom: that something which makes the individual participate in something greater than himself, which makes of him, some will say, a person capable of appropriating his own speech. Hence the value of seeing the NTIC promote and facilitate the optimal realisation of the potential of every human being in the making.
Régis Debray, a pessimist, thinks that today's videosphere is in the course of destroying our civilisation, daughter of the written word. “Knowing-that does not equal knowing.” (Vie et mort de l'image, une histoire du regard en Occident, Gallimard, 1997.) In his view, McLuhan has won against Gutenberg. Between Jihad and McWorld (q.v. Barber, 1995), culture and democracy are finished. But Debray, in this Histoire du regard, is thinking of the stupefying television that we swallow passively, zapping blithely from news report to fiction and back, no longer able to tell one from the other. In the “mediological” thesis by which he condemns our era, Debray in fact forgets the multimedia computer, a tool for guided learning, for exchange and for true communication.

“The mediological thesis is that it is possible to establish, for each period of history (from the neolithic and the invention of writing to the electronic era) verifiable correlations between the symbolic activities of a human group... and its mode of capturing, archiving and circulating traces (ideograms, letters, characters, sounds, images).” (Debray, 1995: 5)
To his pessimism I shall prefer Proust's words: “The only true voyage would be not to travel to new landscapes, but to have other eyes”. We have them, those other eyes, thanks to the NTIC, which open us, without our moving, to all the realities of other languages and cultures. Large-scale digitisation, the interactivity the networks allow, intelligent hypernavigation, multimedia used judiciously by genuine pedagogues: all this can, on the contrary, raise levels of awareness around the world, levels of culture, levels of mutual tolerance, notably through everyone's better mastery of one or more languages from elsewhere. The multimedia tool and hypermedia pedagogy lead us to that auto-nomy, genuine self-management, the autopoiesis of which the cognitive scientist Francisco Varela speaks, which is the highest ambition of any educational enterprise.
This is possible on condition that a true ethics of the NTIC is established. That is the thesis, to which I subscribe with both hands, of three recent authors, all three alive to the manifest added value the new technologies can bring; all three equally vigilant about the no less manifest drifts and dangers of the all-technological captured by the all-commercial. One can mount a vigorous critique of technocentric approaches, which are genuinely harmful, without thereby rejecting the extraordinary contribution of the binary to the synaptic.

The three authors I should like to cite in closing are Pierre Rabardel, Les hommes et les technologies, in 1995, whose first chapter is entitled “Pour une approche des techniques centrée sur l'homme”; Philippe Breton, A l'image de l'homme, in 1997; and, again, Pierre Karli, Le Cerveau et la liberté, also in 1997. Certainly Prometheus will always be among us, with his divinisation of technology, his hubris, pregnant with every catastrophe. At the opposite pole from hubris stands humilis. That wholesome humility, precisely, which roots man in the humus (homo and humus share the same etymology, M. Serres reminds us in his Tiers instruit, 142) and makes his greatness lie in his fragility: feet on the ground, head in the stars.
This brings us back to Pascal, who said it all in his Pensées:
"A nothing in regard to the infinite, an all in regard to nothing, a mean between nothing and everything: in short, a man."
Seen in this light, it is encouraging to note, since we find ourselves in a body with a European vocation, that at least some of our Eurocrats have understood this well: the Green Paper of the European Commission's DG Vb5 is indeed entitled Living and Working in the Information Society: People First (July 1996), with its magnificent e-mail address: [email protected]. Unfortunately it was not the European directorate for education that coined it, but the directorate for Employment, Industrial Relations and Social Affairs: the directorate of the technicians... Let us take this as a good omen: may technology always know how to keep 'people first'!
Notes
1. Regards: http://www.regards.fr/archives/97/9707/9707res08.html
2. CM1 class at Piquecos (46): http://www.ac-toulouse.fr/piquecos; CM2 class at Villard de Lans (74): http://www.ac-grenoble.fr/vercors
Bibliography
Alberganti M. (1997) Le multimédia, la révolution au bout des doigts, Paris: Marabout/Le Monde.
Andler D. (1992) Introduction aux sciences cognitives, Paris: Gallimard.
Arnheim R. (1974) La pensée visuelle, Paris: Champs/Flammarion (Visual Thinking, 1969).
Balpe J. P. et al. (1995) Hypertexte et hypermédias, Paris: Hermès.
Barber B. R. (1995) Djihad vs. McWorld, New York: Times Books.
Barry J. (1995) Neurobiologie de la pensée, Lille: PUL.
Breton P. (1995) A l'image de l'homme, Paris: Seuil.
Bruillard E. et al. (dir.) (1996) Actes du séminaire Hypermédias, éducation et formation 1996, LIP6: université Pierre et Marie Curie Paris 6.
Changeux J. P. (1983) L'Homme neuronal, Paris: Fayard.
Churchland P. S. and Sejnowski T. J. (1992) The Computational Brain, Cambridge, MA: MIT Press.
Davalo E. and Naïm P. (1989) Des réseaux de neurones, Paris: Eyrolles.
Debray R. (1997) Vie et mort de l'image, une histoire du regard en Occident, Paris: Gallimard.
Delors J. (1994) 'Quelle éducation pour le 21ème siècle?', Magazine européen de l'éducation 16.
Edelman G. M. (1992) Biologie de la conscience, Paris: Odile Jacob.
Gardner H. (1989) The Mind's New Science: a History of the Cognitive Revolution, New York: Basic Books.
Ginet A. et al. (1997) Du laboratoire de langues à la salle de cours multi-médias, Paris: Hatier.
Gros E. (1990) L'ingénierie du vivant, Paris: Odile Jacob.
Johnson-Laird P. N. (1994) L'Ordinateur et l'esprit, Paris: Odile Jacob.
Karli P. (1997) Le Cerveau et la liberté, Paris: Odile Jacob.
Kerckhove D. de (1990) La civilisation vidéo-chrétienne, Paris: Retz.
Langues Modernes (Les) (1996) 'Le multimedia dans tous ses états', 1.
Levy S. (1997a) 'Man vs. Machine', Newsweek 12 May.
Levy S. (1997b) 'Big Blue's Hand of God', Newsweek 19 May.
Lockwood M. (1989) Mind, Brain and the Quantum: the Compound 'I', Oxford: Blackwell.
Lockwood M. (1997) The Independent 13 May.
Miquel C. (1991) La puce et son dompteur: mythologies modernes et micro-informatique, Paris: L'Harmattan.
Missa J. N. (1993) L'esprit-cerveau: la philosophie de l'esprit à la lumière des neurosciences, Paris: Vrin.
Negroponte N. (1995) Being Digital, New York: Knopf.
Neumann J. von (1992) L'ordinateur et le cerveau, Paris: La Découverte.
O'Donoghue M. (1993) 'Applications of electronic communication projects under investigation in further education', Teleteaching A-29.
Perrin M. (1992) 'De l'utilisation communicative des documents authentiques', Du linguistique au didactique, Actes du 11ème Colloque du GERAS 1990 de Bordeaux, Bordeaux: GERAS. 11–33.
Perrin M. (1995) 'Les langues de spécialité, facteur de progrès pédagogique'. In Budin G. (ed.), Proceedings of the 10th European LSP Symposium, Vienna, Austria: IITF Infoterm, vol. 1, pp. 47–83.
Quiniou P. (1987) Problèmes du matérialisme, Paris: Klincksieck.
Rabardel P. (1995) Les hommes et les technologies, Paris: Armand Colin.
Resnick M. (1994) Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds, Cambridge, MA: MIT Press.
Schneuwly B. and Bronckart J. P. (1985) Vygotsky aujourd'hui, Paris/Lausanne: Delachaux et Niestlé.
Serres M. (1991) Le Tiers instruit, Paris: François Bourin.
Sérusclat F. (1997) Rapport pour l'Office parlementaire d'évaluation des choix scientifiques et technologiques, Paris: Imprimerie nationale.
Smale S. and Blum L. (1997) 'Crossing the Quantum Frontier', New Scientist 26 April.
Thily H. (1996) 'L'apport des nouvelles technologies d'information et de communication dans la didactique de l'anglais de spécialité', doctoral thesis, Université de Savoie, Chambéry.
Toma T. (1997) L'enseignant face au multimedia, Paris: Martorana.
Trocmé-Fabre H. (1995) Né pour apprendre, Paris: ENS Saint-Cloud (7 videograms).
Vincent J. D. (1986) Biologie des passions, Paris: Seuil.
Vygotsky L. S. (1930) 'La méthode instrumentale en psychologie'. In Schneuwly B. and Bronckart J. P. (eds.), Vygotsky aujourd'hui, Paris/Lausanne: Delachaux et Niestlé, 1985.
Weidenfeld G. et al. (1997) Techniques de base pour le multimédia, Paris: Masson.
Wiener N. (1948) Cybernetics, New York: Wiley.
Wolton D. (1997) Penser la communication, Paris: Flammarion.
Michel P. Perrin is a university professor and directs the Centre régional interuniversitaire de formation en langues in Bordeaux. He is President of GERAS (Groupe d'études et de recherches en anglais de spécialité) and of RANACLES (Rassemblement national des centres de langues de l'enseignement supérieur).
Université Victor-Segalen Bordeaux 2
Email: [email protected]
ReCALL 10:1 (1998) 38–45
The language learner and the
software designer
A marriage of true minds or ne’er the twain shall meet?
Susan Myles
Middlesex University
In Memoriam
When Susan presented this paper at EUROCALL 97 she was extremely ill. As Susan’s PhD supervisor, I knew only too well what she was going through when she mounted the rostrum and nervously organised her notes, but I could not have imagined that in two short months her life
would come to an end. Such was Susan’s courage in her struggle against cancer that she gave few
outward indications of her suffering and only looked to the future, continuing to teach, to
research and to give her attention to her home and family.
My last conversation with Susan centred on this paper and the trials she was about to conduct
with her students in the new academic year. The paper is published here with the minimum of
editing. It stands as a tribute to Susan’s research but represents only a short summary of the vast
amount of data that she had collected so meticulously, and it only hints at the interesting findings that were beginning to emerge. It is likely, however, that the mass of notes that Susan has
left behind will yield further results.
Susan is greatly missed by her colleagues and students at Middlesex University and by all her
friends and family.
Graham Davies, Thames Valley University
Susan Myles describes a research project currently being undertaken in the field of computer assisted vocabulary learning (CAVL). The aims of the research are stated; the research methodology adopted is outlined; the modes of testing adopted are justified; findings of previous research experiments in the field are seen to provide some useful guidelines both for analysing the data and for conducting the second round of trials; some initial impressions gained from the data are tentatively given; and finally an attempt is made to anticipate the direction in which the results might lead.
Aims of the Project
The main aim of this research project is to
establish what design features need to be
incorporated into a CAVL (German) package
to maximise its effectiveness. The mental
activity of vocabulary acquisition – that of
German vocabulary in particular – is difficult, tedious and under-researched, and students’ rates of retention of German vocabulary are lower than they would like. CALL
software writers, especially if non-linguists,
appear to regard vocabulary as merely some
body of words to be thrown at the learner;
often the only problem they appear to see is
that of defining the corpus, thereby largely
ignoring the complex psycholinguistic
processes involved. Some research has been
conducted into these processes, but the findings have not generally been applied to the
design of learning and teaching materials.
Hence – as has been confirmed by initial
interviews with the subjects – CALL software often leaves the learners feeling that
their needs have been overlooked in the
design process. Consequently the software is
probably much less effective than it could
reasonably be expected to be. What appears
to be needed is a vocabulary teaching package the design of which takes these needs as
its very starting point.
The project takes the form of a case study
of a very small group of students of German
as a foreign language, and has gathered data
on the students as language learners. It is
essentially a qualitative experiment: it will not
attempt to generalise from this minute sample
to the general population, the language learner
per se or the language learner in a particular
language learning situation. The lack of
research hitherto conducted into the CAVL
learning process is considered sufficient justification to warrant a research project of this
nature; the value of the data is seen to lie in their ability to point the way to further research rather than in any ability to produce statistically meaningful generalisations. Since the group comprises just six students from the same language learning class, the sample has within it a number of common factors which
increase the uniformity of the subjects, such as
the way in which they have hitherto been
taught, the language learning habits they have
consequently acquired, the amount of German
they already knew and their motivation for
participation in this experiment.
CALL software trials
After the initial needs analysis, the students
trialled four CALL software packages each of
which attempts to teach German vocabulary
by a different method:
1. The German Master,1 a very primitive,
DOS-based authorable program which presents individual items of vocabulary out of
context although in semantic clusters, and
flashes them onto the screen at a regulatable speed. Each item is presented and
tested bilingually by means of an instant
translation into or from L1.
2. Vocab,2 which presents vocabulary items
once again in semantic clusters, here not as
isolated items but within a context-sentence, the bare shape of which (in the form
of a number of blanks) is then flashed up
in order to cue the item being tested. The
learner has no recourse to L1 in this package.
3. Fun with Texts,3 a straightforward text-reconstruction package, not specifically designed to teach vocabulary but one which can be used as a CAVL program. The program devises a variety of reconstruction
exercises around a passage of text which
contains the items of vocabulary to be
tested. The learner has no recourse to L1 in
this package.
4. Travel Talk,4 the most recent of the four
packages trialled and the only package of
the four to boast multimedia facilities.
Its vocabulary items are semantically clustered within a learner-centred approach which organises them according to how it sees the learner’s functions and notions; the package presents and tests each item bilingually in the form of a written or spoken translation.
The aim of each trial was to teach 20 items of
vocabulary, taken from a topic area which the
subjects would in any case be covering in the
coursebook but which they had not yet been
taught. Alongside them a control group was
formed, comprising students from the same
class as the sample, who were being taught the
same bodies of vocabulary but by non-CALL
methods. Both groups were tested on these
items at appropriate stages and the results of
the two groups are to be compared.
The data gathered from the trials were
based on the following for each of the four
packages:
a) Pre-trial vocabulary test (L1→L2), to
ascertain whether any of the items was
already known;
b) Software trial, including:
• initial presentation of vocabulary on
screen (by a variety of methods, according
to the package being trialled), L2→L1 in
the case of those packages which present
the vocabulary bilingually;
• post-presentation vocabulary test (L2→
L1 since this invariably produces higher
scores than L1→L2);
c) re-presentation of the 20 items (L1→L2);
d) vocabulary test immediately following
software trial (L1→L2);
e) retest one month after software trial (L1→
L2);
f) retest six months after software trial (L1→
L2).
Based on the data generated by the software
trials, there will be a second run of trials, using
of course a different set of students – not yet
exposed to the items to be presented – and trialling only one package, Gapkit.5 The authoring of this package and the methodology
adopted in the second trials will attempt to
incorporate as far as possible the recommendations suggested by the findings of the original experiment and of previous research
experiments in this field. Again the package
will aim to teach, test and finally retest a body
of 20 items, using the traditional translation
mode of testing, and a control group will be
tested on these same 20 items having been
taught them by non-CALL methods. The final
test results of both groups will be compared, in
order to suggest whether it appears likely that
using an appropriately designed and administered CAVL program can be seen to increase
the effectiveness of these learners’ efforts to
learn vocabulary.
Methods of testing vocabulary
There is a range of methods available for the
testing of vocabulary, some of which may have
produced richer data and might therefore have
appeared more attractive than the traditional
translation mode as a research method. Previous
research experiments have used, for example:
a) word association tests,
b) yes/no word recognition tests, or
c) VKS (vocabulary knowledge scale) tests.
Each of these will now be briefly considered,
by way of a justification of its rejection as a
research testing mode.
a) A number of experiments have been conducted using word association tests in
order to ascertain what a learner’s mental
lexicon might look like.6 Such tests apparently have much to recommend them, in
that they are extremely simple to administer; they require just two players: one to
call out single words, the other to respond
to each of these words with the first word
that enters his or her head. However, to
quote Meara, “tried and trusted tools which
work for L1 situations are rarely wholly
appropriate for L2 situations, and word
association research is clearly one of these
cases.”7 In any case, “learners’ vocabularies are by definition in a state of flux, and
not fixed; learners often tend to give idiosyncratic responses; the indications are that
semantic links between words in the
learner’s mental lexicon are somewhat tenuous – all these considerations would lead
one to suspect that learners’ responses
could be considerably less stable than the
response patterns of native speakers.”8
b) Yes/no word recognition tests are based on
lexical decision tests that have been used
extensively by cognitive psychologists
attempting to model the mental lexicon.9
They work by measuring the subjects’
responses to a number of real words and a
smaller number of non-words on the
screen. Subjects have simply to indicate
whether or not they know the word. However, such tests are intended as a vocabulary measurement instrument. This project
is not attempting to estimate the size of the
learner’s mental lexicon, but instead is
seeking to discover at least some of the features of the most effective virtual teacher,
by attempting to teach a body of vocabulary and analysing the resultant data.
c) The VKS10 comprises the following five
points, treating each point on the scale as a
‘state’ of the learner’s current knowledge.
1. I don’t remember having seen this word
before.
2. I have seen this word before but I don’t
know what it means.
3. I have seen this word before and I think
it means...
4. I know this word: it means...
5. I can use this word in a sentence, e.g....
Thus the learner is tested on a number of
vocabulary items by being asked to indicate the current state of his/her knowledge
of each of the items using the VKS. Whilst
the ease of application of this scale might
appear tempting for use in a research
experiment, it was rejected for this project
for a number of reasons. First, from vocabulary acquisition research it appears most
likely that L2 vocabularies are relatively
unstable, and that words constantly move
in and out of a number of different states.
In fact, using the scale it is even possible
to track the movement of individual items
between states.11 Second, it relies on the
subjects’ being aware of the state of their
knowledge of a particular word, whereas
in practice learners’ own assessment of
their knowledge is not always wholly reliable. Finally, a knowledge test of this kind
is based on the ability of the subjects to
recognise familiar or unfamiliar words, but
does not test their productive knowledge.
True, it begins to test their knowledge of
its appropriate usage (see point 5 on the
VKS scale), but it always operates from
the essential trigger of the L2 item, hence
it cannot test whether the subject would
have been able to produce the item – correctly and appropriately – from a cue
devoid of L2 hint – that is, a cue in L1.
It is doubtful whether any test procedure
adopted can ever be other than a compromise,
since there is so much more involved in
‘knowing’ a word than any of these test modes
can ever hope to be able to test. In his attempt
to answer the question “What is it to learn a
word?”, Ellis reminds us: “We must learn its
syntactic properties. We must learn its place in
lexical structure; its relations with other
words. We must learn its semantic properties,
its referential properties, and its roles in determining entailments. We must learn the conceptual underpinnings that determine its place
in our conceptual system. Finally, we must
learn the mapping of these Input/Output specifications to the semantic and conceptual
meanings.”12 The complexities of the word-learning process are thus not to be underestimated, but have hitherto not always been
recognised in the testing of word knowledge
in related research experiments.
In view of these criticisms of various test
modes, the traditional translation mode was
adopted in this experiment since this was
taken to be the most reliable way of testing
whether the subjects had actually ‘learned’ the
word to the extent of being able to produce it
correctly and appropriately. In addition, in at
least some cases these alternative modes of
testing can be seen to be inappropriate for the
package in question, as that package was perhaps not designed to produce the richer data
which these other methods may have produced. The German Master,13 for example,
has the traditional translation mode of testing
and indeed of presentation at the very heart of
its rationale, hence it was not considered
appropriate to use more innovative modes of
testing of its vocabulary.
Data analysis
Research findings in the fields of CAVL, psycholinguistics and SLA can be seen to guide
the course of this experiment by indicating
what aspects of the data merit close analysis.
For example, it appears that the following
could have a significant bearing on the degree
of success achieved by subjects using the four
programs trialled:
1. Role of context in comprehension
2. Surface or deep processing of learning
3. Make-up of list of items to be taught (e.g.
syntactic class)
4. Subject’s selection pattern from within list
of items
5. Error analysis
6. Learning strategies or styles
7. Comparison between data produced from
L2 → L1 testing and from L1 → L2 testing
8. Word associations.
Initial impressions of the data
Whilst some modes of presentation suggested, by their immediate appeal to learners, that they would transpire to be more effective than others, none of the four packages proved to be
anything like as effective as its marketing
would claim, if the final test results only are
considered. The impressions of the students
trialling the software suggest that the packages
are not doing their job particularly successfully. A combination of a thorough review of
each package trialled and consideration of the
findings of previous research experiments suggests certain reasons for this lack of success:
1. The design of the package has not been
preceded by a formal needs analysis. The
methodology adopted in the trials was in
each case that recommended by the
authors of the package, but the design of
CALL software is all too rarely based on
psycholinguistic or even pedagogical considerations; accordingly the data are suggesting that the recommended methodology might not be fulfilling the potential of
the software. For example, the authors of
The German Master14 recommend setting
the speed of presentation at three seconds
per item, whereas the subjects found that
in practice this was barely sufficient time
in which even to read the item, let alone
for it to be stored in the memory. Some
research has been conducted into whether
the storage of items in the memory is a
spontaneous process triggered by having
the items flashed in front of the eyes, or
more likely to be the result of a conscious
effort on the part of the learner15; the data
collected in this experiment would suggest
that a very low retention rate indeed is
attained without any conscious effort being
made to learn an item, thus there is a
requirement for an interval of time in
which that conscious effort can be made.
2. The impact of the package on the quality of
learning is such that any learning which is
promoted is of an essentially surface
nature. Indeed, deep learning – the quality
of which is indisputably superior to surface
learning – is actively precluded by the
methodological design. To have any
chance of retaining the vocabulary, the
subjects need to have learned the items by
deep processing, and therefore there is a
need for deep learning to be promoted.16
The vocabulary is in some cases presented
out of context, a mode of presentation
hardly favoured by the advocates of the
communicative approach, however defined: “If the communicative approach
tells us anything, it tells us that language
and language use is context-dependent.”17
However, it may be that the very program
which presents the vocabulary, not in contextualised sentences but in the form of a
list of items – maybe even designed for
rote-learning – transpires to yield the highest score in the post-trial tests or proves
most popular with the students. In fact
none of the four packages trialled presents
the vocabulary entirely decontextualised:
all are connected by semantic clustering at
least. It may appear obvious that such clustering would be the absolute minimum
requirement in the presentation of items
for vocabulary teaching, but some research
findings suggest very strongly that students actually have more difficulty learning new words presented to them in
semantic clusters than they do learning
semantically unrelated words.18 These
findings are based on interference theory,
as it evolved during the first half of the
century, which hypothesised that as similarity increases between targeted information and other information learned either
before or after the targeted information,
the difficulty of learning and remembering
the targeted information increases likewise.19 Such data might lead one to question the wisdom of presenting L2 students
with their new vocabulary organised for
them into clusters of semantically or syntactically similar words. Along the same
lines, more recent psychologists have
posited a ‘distinctiveness hypothesis’.20 This relates ease of learning to the distinctiveness (non-similarity) of the information to be learned. Their data strongly suggest that distinctiveness is a crucial factor
in the learning of new information and
that, as the distinctiveness of the information to be learned increases, so does the
ease with which that information is
learned. Can it be, then, that, far from
facilitating learning, the presentation of
new vocabulary items to L2 learners in
clusters impedes it? It appears that the
ideas and evidence presented by a significant number of researchers who have
explored learning and memory are being
ignored in software design.
3. In some cases the vocabulary is presented
in the context of a short passage of text,
but without a title. The title could be playing a role in promoting effective comprehension and subsequent retention of the
new vocabulary, and this role will be considered in the data analysis of this research
project.
4. The design process of the package is
apparently not informed by theories of
learning or cognitive styles. Whilst there is
a body of research which is useful in this
connection,21 learning styles are notoriously difficult to discern and describe reliably. Liddell22 questions whether cognitive
theory can help teachers to teach and
learners to learn. In fact, even in SLA
research which claims to be classroombased, the body of research on how teachers teach far exceeds that on how learners
learn. The reason he posits for this is that
“learning is not generally directly and
immediately observable [...] learning
emerges only after some unspecified time
and may not be produced by one specifically identifiable event, but rather by the
cumulative effect of a number of events.”23
The task of needs-related CALL software
design is thus much more difficult than
might at first appear.
5. The apparently haphazard fashion in which
some students approach the task of vocabulary learning would suggest that a number of learners need to learn how to learn.
A psychologist’s view is useful in this connection: “Studies of vocabulary acquisition
from reading demonstrate that neither dictionary look-up nor direct instruction is
necessary for vocabulary acquisition [...]
Successful learners use sophisticated
metacognitive knowledge to choose suitable cognitive learning strategies appropriate to the task of vocabulary acquisition.
These include:
• inferring word meanings from context;
• semantic or imagery mediation between
the L2 word (or a keyword approximation)
and the L1 translation; and
• deep processing for elaboration of the
new word with existing knowledge.”24
Ellis distinguishes between the implicit
and the explicit learning process, and
argues that success lies in acquiring cognitive strategies for acquiring unfamiliar
lexis: “Metacognitively sophisticated language learners excel because they have
cognitive strategies for inferring the meanings of words, for enmeshing them in the
meaning networks of other words and concepts and imagery representations, and
mapping the surface forms to these rich
meaning representations. To the extent that
vocabulary acquisition is about meaning, it is an explicit learning process.”25 It might be possible to incorporate within a CAVL package modes of presentation or exercises the rationale of which promotes deep learning, such as will facilitate the adoption of successful learning strategies for the learners’ future use. Certainly psychologists such as Ellis are of the firm opinion that such skills are teachable: “Learners can usefully be taught explicit skills in inferencing from context and in memorising the meanings of vocabulary.”26
6. The design features of the package do not
appear to be conducive to retention. For
example, it is emerging in the initial data
analysis that the absence of the dimension
of sound appears to be impeding the learners in their attempts to learn vocabulary.
Thus it may transpire that the multimedia
package is the one which has the greatest
appeal or proves the most effective as a
vocabulary teaching tool.
7. The nature and effect of feedback on progress is a factor which is often overlooked, partly as a result of the absence of a formal needs analysis. Learners’ needs could
and should be ascertained, and Help
designed accordingly to accommodate them.
8. The testing procedures adopted, both
within the programs and at the end of the
first round of trials, have been shown in
some research experiments to be unreliable, giving students a false impression of
the amount of vocabulary learned.
9. In some cases insufficient thought has been
given to the way in which the learner is to
discern the correct meaning of a vocabulary item in order for efficient learning to
take place. Research is divided on the issue
of whether an L1 translation is necessary –
or even desirable – for each item being
taught, and an experiment conducted into
the validity of monolingual vocabulary
presentation produced some surprising
results.27 The subjects were asked to read Anthony Burgess’ A Clockwork Orange, a
novel containing 241 totally unfamiliar
words drawn from a Russian-based slang
known as Nadsat. There is no glossary
available for these words, but each of them
is repeated an average of 15 times within
the novel. A few days after reading it the
subjects had a surprise vocabulary test
including 90 Nadsat words sprung upon
them, and within that short time and under
those conditions considerable acquisition
had apparently taken place: subjects were
found to have acquired some 45 new
words. Are these results to be interpreted
as an indication that L2 vocabulary could
or should be presented monolingually and
within an extended, authentic context such
as a long reading passage or novel? Certainly opinion is divided here: there are
research findings to refute this on the basis
that bilingual presentation (availability of a
glossary in some form) produces higher
scores.28
Future directions
The bodies of data generated by the second
round of trials will be analysed and a comparison will be made with the data generated by
the control group. If CAVL software can be
seen to appear more effective as a vocabulary
learning tool than anything tried hitherto, specific recommendations will be made to a computer programmer and software designer as to
how CAVL courseware might be designed and
administered in order to maximise its effectiveness as a vocabulary teaching tool for certain types of learner. If, however, the limitations of the computer itself as a teaching
medium appear to render it unsuitable for the
efficient learning of L2 vocabulary, the following questions will be asked:
1. Where exactly were the shortcomings of the
computer as a teaching medium seen to lie?
2. Realistically, what potential is there for any
improvement in its efficiency as a teaching
medium?
References
1. The German Master, Version 2.0 (1993), Kosmos Software Ltd: Dunstable, Beds.
2. Vocab (Wortspielerei), Version 2.0 (1994), Wida
Software: London.
3. Fun with Texts, Version 2.1e, (1992), Camsoft:
Maidenhead, Berks.
4. Travel Talk (1994), Libra Multimedia Ltd:
Windsor, Berks.
5. Gapkit (1996), Camsoft: Maidenhead, Berks.
6. Meara P. M. (1983) ‘Word associations in a foreign language’, Nottingham Linguistics Circular 11 (2), 29–38.
7. Ibid, 5.
8. Ibid, 35.
9. Shillaw J. (1996) ‘The application of Rasch
modelling to yes/no vocabulary tests’, Vocabulary Acquisition Research Group Virtual
Library, University of Wales, Swansea.
10. Wesche M. and Paribakht T. S. (1996) ‘Assessing second language vocabulary knowledge:
depth versus breadth’, The Canadian Modern
Language Review 53 (1), 28.
11. Meara P. M. and Sanchez I. R. (1997) ‘Matrix
models of vocabulary acquisition: an empirical
assessment’, Vocabulary Acquisition Research
Group Virtual Library, University of Wales,
Swansea.
12. Ellis N. (1995) ‘Vocabulary acquisition: psychological perspectives’, The Language
Teacher 19 (2), 13.
13. Op. cit.
14. Ibid.
15. Liddell P. (1994) ‘Learners and second language acquisition: a union blessed by CALL?’,
CALL 7 (2), 163–173.
16. Goodfellow R. and Powell C. (1994)
‘Approaches to vocabulary learning: Data from
a CALL interaction’, ReCALL 6 (1), 27–33.
17. Laurillard D. (1991) ‘Principles of computer-based software design for language learning’,
CALL 4 (3), 141.
18. Tinkham T. (1993) ‘The effect of semantic clustering on the learning of second language
vocabulary’, System 21 (3), 371–380.
19. Crowder R. G. (1976) Principles of learning
and memory, Hillsdale, NJ: Lawrence Erlbaum.
20. Hunt R. R. and Mitchell D. B. (1982) ‘Independent effects of semantic and nonsemantic distinctiveness’, Journal of Experimental Psychology: Learning, Memory and Cognition 8 (1),
81–97.
21. See for example: Clarke J. A. (1994) ‘Cognitive
style and computer-assisted learning: problems
and a possible solution’, Alt-J 1 (1), 47–59;
Manning P. (1993) ‘Methodological considerations for the design of CALL programs’. In
Hall A. and Baumgartner P. (eds.), Language
learning with computers, WIL. 76–101.
22. Op. cit., 167.
23. Ibid., 168.
24. Ellis, op. cit., 14.
25. Ibid., 15.
26. Ibid., 16.
27. Saragi T., Nation I. S. P. and Meister G. (1978)
‘Vocabulary learning and reading’, System (6),
70–78.
28. Grace C. (1996) ‘Effects of the first language
on the retention of second language vocabulary’, CALICO 96 unpublished conference
paper.
CORRECTIONS
In ReCALL 9 (2), in the article on page 8 the name of Paul Baker at the University of
Lancaster was mis-spelled. Apologies for this.
In the same issue, in the CALICO 97 report on page 64, Matthew Fox’s affiliation was given
as University of Southampton, whereas Matthew is at Southampton Institute.
Vol 10 No 1 May 1998
ReCALL 10:1 (1998) 46–52
Does computer-mediated
conferencing really have a
reduced social dimension?
Tricia Coverdale-Jones
University of Lincolnshire and Humberside
This paper looks at computer-mediated conferencing (CMC) in the international arena, and considers
whether culturally influenced behaviour has an effect on communication online. It considers indicative areas of cross-cultural misunderstanding drawn from research into management communication, and also from research into gender differences in posting styles on newslists and in ‘netiquette’ guidelines. The results from a small sample of questionnaires exemplify the cultural attitudes
towards learning of a UK-based group of respondents. The question is raised of whether the ‘reduced
social dimension’ of CMC allows participants in a conference to overcome social barriers, or whether
the lack of social clues present in face-to-face interaction leads to greater confusion.
Introduction
When we have experience of computer-mediated conferencing (CMC) and email in the
international arena, we are inclined to consider
whether culturally influenced behaviour has
an effect on communication. In this paper, I
shall attempt to show that communication
style online is affected by cultural factors just
as in any face-to-face, telephone or written
fields of communication, though possibly to a
lesser extent than in face-to-face interactions.
CMC includes email and conferencing systems, but also newsgroups and lists on the
Internet. The distinctions between these two are becoming more and more blurred as email
acquires more conference-like features. All
these variations on CMC are tools which can
be harnessed to assist language learning.
I shall refer to previous research which has
indeed shown that there are observable differences in posting styles according to gender,
i.e. that CMC is not a neutral or culture-free
arena. In the Management field, research into
cross-cultural differences has provided a
framework within which we can analyse types
of difference, and the basic assumptions from
which misunderstandings arise. The question
of whether online communication is really a
field in which cultural or other differences
simply fall away will be a primary concern of
my paper. Cultural differences are significant
as affective factors in the learning environment, and ones which teachers or facilitators
need to be aware of. The teacher or facilitator
may be unaware of these potential areas of
difficulty, as indeed may the learner.
1. Definition/description of areas of difference in teaching and learning
From his article ‘Cultural Differences in Teaching and Learning’ (Hofstede 1986) we can take
the idea that cultural values and behaviours
may be reflected in learning styles, indeed that
teaching and learning play an important role
as a means of transmitting cultural values.
Hofstede refers to the teacher-student interaction as one of the archetypal role-pairs in any
society:
“Not only are these role patterns the products of
a society’s culture, they are also the device par
excellence by which that culture itself is transferred from one generation to the next” (Hofstede 1986: 302).
I shall here briefly summarise Hofstede’s categories of cultural difference, to which I shall
be referring later in this paper. In his analysis
Hofstede categorised cultural differences into
four fields; his findings were based on an
extremely large sample of 116,000 questionnaires, administered to managers from forty
countries. The four areas he identified were:
• uncertainty avoidance
• power distance
• individualism/collectivism
• masculinity/femininity.

These four factors are defined in Table 1.

Table 1 Hofstede’s four factors (based on Hofstede 1986: 307–8)

Power distance “defines the extent to which the less powerful persons in a society accept inequality in power and consider it normal.”

Uncertainty avoidance “defines the extent to which people within a culture are made nervous by situations which they perceive as unstructured, unclear or unpredictable, situations which they therefore try to avoid by maintaining strict codes of behaviour and a belief in absolute truths.”

Individualism/collectivism refers to “the extent to which a person looks after his/her own interest and the interest of his/her immediate family. Collectivist cultures assume that any person … belongs to one or more ‘in-groups’ from which he/she cannot detach him/herself.”

Masculinity/femininity refers to the differentiation of roles between the sexes. In a more masculine society the distinctions between these roles are more clearly differentiated. “In both masculine and feminine cultures, the dominant values within political and work organisations are those of men.”

2. How these may impinge on CMC

There are many assumptions associated with CMC practice which may also be the assumptions of many CMC practitioners, who may represent a largely US-based, male-dominated group (Herring 1994). These assumptions include a view of freedom from constraint in online communication (which I shall refer to as the ‘anarchic’ view), viz. public discussions of whether the Internet should be censored. Central to the ‘anarchic’ view is a belief that students will participate freely in online discussion without much teacher input, and be willing to share their ideas in a public forum.

2.1 Power distance and CMC
In applying this expectation in a high power
distance culture, however, we are open to the
pitfalls provided by cross-cultural assumptions. Consider Goodman’s (1994) description
of the roles of teacher and learner in a high
power distance society in higher education:
“High Power Distance societies are characterized by teacher-centred education, in which the
teacher transfers wisdom to students. Information flow is from the teacher to the student and
students are not expected to initiate communication or speak up unless called upon to do so.”
(Goodman 1994: 138)
When we consider this factor we must challenge the assumption that moderators of a computer conference, or even a seminar, can apply
a ‘hands-off’ approach in leaving the interaction to their students with limited input or
encouragement from the tutor/moderator. This
approach will meet with bafflement on the part
of students from some cultures, especially
those from outside Western Europe or North
America (Hofstede 1994), who expect the
teacher to determine the learning content and
path, and certainly will not expect to determine
the content of the online course and assignments themselves (cf. McConnell 1992).
2.2 Uncertainty avoidance and CMC
The related factor of uncertainty avoidance
similarly affects the process of learning in this
context as learners with high uncertainty
avoidance will, according to Hofstede, prefer
structured learning situations with precise
objectives, detailed assignments, and a schedule. The usual way to deliver knowledge to the
learner is for the lecturer to give a lecture,
with no interruptions or disagreements from
the students (Goodman 1994). This also
reflects a greater power distance. Thus, in a
CMC context a student could hold back her
contribution until she felt that it was her ‘turn’
to speak, i.e. until the moderator or a leading
member of the group had given their input first, or until guidelines on when and what kind of contribution was expected of her had been announced. Indeed, we as language
teachers recognise this situation from our own
experiences of mixed-nationality groups, not
only in societies with high uncertainty avoidance and not only in CMC.
In my own experience of an online course,
a high dropout rate amongst predominantly
British participants could have been affected
by such factors; we learners may have expected the course tutors to deliver content to us rather than to expect us to work things out for ourselves. Britain has a weak uncertainty avoidance score on Hofstede’s scale; however, participants can still show individual variation in behaviour and expectations despite the
‘same’ cultural background. Some may prefer
a highly structured, teacher-centred delivery of
materials, without the ‘chore’ of taking
responsibility for their own learning path in a
learner-centred approach, and this despite the
fact that the course may still be structured in
terms of objectives, tasks and deadlines.
2.3 Individualism and CMC
The third factor of cultural difference which
may affect learning in the real or the electronic
classroom is that of individualism-collectivism. Once again, this factor is not completely separate from the previous two, in that
the same or similar behaviour illustrates its
presence. To quote Goodman (1994) again:
“Individuals will find more satisfaction working
with a group for a collective goal rather than
working individually for their own achievement.
Students are not expected to draw attention to
themselves by calling out answers.” (1994: 138)
The added impetus of working towards a collective goal can be strengthened in the electronic forum, which facilitates the sharing of
ideas. What we do not yet know is whether
social impulses are weakened by lack of eye
contact, gesture and ‘atmosphere’.
It will take a certain amount of courage on
the part of the student to ‘speak up’ in a computer conference, if she/he is not used to holding the floor, i.e. the student may be unsure
who has the right to ‘speak’ at any given time.
However, in the electronic forum, this may
cause fewer problems than in face-to-face
interactions. Participants can always ‘say’ as
much as they want to without fear of interruption or of taking another contributor’s turn.
The question of turn-taking, already referred
to in terms of power distance, is regarded,
somewhat uncritically, by many enthusiastic
CMC practitioners as a problem which is not
significant online. In my own experience,
some participants did hold the floor more than
others by sending longer and more frequent
postings; however, I felt this to be unimportant, which I certainly would not have felt in a
face-to-face seminar.
2.4 Masculinity and CMC
It is possible that a certain type of message can
deter some participants who eventually drop
out of the conference. Some research which
analyses group interaction has been conducted
at Lancaster University (Hardy, Hodgson and
McConnell 1993) on turn-taking by men and
women in CMC, and the experience of online
talk. This goes some way towards answering
the question: does CMC really have a reduced
social dimension? A quantitative approach in
this study showed the female participants actually sending more messages than males (in
contrast to some previous studies) and generating roughly an equal number of words. However, a qualitative approach which involved
interviews with the women participants and a
textual analysis of messages and reactions to
them (including no response) showed that participants recognised a difference in style
according to gender. The authors conclude that
the difference is in women’s ‘rapport’ talk and
men’s ‘report’ talk, citing the analysis proposed by Deborah Tannen (Tannen 1992). A
female participant, who had expressed online
her anxieties about contributing, stopped posting messages to the conference when the sympathetic males in the group tried to analyse
why she felt so insecure. This analytical
approach actually seems to have deterred her
from making further contributions.
This brings me to consider other aspects of
online behaviour which suggest that there is a
considerable social dimension in CMC and
email. The fourth factor of Hofstede’s analysis, that of masculinity/femininity, describes
more masculine cultures as differentiating the male and female roles more explicitly; in these cultures the dominant values are masculine, leading to greater societal emphasis on achievement and competitiveness. The research on gender online has
tended to focus on US-based lists; masculinity
scores for the USA are fairly high in Hofstede’s study, although Britain and Ireland score
slightly higher.
3. Gender differences in online behaviour

I shall refer briefly to Susan Herring’s (1994, 1996) research into gender differences in postings to a number of newsgroups on the Internet. This research has shown that online communication is not actually taking place in a context-free arena without cultural ‘baggage’. She has analysed two areas which reflect gender differences in nine lists: the ‘netiquette’ guidelines and the behaviour reflected in messages posted. Her analysis of the messages posted shows a continuum along two scales, that of attenuative/supportive and adversarial.

The adversarial style is, at its most extreme, represented by ‘flaming’. Herring notes that “the overwhelming majority of participants exhibiting this style are male” (1996: 118). When these styles are analysed by gender there is certainly an overlap, but a clear differentiation at the ends of the continuum: two overlapping bell curves, as in Figure 1.

Figure 1 Distribution of adversarial and attenuative/supportive posting styles by gender (from Herring 1996)

Herring notes the considerable overlap shown here; however, the extreme ends of the continuum are evidence which contradicts the “myth that gender is invisible on computer networks” (1996: 120). The fact that many postings contain elements of both styles does not contradict her analysis that online communication is not gender-free. Herring also analyses the data from responses to her questionnaire in terms of positive and negative politeness, following Brown and Levinson’s model (Brown and Levinson 1987), and finds a similar range of gendered views in the responses.

Regarding ‘netiquette’ guidelines, she finds that these reflect the values of the dominant group of a particular list. Many lists send out netiquette guidelines which promote an agonistic debating style, which is seen as hostile by many of the women in her survey who do not differentiate between hostile and non-hostile disagreement. She found both in the
questionnaire responses from men and the
netiquette guidelines an emphasis on what she
calls anarchic values: “freedom from censorship, candor, and debate” (1996: 127).
We can see here a clear differentiation
according to the predominant group on the list.
Again, supposedly ‘neutral’ territory is culture-bound. The two cultures of men and
women (Tannen 1990) are different in their
emphasis on positive or negative politeness.
So we may assume that CMC and other forms of online communication are affected by cultural communication styles as much as any other type of communication. The fact that many
early adherents of CMC have acclaimed the
medium as neutral territory may possibly be
due to the absence of norms by which to
behave and assess behaviour. The question
remains, however, whether there is a reduced
social dimension online rather than no social
dimension.
4. Research methodology/questionnaire
The questionnaire which was presented at
EUROCALL 96 remains the basis of the
research, but difficulties in getting returned
questionnaires have meant that the results are
not numerous enough or varied enough to be
conclusive. I shall refer here to Section B of
the questionnaire which deals with qualitative
responses, as there are not enough responses
(Section C) for quantitative analysis. All the
respondents so far are graduates from the UK.
However, from the results so far we can see
that the answers are as would be predicted by
Hofstede’s and others’ categories for a Northern European country like Britain.
The first question asks what behaviour is
most annoying in a computer conference. The
responses reflect a concern with task achievement, associated with masculinity, where getting the task completed with a minimum of
time or fuss is the goal or basic assumption.
Thus the respondents report their dislike of
long messages, ambiguous thread headings,
frivolous or irrelevant behaviour, also of
50
“people responding in a direct manner to one
person but copying to everyone where this is
inappropriate. People picking up a personal
communication which others have not seen
and continuing the discussion publicly.”
The second question asks what behaviour
is most appreciated from other participants. A
similar concern with achievement rather than
social interaction is shown in some of these
answers, which express appreciation for short
messages, prompt and considerate replies and
acknowledgement, serious debate, useful references and quoting relevant paragraphs from
the previous communication so that it is easy
to pick up the thread.
When asked what changes they would like
to see in the way people behave in computer
conferences, they wished for more focused
discussion, more discipline in following the
thread or topic or in re-naming of a thread
when the message has switched topics, less
Americanisms (sic) and more serious viewpoints and clearer guidelines on how to
behave ‘inclusively’ where appropriate and
not otherwise.
The responses to the fourth question, on the role of the tutor in CMC, accept all the norms of CMC which had no doubt been explained to these users as the rules of the game; roles suggested included
summarising, moderating, focusing, introducing a provocative or stimulating thread, closing down drifting discussion and similar functions to direct the discussion. The assumption
here is that the main discussion takes place
among the participants, and that input from the
tutor is as a guide rather than as the source of
all knowledge.
This can also be seen in the answers to the
fifth question on the role of the learner. The
learner should engage in discussion or debate,
introduce new relevant ideas from their own experience, be able to benefit from a wide range of
views and be prepared to share knowledge,
questions, doubts and to learn from peers.
Learner-centred education is taken as the norm
by these participants; it is simply not questioned or discussed.
Amongst the advantages cited of using
CMC, compared to other methods of learning,
we find a similar belief in the student input
and contribution to the learning process. The
respondents say that CMC facilitates the discussion of problems, speeds up communication and feedback, and is a useful way to order thoughts and express issues that there may not be time for normally; that it enables knowledge to be shared openly and prepares individuals to ‘speak up’ where in seminars they might be too shy or less confident; and that the asynchronicity of text-based CMC facilitates participation when the time is right for individuals.
Only when asked about the disadvantages
do respondents refer to the social aspects which
may be lacking. They mention loss of context,
lack of spontaneity, “it can fall flat if people
lose interest. Some people react badly to the
computer.” Also mentioned is a lack of paralanguage – body language, etc., loss of personal
control/interaction, and CMC is seen as “more
problematic for group project work due to
asynchronicity than face-to-face collaboration.”
5. Conclusions
The state of research at present is far from
complete. At EUROCALL 96 I pointed out the
need for data, for awareness and for
learner/participant training to be culturally
sensitive online. It remains unknown whether
the CMC context will improve communication in areas of cross-cultural difficulty. Some
research in other areas shows that there are
indeed differences in communication style
according to gender, i.e. that CMC is not a
completely neutral forum. The reduced social content cited by Day (1993) may simply be invisible to users from Western Europe and
North America. What we can say at this stage
is that the limited findings so far of the qualitative part of my research appear to support
the analyses in the field of cross-cultural communication in management.
As access to CMC and online communication spreads across the world, particularly to
other continents, we may encounter instances
where misunderstandings and the false attribution of motives arise. If we are able to
encounter these with awareness, and to impart
that awareness to our students involved in
international exchanges, we may find that
CMC may overcome the barriers to some
extent.
References
Brislin R. W. and Yoshida T. (eds.) (1994) Improving Intercultural Interactions: Modules for Cross-Cultural Training Programs, Thousand
Oaks: Sage Publications.
Brown P. and Levinson S. (1987). Politeness, Cambridge: Cambridge University Press.
Cherny L. (1994) Gender Differences in Text-Based
Virtual Reality: Proceedings of the Third
Berkeley Conference on Women and Language,
Berkeley Women and Language Group, Berkeley: WITS. http://www.eff.org/pub/Net_culture/
Gender_issues/cherny.article
Coverdale-Jones T. (1996) ‘Cross-Cultural Issues in
CMC’, presentation at EUROCALL ‘96,
Berzsenyi Dániel College, Szombathely, Hungary.
Coverdale-Jones T. (1997) ‘Cross-Cultural Issues in
CMC’, Actes/Proceedings of the ITC Conference, Paris, January 1997, Journal du Multimédia (forthcoming).
Day M. J. (1993) Networking: The Rhetoric of the
New Writing Classroom. CMC RHETORIC file
available in TESL-L archives.
Falk-Bano K. (1996) Intercultural Conflicts in
British-Hungarian and American-Hungarian
International Organisations: SIETAR 96.
Goodman N. R. (1994) ‘Intercultural Education at
the University Level: Teacher-Student Interaction’. In Brislin and Yoshida (op. cit. 1994).
Hardy V., Hodgson V. and McConnell D. (1993)
Computer Conferencing: a new medium for
investigating issues in gender and learning.
Unpublished paper from Centre for the Study of
Management, Lancaster University.
Herring S. (1994) ‘Politeness in Computer Culture:
Why women thank and men flame’. In
Bucholtz M., Liang A. C., Sutton L. and Hines
C. (eds.), Cultural Performances: Proceedings
of the Third Berkeley Women and Language
Conference, Berkeley Women and Language
Group, Berkeley: WITS.
Herring S. (1996) ‘Posting in a Different Voice:
Gender and Ethics in Computer-Mediated Communication’. In Ess C. (ed.), Philosophical Approaches to Computer-Mediated Communication, Albany: SUNY Press, 115–145.
Hofstede G. (1986) ‘Cultural Differences in Teaching and Learning’, International Journal of Intercultural Relations 10, 301–320.
Kaye A. R. (ed.) (1991) Collaborative Learning
through Computer Conferencing, Berlin:
Springer-Verlag.
McCreary E. and Brochet, M. (1991) ‘Collaboration in International Online Teams’. In Kaye
(op. cit. 1991).
Mulvaney B. M. (1994) ‘Gender Differences in
Communication: An Intercultural Perspective’,
Online Chronicle of Distance Education and
Communication 6 (1), November.
http://www.eff.org/pub/Net_culture/Gender_issues/
Riel M. M. and Levin J. A. (1990) ‘Building elec-
52
tronic communities: successes and failures in
computer networking’, Instructional Science
19, 145–169.
Trompenaars F. (1995) Riding the Waves of Culture, London: Nicholas Brealey Publishing.
Tricia Coverdale-Jones is Senior Lecturer in English as a Foreign Language and German at the University of Lincolnshire and Humberside. She has a long-standing interest in CALL and its application in the classroom. Other interests include
pronunciation/intonation, online learning and
cross-cultural communication.
Email: [email protected]
ReCALL 10:1 (1998) 53–58
Virtual language learning:
potential and practice
Uschi Felix
Monash University
How realistic is it to achieve good quality language learning and teaching using technology? This
paper looks at the advantages and disadvantages of using CD-ROMs and Web-based materials in the
quest for providing meaningful interactive language learning strategies to students. It will demonstrate
that the advantages outweigh the disadvantages, at least in terms of pedagogy, and that there is no
need to reject technology, despite difficulties and frustrations, because the latest developments, especially on the WWW, have significantly increased the potential for even more authentic interaction in the classroom. Illustrations from our Vietnamese course are included.
Introduction
The learning and teaching of languages is a
difficult business at the best of times, involving as it does close contact between teacher
and students in meaningful interaction. That
difficulty is underlined by the temptations of
technology. If we agree with Brown (1994:
159) that interaction is “the heart of communicative competence”, then we cannot avoid
the question of how a machine can offer a type
of learning that is already difficult to foster in
the classroom, and how the desired level of
interaction can be produced in a medium that
is not yet intelligent.
The notion of interactive software has taken
a big step forward recently with the arrival of
multimedia applications incorporating video,
sound and text, and offering information with
which the user can ‘interact’. The trouble is that
the sort of interaction offered by pointing and
clicking within a rigid program is still primitive
and a far cry from what was meant by the proponents of interactive language teaching. Meaningful interactive language teaching can never
be one-way or teacher centred (Rivers 1987),
but, according to Wells (1981: 29), involves
“the establishment of a triangular relationship
between the sender, the receiver and the context
of situation”. Just as in the more recent taskbased approach, the dominant strategies are
group work and collaborative activities, with
student tasks reflecting real-life experiences,
personalities and beliefs. Achieving this is a
challenge in any teaching environment, and particularly challenging for machines.
The basic question therefore is whether
technology offers a real prospect of producing
quality language learning. If it does, the task
then becomes to find ways of using it to produce desired outcomes. This article looks at
the promise and the pitfalls of what is available, and illustrates one way ahead as embodied in a new Web-based course in Vietnamese.
Goals of the technology
In the quest for quality, there is no room for a
simple focus on technology. The essential justification for any use of technology has to be
the improvement of teaching and learning that
it allows, and everything needs to be judged
against this requirement.
To put this another way, the central justification for the introduction of technology cannot be to save money, or to make money, or to
save time, or to redistribute time, or to
increase enrolments, or to retain enrolments,
or to acquire expertise. This is not to deny that
many of these goals are admirable – even the
desire of some staff to acquire, or to deploy,
expertise in the field is reasonable in itself –
and some of them may be of great and increasing importance in a world where resources are
diminishing while demands remain the same, where they are not growing. In the real world,
some of them will inevitably play an important role in institutions’ decisions to become
involved in the field and in whether they do
this by using existing material or by creating
their own, knowing that the latter course gives
control over pedagogy, content, assessment
and feedback (and may even offer some hope
of monetary return) but at a very high cost in
the provision of the necessary expertise and
infrastructure support.
Nonetheless, all these goals are subsidiary
and need to be subordinated to the over-riding
educational goal. The test should always be
whether learning outcomes are being
improved. If that test is passed, many of the
other things will flow automatically.
Technology is not itself the solution, but a
tool that offers the prospect of contributing to
the solution. And while it is not the only tool,
it is one with a great deal of promise; much
more promise than when CALL was preeminently the domain of the lone user of drill and
practice packages. The WWW is the most
exciting tool that has emerged to date in language learning, offering as it does a plethora
of meaningful activities available in real-time
authentic settings. In this respect, it does not
simply promise to match conventional teaching but offers some advantages over existing
approaches. This is not to suggest that the new
tools and traditional tools are to be seen as
rivals. There must always be competition for
the limited amount of time available to teachers and students, but the assumption here is
that the new tools can and should be incorporated into teaching alongside the old.
Advantages of technology
Student diversity
Teachers in Australia are confronted with an
increasingly diverse student population. There
are now more women in the universities, a
greater range of cultural backgrounds, more
students for whom English is not the first language, more indigenous students, and more
students with education backgrounds that
deviate from what has been regarded as the
norm (Trent 1997).
Technology holds out the prospect of catering for students in a variety of ways relevant
to the individual (Felix 1997a, 1997b). Offering resources on CD-ROM or the WWW can
allow for student differences – ability, interest,
learning strategies, time spent on learning,
attention span, prior knowledge – to be dealt
with more systematically and more easily than
in a classroom.
Pedagogy
There are two powerful advantages in pedagogically sound multimedia programs. Firstly,
they can provide large amounts of linked
material on language, literature and culture in
the form of tutorials, games, lectures and contextualised exercises using video, audio and
text – all in one flexible resource that students
can work with alone or in pairs or take home if
they have the appropriate hardware. Not even
the best teacher could hope to provide all that
in a regular classroom environment without
collapsing under the burden of coordinating
technical and pedagogical resources. Integrating an appropriate resource of this nature into
an already excellent teaching program, however, could add an exciting and useful dimension to the learning and teaching environment. While not all students are in favour of
technology, a large number find it enjoyable
and useful to work with good multimedia
materials, and a common observation is that
they find it non-threatening (Felix 1997a,
Rézeau 1997).
Secondly, the WWW in particular provides
opportunities for truly interactive language
teaching at the highest level. Students can be
involved in co-operative exercises in which
they are engaged in a task or quest in true-to-life situations in which they have some sort of
influence over the outcome, such as Murphy-Judy’s (1995) approach whereby students produce
hypermedia to read on the Web. Students can be encouraged to enter real competitions
on the Web, such as the one in which they are led through a wonderful set of written and
visual instructions, manipulating beautifully produced images and texts, changing sizes and
layouts to create a collage representing their idea of Singapore, with the potential of winning
a real prize (http://www.mewasia-singapore.com). They can now visit
http://www.goethe.de/z/20/semz4/deindex.htm to buy a real ticket for a trip on the train in
Germany (naturally with previously established passwords), or join a chat site in France in
which they can exchange written messages in real time in an environment which places them
in ‘virtual’ authentic settings around Paris (telnet://logos.daedalus.com:8888). Not much
imagination is necessary to harness these wonderful free resources for excellent language
teaching. More detailed examples are given in Felix (1997b).
Delivery
The great advantage of WWW technology is
the flexibility that it offers, particularly in
areas of delivery:
• Direct and instant links to the tutor. This may lead to prompt oral or written responses,
but even when it does not, an efficient mode of communication is available. Naturally,
provision needs to be made to cope with the extra demand on tutors’ time.
• Bringing groups of students together. Again, communication can be in real time, or chat
sites can provide a forum for the exchange of views.
• Extending learning communities. Students in traditional language classes tend to work
with the same peer group throughout their academic life. While this has obvious
advantages, the opportunity to work with students in other places and in countries where
the target language is spoken can add a wonderful dimension to the learning experience.
The interesting thing about working with the WWW is that good things can even happen
by accident. Very soon after our Vietnamese course appeared on the WWW, we had an
email message from a designer of a site in the USA developed especially for students of
Vietnamese around the world who wished to communicate with each other. This site, now
linked to our course and vice versa, turned out to be a most valuable additional resource
because it, too, contained numerous excellent links to sites in Vietnam.
• The potential for co-operative work among students that is task or project oriented (as
described above).
• The possibility of a wide variety of feedback and assessment formats (see below).
Disadvantages of technology
Enthusiasm is dangerous. We are unlikely ever
to find tools that offer only advantages. Intelligent use of technology requires awareness of
the balancing disadvantages.
As with any tool, the way in which it is
employed is of critical importance. Further,
any comparisons need to be fair and not
weighted by an inappropriate choice of competing systems, or by succumbing to the temptation to contrast actual performance in one
system with potential performance in another.
This criticism has been levelled at some
experimental research in the area (Dunkel
1991). For example, while it could make sense
in some cases to compare computer-based
learning with poor classroom teaching – one
can imagine situations where this is a real
choice, as one can imagine situations where
computer-based learning is the only option –
these sorts of comparisons do not have much to say about the general power of
computer-based learning. In the same way, it is not helpful to compare the experience of
isolated students working on their own with a computer with the experience of a small group
of motivated students working together in an ideal setting with an enthusiastic and qualified
teacher. For comparisons to be fair and useful, they need to be between existing best-practice
teaching and the best alternative solution.
Even within an ideal context where Web-based material is integrated into already excellent teaching, there are significant weaknesses
in the technology:
• Access can be slow at long distances or in heavy traffic where many students seek to
access the same component simultaneously.
• Even with fully optimised sites, sound and video take longer to load over the Web than on
a CD-ROM, and response rates are significantly slower. This disadvantage can be reduced
by providing copies on CD-ROM which will give students full access to activities other
than those that are linked to other Web sites.
• Server complications are a threat. Web developers are rarely responsible for server
configuration and maintenance, and software incompatibilities or unannounced changes to
or upgrades of server configurations can disable functions.
• There is no control, or even knowledge, of the end-user’s hardware or browser software,
nor of the end-user’s will or capacity to download relevant plug-ins.
• There is a temptation to use a great variety of development tools, leading to
user-unfriendly programs which require the end-user to download numerous plug-ins if
the program is to run properly.
Ideal and practice: Vietnamese WWW course
What follows from these remarks is that an
excellent approach to the potential of the technology is to combine CD-ROM and Web-based applications. More widely, the recommended approach if teaching is offered at a
distance is mixed mode: face-to-face teaching
supplemented by the WWW and by other uses
of technology like video-conferencing. An
excellent example of this latter approach for
the teaching of ESL in Finland was demonstrated at the EUROCALL’97 conference
(Tammelin 1997).
Our version of this is the beginners’ Vietnamese program, also demonstrated at the
conference and available on the Web site of
Monash University’s Centre for Languages
(http://www.arts.monash.edu.au/viet/). It differs from the Finnish project in that the Web-based materials are more extensive and constitute a large part of the teaching materials. The
course was trialed on the Web during the first
semester of 1997 – providing useful insight
into the problems of Web-delivered courses –
and integrated into the first year course at
Monash. The course incorporates freely available Web resources into the module of beginning Vietnamese. The Web component
includes (1) visual and textual information on
Vietnam either taken from linked existing sites
or specifically developed for the course; (2)
the Vietnamese alphabet and tones and a
vocabulary contained in visual and sound
files; (3) an on-line dictionary; (4) grammar
lessons linked to the visual component; (5)
sound and video databases for downloading;
(6) a student database; and (7) a chat site. It
provides practical exercises and games, self-test and password-protected timed tests which
are submitted directly to the lecturer via the
WWW. In producing the module, emphasis
was placed on developing and incorporating
interactive strategies for both teaching and
feedback.
The course consists of 15 lessons, each
containing extensive exercises that include
free writing, matching sound to pictures or
dialogues to video clips, working with the
contents of a virtual reality movie, translation
exercises, listening comprehension exercises
and many more. Only when students feel that
they have mastered the content of a lesson
through attempting the accompanying timed
practice tests, are they required to submit tests
and other written work directly to the lecturer.
Students are able to interact with their lecturer
directly in class as well as through email at
any other time and through the in-built chat
site.
The chat site has two options. The first is
meant for simple communication exercises
between students or between lecturer and students in which the text disappears after the site
is closed. In the second, which is used mainly
for structured co-operative writing exercises,
all written text is retained so that the lecturer
is able to give feedback to the students.
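The difference between the two options amounts to a simple retention policy. As a purely hypothetical sketch in Python (the article does not describe the actual implementation, and all names here are invented for illustration):

```python
class ChatRoom:
    """Sketch of a chat site with the two modes described above:
    in 'ephemeral' mode the text disappears after the site is closed,
    while in 'retained' mode all text is kept so the lecturer can
    give feedback on co-operative writing exercises."""

    def __init__(self, mode):
        if mode not in ("ephemeral", "retained"):
            raise ValueError("mode must be 'ephemeral' or 'retained'")
        self.mode = mode
        self.transcript = []

    def post(self, author, text):
        """Record one message while the site is open."""
        self.transcript.append((author, text))

    def close(self):
        """Close the site and return whatever text survives."""
        if self.mode == "ephemeral":
            self.transcript = []
        return list(self.transcript)


# A retained session (structured co-operative writing) keeps its text.
writing = ChatRoom("retained")
writing.post("student_a", "Xin chao!")
writing.post("lecturer", "Good - now try adding a tone mark.")
saved = writing.close()   # both messages survive for feedback

# An ephemeral session (simple communication exercise) does not.
chat = ChatRoom("ephemeral")
chat.post("student_b", "Xin chao!")
gone = chat.close()       # empty list: the text has disappeared
```

The design choice is simply whether the pedagogical purpose of the session requires a transcript afterwards.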
The culture section has many links to sites in Vietnam which are used for setting up the
types of interactive exercises discussed earlier. There is also a ‘link of the month’, which is
changed regularly to give students more variety in activities. Two such examples are street
signs in Vietnam and cooking recipes, around which many meaningful activities can be
structured.
The course has also been transferred to
CD-ROM so that students can have faster
access to the exercises containing sound and
video.
Evaluation of the course has been preliminary only, consisting of observing and questioning students interacting with the Web
materials, interviewing the lecturer in charge
of the course and asking outside volunteers to
evaluate the user-friendliness of access to the
Web materials. Feedback has been very positive in relation to the materials themselves,
especially the use of the chat sites and the
links to authentic sites in Vietnam. Naturally
the lecturer is overwhelmed by the extra
demands on his time in terms of coping with
the technology itself and the added number of
enquiries coming in by email. Teaching the
majority of sessions in the computer laboratory, he is delighted, however, with the fact
that students can now work at their own pace
and on different parts of the course, all in the
same class, enabling him to attend to a greater
variety of questions and difficulties at the
same time.
Negative comments have come from the independent external testers, who needed to
download various plug-ins (QuickTime is necessary to run the videos, and the Vietnamese
script needs to be included) and to read through rather long instructions on how to do this and
how to use the course in general. While most of this is unavoidable, we are in the process of
addressing these problems (all feedback welcome). When a course is as extensive as this
one, it is more difficult to make it user-friendly, especially when instructions are given by the
programmer rather than the academic. Our local students do not face any of these problems,
because we pre-load all necessary plug-ins in the laboratory and instructions are given by the
lecturer in person.
It needs to be pointed out that developments as extensive as this are expensive and
time-consuming (Felix and Askew 1996). We
had a $25,000 grant to carry out the work,
with many hours of unpaid work necessary to
complete the first stage and now the more
extensive evaluation process. A recommendation for future projects is to embark on developments of this nature only with adequate
financial back-up and access to appropriate
expertise, hardware, software and administrative and technical support. Implementation of
materials of this sort is potentially the most
difficult aspect of the development.
Conclusion
There is no reason to shy away from technology even if reasons for its introduction need to
be carefully considered and the advantages
weighed against the disadvantages. Despite
the difficulties (including those of cost), it has
real potential to add significantly to the quality
of teaching in languages, particularly if it is
integrated into conventional approaches. Well-designed programs should present a rich variety of content in a flexible resource, giving
learners a host of choices among the sorts of
activities that they wish to engage in, and providing flexibility of approach, learning style
and the use of time. The adoption of the technology should not reduce the interaction that
takes place between students and teacher or
student and student in the classroom, but it has
the potential actually to increase the amount of
meaningful interaction by importing the outside authentic world from the Web.
References
Brown H. D. (1994) Teaching by Principles. An
interactive approach to language pedagogy,
Englewood Cliffs: Prentice Hall Regents.
Dunkel P. (1991) ‘The Effectiveness of Research on
Computer-Assisted Instruction and Computer-Assisted Language Learning’. In Dunkel P.
(ed.), Computer-Assisted Language Learning
and Testing, New York: Newbury House, 5–36.
Felix U. and Askew D. (1996) ‘Languages and multimedia: dream or nightmare?’, Australian Universities Review 39 (1), 16–21.
Felix U. (1997a) ‘Integrating multimedia into the
curriculum: a case study evaluation’, OnCALL
11(1), 2–11.
Felix U. (1997b) ‘In the future now? Towards
meaningful interaction in multimedia programs
for language teaching’. In Meissner F.-J. (ed.),
Interaktiver Fremdsprachenunterricht, Wege
zu authentischer Kommunikation, Tübingen:
Gunter Narr Verlag (ISBN 3-8233-5177-X).
Murphy-Judy K. (1995) ‘Writing (hypermedia) to
read’, Proceedings of the Computer Assisted
Language Instruction Consortium (CALICO)
Annual Symposium 1995, Duke University,
USA, 133–137.
Rézeau J. (1997) ‘The learner, the teacher and the
machine: golden triangle or Bermuda triangle?’,
http://www.uhb.fr/~rezeau_j/Dublin97.htm
Rivers W. M. (1987) Interactive Language Teaching, Cambridge: Cambridge University Press.
Tammelin M. (1997) ‘Creating a telematics-mediated learning environment – new perspectives
and pedagogical challenges’. Paper delivered at
EUROCALL ’97 – Where Research and Practice Meet.
Trent F. (1997) ‘Teaching Diverse Groups’,
http://ultibase.rmit.edu.au/Articles/trent1.html
Wells G. et al. (1981) Learning through interaction: The study of language development, Cambridge: Cambridge University Press.
Associate Professor Uschi Felix is Director of the Language Centre at Monash University in
Melbourne, Australia. She has a research background in applied linguistics, especially in
innovative teaching methods and teaching evaluation. During the last decade, her work has
focussed on CALL in all its various aspects, concentrating on the systematic integration into
the curriculum of tested CALL applications, from stand-alone software to WWW materials
in various languages.
Email: [email protected]
ReCALL 10:1 (1998) 59–67
Breaking down the distance barriers:
perceptions and practice in technology-mediated
distance language acquisition
Matthew Fox
Southampton Institute
Has the time come to re-evaluate the role of the teacher in technology-enhanced language learning
(TELL)? Studies into Computer Assisted Language Learning (CALL) and TELL have tended to focus on
issues relating to learner/computer interaction or learner/learner interaction mediated via the computer
(e.g. Warschauer 1996: 7–26). Relatively little research has been undertaken to try to understand how
technology can best be used for language acquisition (cf. Matthews 1994: 35–40 1a or Zähner 1995:
34–48 1b), particularly at a distance, to improve both the effectiveness of the learning and the learner’s
enjoyment of it. Indeed, those studies which have been undertaken have tended to be inconclusive (cf.
Pedersen 1987). This paper attempts to begin to redress the balance by focusing on teaching and
learning issues related to technology-mediated distance language acquisition, with particular emphasis
on the role of the teacher. The findings in this paper are based on the pilot phase of the Language
Learning Network, a project to design, deliver and evaluate a technology-mediated vocational distance
language course. With distance learning, as with classroom-based courses, communication with and
support from the tutor is considered paramount. The project has established models for regular synchronous and asynchronous contact with tutors, provided in the context of time and budgetary constraints.
Having validated the courses for accreditation and wider distribution on a commercial and part-time
studies basis, much attention has been paid to the questions of learner support, assessment and quality
assurance.
1. Background to the Language Learning Network
The Language Learning Network is a three
year project funded by the Higher Education
Funding Council in England under the banner
of Continuing Vocational Education. The aim
of the scheme is to promote lifelong learning
by developing courses which will exploit new
modes of delivery to increase flexibility of
access for learners.
The Language Learning Network has been
developed as a technology-mediated distance
language course which attempts to bridge the
paradigms of classroom-based language
instruction and of self-study. The aim is to
provide a distance course with extensive tailor-made materials and systems for supporting the learner which go some way to reducing the negative effects of learning in
isolation.
Developed with relatively simple technological solutions which allow for cost-effective and efficient development times together
with potential cross-platform flexibility, the
courses employ Hypertext Markup Language
(HTML) and the multimedia capabilities of
the Internet, to offer learners a varied set of
learning materials. Courses last 15 weeks, with 3 weeks of Information Technology induction
and 12 weeks of language self-study with tutored instruction. There is no face-to-face
contact involved in the courses, as learners study from multimedia PCs at home or at work.
However, support is given through distance tutorials and electronic means such as e-mail and
discussion lists. For a fuller account of the pilot course and its development using the tenets
of PragmatiCALL,2 please refer to Fox (1997).
2. Research and the Language Learning Network
Four companies signed up for the first course
at roughly GCSE level which was piloted
between March and June 1997. Initially 26
participants enrolled but one company withdrew at a very late stage leaving 12 learners in
total, 6 of whom survived through to the very
end, after a series of job moves and redundancies depleted numbers. At the same time, a
control group of around 35 (numbers fluctuated) was formed from students studying on
the Southampton Institute Language Programme. This group was to study with identical materials in paper form, delivered by tutors
over the normal three hours per week of
classes, one a grammar class, the other two
general language classes. No computers were
to be used in class time.
Clearly, with the reduction in numbers in
the pilot course the validity of the course as an
empirical experiment was put into question.
The research design, as is often the case in
language studies undertaken with groups
involved in routine classes, would in any case
have made use of a quasi-experiment, since the
reality of working with groups of undergraduates and professionals of mature age with very
real demands and expectations about their
studies meant there was limited control over
variables and no random assignment to group
possible. Indeed the group profiles were quite
different, with all of the experimental group
being mature students. As Nunan (1992: 27)
says “unfortunately it is not always practicable
to rearrange students into different groups or
classes at will” and often experiments have to
be carried out with “subjects who have been
grouped together for reasons other than carrying out an experiment”. This was certainly the
case for the Language Learning Network
experiment.
Data was collected in a variety of ways. A
pre-course questionnaire was administered to
all the pilot group to gauge motivation and
attitudes to language learning, computer-mediated learning and computer-mediated language
learning. Additionally a pre-test was administered to both the pilot and control group which
asked for an evaluation of confidence in core
language skills and tested knowledge of basic
syntax, in line with the expected level of
knowledge for average students taking the
course.
Interviews were carried out with pilot
course participants which explored issues
raised in the pre-course questionnaire. Furthermore, telephone and video-conferenced
tutorials were recorded, as were control group
classes. Both groups undertook identical
assessments for the course, a feature of normal language programme practice. Finally
there was a post-test and post-course questionnaire administered to both groups and follow-up interviews with the control group participants.
Clearly, these forms of data elicitation have
provided large quantities of information which
now need to be analysed and explained in
greater detail. It is not proposed to do this
here. Only the following issues will be
explored in this paper: learner attitudes and
the role of the tutor.
3. Pedagogical framework to the project
3.1 Structure and autonomy:
the design approach
One of the key issues in the development of
the course was to provide learners with a
coherent structure to their learning, should
they wish to follow it. By this it is meant that
each unit or chapter of the course has a pathway through it which, if chosen by the learner,
will ensure that all the pedagogical elements
and skills practice designed into the unit will
be undertaken and experienced by the learner.
As is inevitably the case with distance
courses, the Language Learning Network has
followed the trend in language education for
increased learner-centred instruction. In keeping with the Communicative and now Post-Communicative paradigms prevalent in UK
language teaching and learning, the teacher’s
role is in a state of flux. Increasingly the
teacher is being seen as a facilitator and participant in the learner’s learning; Brumfit
(1984: 60), for example, suggests that “learning will be dependent partly on the teacher’s
ability to stop teaching and become simply one among a number of communicators in the
classroom”.
In fact, Brumfit’s position holds well for the
model of distance tutorial adopted by the
course, which places the tutor primarily as a
communicator and facilitator,3 who will mentor the learner through his or her studies. For a
more detailed discussion of the role of the
tutor in the Language Learning Network, see
below.
A fundamental truth in Computer Assisted
Language Learning accepted in the design of
the course is that the learner is freed and
empowered, at a simple level, to tackle exercises in any order she or he wishes; at a more
sophisticated level she or he is also freed to
employ the learning styles and strategies of his
or her choosing rather than having an
approach dictated by the teacher. Although the
range of research into the effectiveness of
instruction has not proved conclusive, it is felt
that some instruction rather than the ‘zero
option’ of no instruction, is preferable. That
being the case, however, it is also recognised
that “allowance will probably have to be made
for variations in learning style, although it is
not clear what instructional factors and learner
factors need to be taken into account to ensure
an effective matching” (Ellis 1994: 660).
Clearly, an emphasis on self-instruction in a
course such as the Language Learning Network does free the learner to choose his or her
own learning style.
Nevertheless, experience also shows that
receiving guidance is both helpful and reassuring for the learner. As Dickinson (1987: 33)
makes clear: “One of the teacher’s responsibilities is to help learners develop the most
effective learning techniques... Many teachers
achieve this through negotiation always treating the learner as responsible but presenting
themselves in the valid role of experts in
learning techniques”. In the Language Learning Network, the teacher’s role as advisor was
important, but it was felt that further use could
have been made by the learners of the tutors’
expertise.
Nevertheless, since the core design principle of the Language Learning Network was to
develop learner-centred approaches to the use
of technology for language learning, learners
were encouraged to explore the resources
available as much as possible and to make use
of them for both the elements of autonomous
learning and for the tutorials. Boyd-Barrett and
Scanlon’s view (1991) that “the educational
significance of computing to a significant
extent may reside not in the machines but in
the ways in which teachers and learners interact with them and in doing so, the ways in
which teachers and learners interact among
themselves” is a crucial one.
3.2 Tutor-supported activities
The focal point of the learning activities was
the weekly tutorial. Taking Kohn’s view
(1995) that “distance communication concerns
bi-directional and interactive distance-tutor
and learner-learner communication”, learners
were given a programme of activities in
advance. This allowed them to prepare tasks
either based directly on roleplays and questions appearing on the course CD-ROM, or
vocational skills practice such as answering
the telephone or making a reservation. Learners were also encouraged to ask for feedback
and support beyond the tutorial via e-mail or
the phone. See below for a discussion of the
tutor’s role and learner attitudes to the tutorials.
4. Learner attitudes to Technology Enhanced Language Learning (TELL)
4.1 Studies in TELL
Studies into the effectiveness of computer-based
learning in Higher Education courses have
been documented in several sources, though
general studies have tended to look at computer-mediated communications for distance
education (e.g. Mason and Kaye 1989;
Harasim 1980) or the use of standalone programs as replacement for taught components
of language courses (e.g. Willingham-McLain
and Earnest-Youngs 1997).
Studies into motivation and learning styles
of self-study learners have shown that there are
unique features in the practice of self-study
(Dickinson 1987: 18–35), which have equal
bearing on the learning process, whether computer-mediated or traditionally text book and
cassette-based. It is acknowledged, however,
that self-study brings the benefit of allowing the
learner to make their own choices about learning, in relation to their cognitive styles, strategies and learning techniques. As Dickinson
says (1987: 33) there is a strong argument for
maintaining tutor input “... a learner who
checks her own interpretation of the target language by asking for simplification, and who is
regularly denied this by the teacher, for whatever reason... is likely to be demotivated”. By
providing the means to obtain advice and clarification, the Language Learning Network
attempts to maintain motivation, in a teacher/
learner relationship which is inevitably altered
from its conventional classroom role. Stevens
(1995: 15) points out quite rightly that the ‘feedback loop’ is central to the learner’s ability to
create constructs for their own learning.
4.2 Learner attitudes and participation
The pre-course questionnaire revealed that all
the participants were instrumentally motivated
for either professional or social reasons. From
the pre-course questionnaire administered to
the 24 potential participants, the following
noteworthy data emerged. On a five-point Likert scale, the mean score regarding the
learner’s perception of the value of language
skills in a professional and social environment
was 3.75, and while most described the quality
of previous language learning experience in
downbeat terms, with regard to success and
enjoyment, they recognised now the potential
usefulness of language skills. Most learners
also felt that computers would offer them
more than a traditional book-based distance
course (3.652), but, if a choice had to be made, that a classroom course was preferable to a
distance course (3.375). The majority of the
learners also felt that the technology would
not be an obstacle to their learning (3.870).
However, this did not always prove to be the
case. When, in interviews and post-course
questionnaires, the learners were asked why
they had not exploited the full functionality of
the course (e.g. discussion lists, e-mail, even
sound files) they cited time as a primary reason, but a secondary reason that emerged was
that they were unable to make all elements of
the software function or that their computers
were not properly specified, even though specifications had been discussed and assurances
had been given as to the availability of suitable
PCs at the start of the course. Participants nevertheless made very few requests for technical
support, in spite of the technological difficulties, citing lack of time as the main reason.
Other attitudes which emerged from the
pre-course questionnaire were that most found
the prospect of autonomous study with the
computer an attractive one (3.304) and the
possibility of self-managed learning was also
attractive in terms of deciding how to
approach learning (3.609) and choosing when
to learn (4.167). Nevertheless, the post-course
feedback was at odds with this view, with half
the finishing participants asking for greater
regimentation and imposition of targets by the
tutors!
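The mean scores quoted above are plain averages over the five-point responses. As a brief sketch of the arithmetic (the function name and sample responses are invented for illustration; the study’s raw data are not reproduced here):

```python
def likert_mean(responses):
    """Mean of responses on a five-point Likert scale (1-5),
    rounded to three decimal places as in the figures above."""
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("responses must lie on the 1-5 scale")
    return round(sum(responses) / len(responses), 3)


# Invented responses for illustration only:
sample = [4, 3, 5, 4]
print(likert_mean(sample))  # → 4.0
```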
By the end of the course attitudes had
evolved and been modified. The most significant aspect in the learners’ self-appraisal of
their performance was time. Most felt that
they had been unable to devote the time necessary to do their learning justice, or that they
had been given inadequate support by their
companies to enable them to complete the
course as they wished. This revelation, while
no surprise to the tutors involved, does raise
questions about workplace-based training and
the facilitation of studies. Might it be that
there is in fact a disparity between companies’
stated commitment to employee workplacebased training and the pressures which they
bring to bear on their employees to complete
work to time? It might be thought that this is a short-sighted redirection of resources for
short-term benefit, at the long-term cost of employing less well-skilled and less motivated
employees.
5. Redefining the teacher’s role

5.1 The tutor’s role defined
The role of the teacher in the telematically-delivered course is both in keeping with the
traditional framework for the distance tutor, and is also radically different in that it requires
a new set of technological skills, and both differing responses and differing response times
to learner needs. Maintaining Stevens’ idea of the ‘feedback loop’ (op. cit.) and integrating
Kohn’s view (1995: 7) that the conditions for good autonomous language learning are
“communication-embedded language learning, targeted language learning, facilitation and
tutoring, open pedagogic integration”, was core to the tutors’ role.
Certain key features in the supported distance learning paradigm remain familiar to the
tutor involved in the technology-based course. The teacher will still be making judgements
about learner needs, and tailoring their input to that. However, Dickinson proposes a
taxonomy of skills adopted from Carver (1982, 1983) and McCafferty (no date) which
teachers facilitating self-study require. They can be condensed as follows. In addition to
having the necessary target language skills, teachers involved with self-study learners need
to be able to perform needs analyses, to set objectives, analyse language, develop, select and
prepare materials, carry out assessment procedures, aid with learning strategies, carry out
administration and management, and act as a librarian of potential resources. Counselling
and supporting also remain crucial roles.
As the pilot course progressed, the tutorial’s core role became increasingly apparent.
Feedback from learners also supported this view of the significance of the weekly structured
contact with the tutor. It gave the learner several things: a learning target for the week;
feedback at various levels; an intellectual challenge; encouragement; advice; social contact;
access to administrative information; and technical information. These issues will now be
discussed in turn.

5.2 Learning targets for the week
In their programme for the course, learners
were set objectives for each tutorial. In fact,
tutors were rapidly able to establish what the
differing needs were of each individual and set
targets which kept nominally to the learning
programme but also corresponded to the individual’s requirements and his or her progress
on the course. The tutor therefore needed to be
quite meticulous in recording individual
learner progress and achievement, to make
best use of the time available in the tutorials
and to ensure that the learners had realistic targets for the following session.
5.3 Feedback
The crucial role of feedback in both naturalistic and instructional settings, particularly in
speech production (an area in which computers have largely failed to be effective), has been much
theorised. Since most second language
acquisition (SLA) theory in the last 30 years
has drawn on the principles of Chomsky's
Universal Grammar, it is widely accepted that
learners depend on feedback in the form of
negative evidence (though some argue for learnability through positive feedback, e.g. White
1987) in order to develop their second language. The feedback must allow them to
understand what is correct and what is not.
While the core materials for the course
were embedded within a rigid framework on
the CD-ROM, the tutor, for his or her part, was
able to monitor and recommend remedial work
when necessary, which extended beyond the
constraints of the course materials or allowed
their use in a manner appropriate to the
learner. Using a marking scheme and comments sheet, tutors were able to concretise
some of the analytical processes which go into
evaluating learner performance. Individual
scores were given for the four language skills
areas, roleplay, and confidence. A further comments column was used to identify remedial
materials for sending to the learner. In fact, the
tutor’s role became increasingly demanding.
During the tutorial sessions, the tutor was
called upon to be a participant in the roleplays,
adapting to the level and content of the
learner's performance, while also monitoring
errors and providing feedback, tailoring
materials to the particular situation (e.g. where
learners had prepared the wrong work), and
interpreting the silences and hesitations.
There is no doubt, however, that these tutorials allowed the learner to gain far more
feedback than is commonly available in large
group classrooms, partly through the nature of
the tasks, and partly because tutorials were
conducted on a one-to-one basis. Tutors were
able to offer feedback at several levels: simple
error correction, offered instantaneously; evaluation and correction of pronunciation and
intonation; summary of errors and correction
of structures at the end of interchanges; and evaluation of progress as a whole.
5.4 An intellectual challenge
Proponents of adaptive testing such as Noijons
(1997) argue that computers can now vary
the level of challenge in exercises, at least in
testing reading and writing skills, by assessing
performance in real time until enough questions
appropriate to the learner's level have been
answered to estimate competence and attainment
through probability calculations. Clearly the great disadvantage of this approach is its development cost and time, since adaptive testing
systems require enormous databases of questions (which explains why their adoption may
only be feasible as a matter of national
education policy). However, in the language
teacher we have a highly developed generator
of adaptive testing material, able to gauge a
whole variety of factors, such as learner attainment, mood or confidence, when providing
learning challenges. As the stock of the computer rises continually in education, are we in danger of forgetting that we
have a most precious resource in our teachers? In the post-course interviews,
learners said they were highly motivated by the
challenge of the tutorials, which often stretched
them into expressing relatively complex ideas
and structures in short timespans.
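The adaptive-testing loop just described, presenting questions matched to the learner's level until enough have been answered to estimate competence, can be sketched as a simple staircase procedure. This is purely illustrative: the function names and the up/down rule are assumptions for the sketch, not the algorithm of any actual adaptive testing system.

```python
def adaptive_test(answer_correctly, n_items=20, start_level=3,
                  min_level=1, max_level=5):
    """Staircase sketch of adaptive testing: present each item at the
    current difficulty level, move up after a correct answer and down
    after an incorrect one, then estimate competence from the levels
    visited. `answer_correctly(level)` stands in for the learner."""
    level = start_level
    visited = []
    for _ in range(n_items):
        visited.append(level)
        if answer_correctly(level):
            level = min(max_level, level + 1)
        else:
            level = max(min_level, level - 1)
    # Crude competence estimate: the mean difficulty the learner
    # settled around over the run.
    return sum(visited) / len(visited)

# A learner who reliably answers items up to level 3 settles
# around levels 3-4:
estimate = adaptive_test(lambda level: level <= 3)
```

Real adaptive systems replace the staircase with probabilistic item-response models, which is precisely why they need the large calibrated question banks mentioned above.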
5.5 Encouragement and advice
One of the key challenges of the distance
learner is to maintain motivation when isolated from fellow participants. In the Language Learning Network pilot phase, learner
isolation was not an issue for most learners,
since they were clustered within groups in
companies and indicated that they made much
use of the possibility of communicating with
each other face to face (which may explain the
lack of activity on the discussion lists, which
were designed to allow group cohesion to
develop). However, preliminary questionnaires indicated that all the participants had
low confidence about their ability in all the
language skills areas. A process of reassurance
and praise was undertaken intensively by the
tutors, which reflected a genuine satisfaction
with learners’ performances which sometimes
went well beyond their expectations for individual achievement. As has been stated on
several occasions in this paper, the importance
of the tutor’s role in fostering learning and
maintaining learner motivation in the courses
should not be underestimated. A reassuring
word or praise communicated via e-mail or the
phone, can be significant in keeping learners
interested in their course. Furthermore, the
tutor can play an important part as language
advisor. As an advisor, the tutor can explain
learning routes and strategies which will help
develop the learners’ language acquisition.
5.6 Social contact
As the course progressed, relationships
between the tutors and learners grew stronger.
One aspect that developed out of the one-to-one tutorials was that the session would often
be preceded or ended by a brief social
exchange. With some learners this exchange
became integral to the tutorial and was conducted in the target language. It seems that
empathy between learner and tutor at a
distance is very important, and a brief ice-breaker, as would happen in most telephone
calls or social contact, can contribute positively to the development of empathy. To a
certain extent, the telephone or video-conferenced tutorial is a more intimate process than
a classroom session. The sense of complicity
shared by learner and tutor can help to reduce
the sense of intimidation that the tutorial can
cause. Learners were asked to dial in to the
tutor rather than vice versa. Apart from the
obvious issue of phone-call costs not being
borne by the institution, this system also
ensured that the learner was mentally prepared
for the tutorial.
5.7 Administrative and
technical support
While letters and e-mail were used to communicate important issues to do with the course,
tutorials were also used to remind and cajole
students to provide information,
keep to deadlines, arrange alternative tutorial
sessions, etc. This role of the tutor mirrors that
of the classroom-based teacher.
Technical support was also given ad hoc
over the phone by the tutor. However, tutors
should not be expected to get involved in technical issues, for the obvious reasons of
demands on their time and the need for relevant expertise.
6. Conclusions
This paper has attempted to explore some
issues regarding learner motivation, course
design and teaching with regard to a technology-enhanced distance language course. The
focus has been less on the technology, and
more on the human issues relating to the use of
technology. As explained in the PragmatiCALL (op. cit.) approach to CALL materials
development, the main thrust when developing
technology-enhanced or technology-mediated
language learning should be geared to providing learners with varied learning opportunities.
Approaches should recognise the strengths that
technology can bring to the language learning
experience but also appreciate that there are
aspects of learning best catered to by other
means. Currently, the tutor remains the best
source of adaptive input for the learner, whatever his or her mode of study. Furthermore, the
tutor plays a key role as a motivator and
provider of support for the learner. While the
mode of delivery of language courses may
vary, and consequently also the tutor’s role, it
is clear that effective tutoring (in the many
guises that term encompasses as described
here) is central to learner success.
Perhaps one of the key issues with distance
language learning, particularly with courses
such as the Language Learning Network that
strive to offer a clear pedagogical framework,
is not so much what the learner does during
his or her learning, but more what she or he
doesn’t do. In the age of learner-centred learning, we can only try to create the best possible
conditions for the learner to progress. By
preparing the learner psychologically for his
or her learning, by offering a range of support
structures and by continually striving to motivate, we can hope to develop coherent and
effective courses. And what of the role of
technology-enhanced learning? Clearly the
emphasis must be on enhancing learning.
Where technology can be exploited to reduce
the effects of distance, it is at its most useful.
In terms of course design, development
therefore needs to take close stock of learning
theory. Salaberry’s (1996) view seems a valid
one. “It is not the medium itself that determines the pedagogical outcome, but the specific focus of the theoretical approach on the
language learning phenomena”. Learning theory addresses issues in course design at the
macro level. However, many micro areas of
TELL also need further research, such as
learner control over the playback of listening
65
Breaking down the distance barriers: M Fox
materials, or the effectiveness of reading from
screen versus paper.
Currently, as a means of bringing together
materials from a multiplicity of media, technology certainly offers convenience and can also
offer useful instantaneous feedback, but it is not
necessarily a be-all and end-all recipe for successful language learning.
Notes
1a and 1b. Matthews argues for the integration of
CALL into strong research agendas. Zähner
explores the issues of learner variation and the
resulting tension over attempts to marry SLA to
CALL design.
2. Pragmatic CALL has several key elements:
• The courseware design is driven by a coherent pedagogical framework.
• The courseware is supported by interaction
with a tutor who will guide and advise as to its
best use (pedagogically first, technologically
second) and interaction with other learners (e.g.
through face to face encounters or CMC).
• The courseware is supplemented by appropriate tutor-selected and tutor-animated learning activities, which will probably be communicative, since communicative activities are
generally considered not to feature successfully
in CALL.
• The courseware will make appropriate use of
the technology, recognising both its strengths
and weaknesses as a learning facilitator.
• The learner is properly inducted into the use
of the courseware.
• The courseware is simple to use and robust.
• The courseware uses templates which are
simple to develop and cost effective.
• For remote learners, materials are tailored to
the specific demands of the distance mode of
learning.
3. The ‘communicator’ role is one in which the
tutor is providing comprehensible input for the
learner or acting as a partner in communication
with the learner. The ‘facilitator’ role is one in
which the tutor is providing the stimulus and
guidance to enable communication on the part
of the learner.
References
Boyd-Barrett O. and Scanlon E. (eds.) (1991) Computers and Learning, Wokingham: Addison-Wesley.
Brumfit C. (1984) Communicative Methodology in
Language Teaching, Cambridge: Cambridge
University Press.
Carver D. J. (1982) Introduction to ‘The Selection
and Training of Helpers’. In Cousin W. (ed.),
Report of the Workshops in the Role and Training of Helpers for Self-Access Learning Systems, Moray House (mimeo).
Dickinson L. (1987) Self-Instruction in Language
Learning, Cambridge: Cambridge University
Press.
Ellis R. (1994) The Study of Second Language
Acquisition, Oxford: Oxford University Press.
Fox M. (1997) ‘Beyond the Technocentric – Developing and Evaluating Content-driven, Internet-based Language Courses’. In Borchardt F.,
Johnson E. and Rhodes L. (eds.), Proceedings
of the Computer Assisted Language Instruction
Consortium 1997 Annual Symposium “Content!
Content! Content!”, Durham, NC: Duke University.
Harasim L. (ed.) (1990) On-line Education: Perspectives on a New Environment, New York:
Praeger.
Kohn K. (1995) ‘Perspectives on Computer
Assisted Language Learning’, ReCALL 7(2), 5–
19.
McCafferty J. (no date) A Consideration of a Self-Access Approach to the Learning of English,
The British Council (mimeo).
Mason R. and Kaye A. (1989) Mindweave, Oxford:
Pergamon Press.
Matthews C. (1994) ‘Integrating CALL into
‘strong’ research agendas’, Computers & Education 23(1&2), 35–40.
Oller J. Jr. (1996) ‘Toward a Theory of Technologically Assisted Language Learning/Instruction’,
CALICO Journal 13(4), 19–43.
Noijons J. (1997) ‘Testing in Multimedia Language
Courses: Function, Format and Flexibility’, presentation at CALICO 97, West Point, New York.
Nunan D. (1992) Research Methods in Language
Learning, Cambridge: Cambridge University
Press.
Pedersen K. (1987) ‘Research on CALL’. In Flint
Smith W. (ed.), Modern Media in Foreign Language Education: Theory and Implementation,
Lincolnwood, IL: National Textbook Company.
Salaberry M. R. (1996) ‘A Theoretical Foundation
for the Development of Pedagogical Tasks in
Computer Mediated Communication’, CALICO
Journal 14 (1), 5–36.
Stevens A. (1995) ‘Issues in Distance Teaching in
Languages’, ReCALL 7(1), 12–19.
Warschauer M. (1996) ‘Comparing Face to Face and
Electronic Discussion in the Second Language
Classroom’, CALICO Journal 13(2/3), 7–26.
White L. (1987) ‘Markedness and Second Language Acquisition: the Question of Transfer’,
Studies in Second Language Acquisition 9,
261–285.
Willingham-McClain L. and Earnest-Youngs B.
(1997) ‘An Empirical Study of Computer-Assisted Language Learning in a Second-Semester French Course. An Empirical Study
of TELL in Elementary College French’, presentation at CALICO 97, West Point, New York.
Zähner C. (1995) ‘Second Language Acquisition
and the computer: variation in second language
acquisition’, ReCALL 7 (1), 34–48.
Matthew Fox, previously a lecturer in French, is
Multimedia Projects Manager at the Centre for
Electronic Communications (Cecomm), Southampton Institute. He is conducting research into technology-mediated distance language courses.
Email: [email protected]
ReCALL 10:1 (1998) 68–78
Learning to learn a language –
at home and on the Web
Robin Goodfellow and Marie-Noëlle Lamy
Open University
This paper reports on work at the Open University's Centre for Modern Languages (CML) and Institute
of Educational Technology (IET), on the use of technology to support language learners working at
home and in virtual groups via the Internet. We describe the Lexica On-Line project, which created a
learning environment for Open University students of French, incorporating computer-based lexical
tools to be used at home, an on-line discussion forum, and guided access to the Francophone Web. We
report on some of the outcomes of this project, and discuss the effectiveness of such a configuration for
the promotion of reflective language-learning practices.
1. Introduction: reflective learning at home and on the Web
Lexica On-line is a development from work
carried out by the authors, and others, on computer-based strategies for vocabulary learning
(Goodfellow 1995a, 1995b, Ebbrell and Goodfellow 1997), and by Lamy on the design of
distance language learning (The Open University, 1994, 1997). The vocabulary-related
work involved the development of a CALL
program for vocabulary learning, called Lexica. In the Lexica On-line project, this program was given to a group of students from
the OU Centre for Modern Languages' upper
intermediate French course, to use at home.
They were supported by means of a computer
conference accessible via a Web browser,
which also provided pathways to the French
Web in general. The project set out to address
the issue of whether this configuration of technical and tutorial support could promote the
development of reflective language learning
practices, i.e. enhance the students' understanding of how they learn, and help them to
develop more effective learning strategies.
The aims were:
• To promote autonomous vocabulary learning and practice of reading skills
• To generate on-line communicative interaction focused on the development of
reflective learning practices
• To exploit the Francophone Web as a
learning resource.
A group of 10 student participants was
selected at random from those who responded
to a questionnaire on Internet access, sent to
all the students of French of the Centre for
Modern Languages. They were all adults,
located in different parts of England. All had
PCs running Windows 3.1 or 95 and Internet
connections with Web browsers. They were
supplied with a copy of the Lexica program on
disk, including nine texts in electronic form
from the French course they were currently
following; a copy of the French-English
Collins-Robert dictionary on CD-ROM, and
access to a Web site at the Open University,
via a computer conference known as the project forum. The conference was moderated by
two French native speakers who also acted as
tutors throughout the project. Figure 1 shows
the overall configuration, in which students
were required to work on a starting set of
course texts, extracting vocabulary and processing it, discussing their progress with tutors
and other students on the on-line forum, and
using the French Web as a source for further
texts with which to repeat the cycle.
The objectives of the project were: firstly,
to test whether the students would be able to
use the lexical tools without face-to-face
supervision; secondly, to try and create selfsustaining interaction amongst the students
on-line, with minimal intervention from
tutors; and thirdly to introduce the students to
the Francophone Web in a controlled way, ultimately guiding them towards the completion
of a constructive task. In order to assist these
objectives, documentation was put up on the
project web site, covering the technical use of
Lexica and its pedagogical features (e.g. the
on-board concordancer, principles of creating
semantic groups etc.), the aims of the on-line
discussion, a glossary of technical terms, and
an introduction to the French Web. In addition,
two on-line tutors were engaged, with the brief
of encouraging students to comment on their
(and others') progress. The students committed
themselves to a minimum of ten hours work
over a period of six weeks. This was in addition to the workload already required of them
by their ongoing course (approximately 12
hours a week). To guarantee their compliance
for the duration of the project they were
promised a fee on completion of the work. At
the end of the project they were asked to
return the log files maintained by the Lexica
program, and to fill in a questionnaire reporting on their experience of the project. In addition, all the messages they sent to the project
forum were stored for later analysis.
2. Outcomes – what they did and
what they said
The outcomes focused on here are: student
workload, success in the vocabulary learning
procedures supported by the Lexica program,
the nature of the on-line discussion, and their
use of the Francophone Web. Occasional reference will be made to student attitudes as
revealed in the final questionnaire.
Figure 1 Configuration of students, tutors and
technology for Lexica On-Line
2.1 Student workload
Table 1 summarises the amount of time, during the six weeks of the project, that students
estimated they spent on each of the constituent
activities.

Table 1 Student time on the project activities

Student   Estimated total   Estimated time   Estimated time   Estimated time
          time (hours)      with Lexica      on Forum         on Web
s1        20+               6-8              5-6              12+
s2        10-15             5                3                4
s3        15-20             10               2                1
s4        15-20             7                5                3
s5        10                7                2                3
s6        10                3                4                2
s7        15-20             8                4                3
s8        10                5-7              2                1
s9        20+               15               9                2
Average   14                8                4                3
Even allowing for subjective inaccuracy, it
is clear that most of the nine students who
completed the project put in more than the
minimum amount of time for which they were
promised payment. (The one who dropped out
did so because of a series of problems with her
Internet service provider, which for a time
made her unable even to receive email.) The
estimations for time spent with Lexica are
broadly confirmed by the log files they sent
back. Most of their time was, in fact, spent
using the Lexica program at home. This was
to be expected, as the work was based round
their learning 50 new vocabulary items – a
stipulated minimum requirement. Several said
in the questionnaires that they would have
liked to develop their use of the forum and the
Web, but given that their existing course commitments continued throughout the project,
there was not enough time. The estimated time
spent on the forum includes reading others’ as
well as writing their own messages. For some,
this was affected by a certain amount of slowness with access to the conference via their
modem. Features of the forum software which
allow for downloading and working off-line
were helpful, but again these take time to learn
to use. The relatively low times spent on the
Web were a result of the Web task not being
introduced into the work until week four of
the project. Most felt they would have spent
more time had it been introduced earlier,
though it is unlikely that they would all have
indulged as much as the student who spent
more than twelve hours exploring the French
sites they were given to look at.
In general it seems that the work of the project engaged these students up to and beyond
the level of workload expected, with considerable scope for extending it with respect to the
on-line discussion and the use of the Web. It is
clear, however, that a workload of this size
could not be sustained alongside other studying commitments for too long, even with a
financial inducement. An important consideration is whether there are elements of conventional distance language learning courses
which could be replaced, not simply supplemented, by this kind of activity.
2.2 Success with Lexica vocabulary
learning activities
It is not possible to fully discuss their work with
the Lexica program without giving a description
of the program, which space precludes. Details
of the program can be found in the documentation on the project web site (http://wwwiet.open.ac.uk/lexica/welcome.html). Briefly,
the program consists of four activity modules:
• Free selection of new vocabulary items
from the given texts
• Use of the electronic French-English
Collins-Robert Dictionary and on-board
keyword-in-context concordancer to investigate and record information about meanings and use of these items
• Grouping items according to relationships
of meaning and form
• Self-testing for production of the items
The program saves all details of item selection, notes about meaning, groupings, and
results of self-tests. The number of items
processed (from selection to successful production), divided by the number of hours the
program has been in use, gives a general measure of effectiveness for a particular learner's
work. This measure has been shown, in previous
studies, to correlate to some degree
with qualitative assessments of learning (see
Goodfellow 1995a, Ebbrell and Goodfellow
1997). That is to say, strategies which
optimise the rate of successful processing of
items are often linked to deeper approaches to
vocabulary learning in general.
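As a worked illustration, the measure is simply items successfully processed divided by hours of use. The log format below is invented for the example and is not Lexica's actual file format:

```python
def processing_rate(log_entries, hours):
    """Items successfully processed (selected through to correct
    production in the self-test) per hour of program use.
    `log_entries` is a hypothetical list of (item, passed_self_test)
    pairs; Lexica's real log format is not reproduced here."""
    completed = sum(1 for _item, passed in log_entries if passed)
    return completed / hours

# Six items selected, five carried through to successful production,
# over two hours of use:
log = [("entreprise", True), ("atelier", True), ("grève", False),
       ("embauche", True), ("chiffre", True), ("licencier", True)]
rate = processing_rate(log, hours=2.0)  # 2.5 items per hour
```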
The students in this project achieved rates
ranging from nine items per hour to one (in the
case of a student who chose to do very little
self-testing), averaging 5.5. The log files confirm that the time they spent varied between
three and fifteen hours, and the number of
items selected was between 43 and 119 (all but
one achieved the minimum 50). The average
rate can be compared with other groups who
have used the program under conditions of
face-to-face supervision. Table 2 compares
them with an English as a Second Language
(ESL) group who worked as a class with a
supervisor, a Spanish as a Foreign Language
(SFL) group who worked individually with an
observer, and an English as a Foreign Language (EFL) group who received instruction
in the strategies which the program supports
(details of these studies can be found in Goodfellow (1995a) and Ebbrell and Goodfellow
(1997)).
This comparison shows that the Lexica On-Line students did not suffer unduly from the
absence of face-to-face supervision, although
it is likely that access to improved documentation about optimal learning strategies could
have further enhanced their performance. For
this project it was expected that on-line discussion would take the place of the supervision that other groups received, but, as will be
seen in the following section, the students did little
explicit discussing of their use of Lexica, preferring to talk more generally about the context of the vocabulary they were interested in.
From the point of view of the design of the
Lexica program, whilst it has proved to be
usable without direct support, and whilst its
activities were successful in generating a context for discussion and some degree of reflection, there remain a number of issues about
how to promote insights into strategies for
vocabulary learning, in particular those concerned with semantic structure and the
mnemonic grouping of related words and
expressions.
2.3 The on-line discussion
As stated earlier, one of the objectives of the
project was to generate among students an on-line discussion in French which would (a)
have as a topic their language-learning practices, and (b) be sustained by them, with minimal intervention from tutors. These were seen
as key pedagogical and logistical issues in an
approach to distance language-learning in
which student collaboration is central both to
optimising the learning experience, and to
ensuring reasonable workloads for on-line
tutors. The locus for this discussion was the
project forum.
The project forum
The structure of the on-line forum is a
threaded bulletin board system accessed via a
World Wide Web browser such as Netscape or
Internet Explorer. Messages are displayed in a
hierarchy that shows which messages are
Table 2 Comparison with averages from previous studies

Group                          Time (hours)   Items   Correct   Rate
ESL (group supervision)        4.7            16.5    75%       3.5
SFL (individual supervision)   4.8            29      89.5%     6
EFL (instructed)               3.8            25      91%       6.6
Lexica On-Line (self-access)   10.8           62      84%       5.5
Figure 2 The Project Forum
responses to which other ones (Figure 2). Thus
it can be seen ‘who is talking to whom’. Users
read messages by clicking on the message
title. They reply by clicking the Reply button
and typing or pasting their response into the
box that appears. The reply then appears in the
tree structure underneath the message being
replied to. A chain of replies and replies-toreplies is called a ‘thread’. (Technical information about the forum and its software can be
found at http://trout.open.ac.uk/bbs/welcome.html).
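The parent-child message structure described here maps naturally onto a tree. A minimal sketch of how such a threaded display might be assembled follows; the message data (numbered in the style of Figure 2's examples) and the code are illustrative only, not the actual forum software:

```python
from collections import defaultdict

# (message_id, parent_id, title); parent_id None starts a new thread.
messages = [
    (4, None, "CAFE"),
    (66, 4, "Cafe"),
    (75, 4, "Caen"),
    (76, 4, "Caen"),
    (88, 76, "Caen"),
]

def build_threads(messages):
    """Group replies under their parents so 'who is talking to whom'
    can be displayed as an indented tree."""
    children = defaultdict(list)
    for msg_id, parent, title in messages:
        children[parent].append((msg_id, title))
    return children

def render(children, parent=None, depth=0, out=None):
    """Depth-first walk producing one indented line per message."""
    out = [] if out is None else out
    for msg_id, title in children[parent]:
        out.append("  " * depth + f"#{msg_id} {title}")
        render(children, msg_id, depth + 1, out)
    return out

lines = render(build_threads(messages))
```

Each level of indentation corresponds to one step down a reply chain, which is how a threaded bulletin board makes the shape of a discussion visible at a glance.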
In this forum students had a discussion area
for informal chat (the ‘Café’), but the tutorial
focus was discussion about vocabulary learning, initially their use of Lexica and subsequently their exploration of texts on the Francophone Web. The question was whether the
technology could support the kind of discussion which might have benefits in terms of the
development of reflective learning practices,
i.e. could help the students to become more
thoughtful about the processes involved in
their language learning. The tutors’ role in
this was to set initial tasks, such as “report on
the first ten vocabulary items you have
selected and say why you chose them”, and
then to moderate the discussion by encouraging comments and replies. A decision was
made not to do any overt language correction,
in order to encourage spontaneity.
The forum was also used to guide the students’ exploration of the Francophone Web,
via a ‘gateway’ page which contained a list of
sites which had been judged to be easy to navigate and potentially useful as a source of
texts. The selection included fiction, non-fiction, the printed press and the audio-visual
media, reflecting the topics and genres studied
in their Open University French course. Some
students were novice users of the Web, but
others were more experienced, so a French
search engine was included for those who
might wish to extend their explorations. Their
task, introduced in the fourth week of the project, was to find a suitable text, download it
from the Web into Lexica, study its vocabulary, and bring their findings and questions to
the project forum for discussion. All the students completed the search-and-download part
of the task, and, although not all of them
engaged in extended discussion about it, there
were significant contributions from at least
four of them about their findings.
The amount of on-line discussion
Table 3 summarises the amount of on-line discussion that went on over the whole six weeks
of the project, in terms of numbers of actual
contributions from each participant (a contribution is anything from a one-line response to
a half-page report on a task):
Table 3 Numbers of contributions to the forum

Student   No. of contributions      Tutor   No. of contributions
s1        5                         mn      44
s2        16                        es      13
s3        11                        (de     5)
s4        14                        rg      28
s5        16
s6        7
s7        7
s8        15
s9        16
Total     107                       Total   90

The table shows that all the students took
some part in the discussions, with some contributing two or three times as much as others.
In addition to these active contributions, all
students read all the messages sent (indicated
by the forum’s ‘history’ function, which shows
who has read any particular message, and
when). There was also, however, a considerable amount of tutor input, despite the intention to minimise it. Most of the tutor interventions (‘mn’ and ‘es’ above) tended to be short
messages bouncing questions back to students;
those of ‘de’, in brackets, were from an
observer, and those of ‘rg’ were mainly on
technical issues and in English.
A look at the shape of threads reveals that, whilst a lot of the interaction took the conventional ‘classroom’ form of tutor-student-tutor, there was also evidence of developing student-student interaction in several of the threads, for example in the tutor-free ‘Café’ area where no language work needed to be undertaken.
Figure 3 shows a part of the interaction where
students were discussing their forthcoming
visit to Caen for the OU’s summer school.
There was also evidence elsewhere of student-to-student interaction and collaboration
focusing on linguistic issues. This was sometimes helped along by a tutor, but a number of
the student participants contributed quite substantially to this kind of discussion. Figure 4
shows sections from three student-dominated
threads dealing with language questions.
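The shape of such threads can be modelled as a simple parent-child structure. The sketch below uses messages transcribed from Thread 2 of Figure 4, with an assumed linear reply chain (the published figures do not record parents explicitly):

```python
# Minimal thread model: each message records its parent (None = thread root).
# Message numbers, subjects and authors are from Thread 2 of Figure 4;
# the reply structure is assumed linear here for illustration.
messages = [
    (183, "Groupement", "Eamonn", None),
    (186, "mots bizarres", "Miken", 183),
    (196, "Francais a l'ecole", "Stephenn", 186),
]

def print_thread(msgs, parent=None, depth=0):
    """Print the thread as an indented tree, one message per line."""
    for num, subject, author, par in msgs:
        if par == parent:
            print("  " * depth + f"#{num} {subject} ({author})")
            print_thread(msgs, num, depth + 1)

print_thread(messages)
```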
Nevertheless, one conclusion from the evidence of the shape of the on-line discussion
has to be that the project did not get the tutor
role quite right. One of the tutors, in fact,
expressed some concern, in the course of the
work, that she was not sure of what she was
supposed to contribute - this was exacerbated
#4 CAFE 15/4/97, Robin
#66 Cafe 22/4/97, Eamonn
#71 Oxygène 13/8/97, Davidw
#75 Caen 24/4/97, Stephenn
#86 Rendezvous à Caen? 13/8/97, Miken
#139 Salut 10/5/97, Stephenn
#76 Caen 24/4/97, Moyra
#78 Caen 24/4/97, Johnet
#82 Caen 24/4/97, Gerardl
#88 Caen 28/4/97, Johnet
#112 Caen 4/5/97, Eamonn
#114 Caen 4/5/97, Moyra
#135 Rencontre 9/5/97, Eamonn
Figure 3 Discussion in the on-line Café
Thread 1
#122 les groupes basés sûr la deuxiéme syllable
13/8/97, Carolinet
#124 hangman 7/5/97, Moyra
#138 hangman 10/5/97, Stephenn
Thread 2
#183 Groupement 25/5/97, Eamonn
#186 mots bizarres 26/5/97, Miken
#196 Francais a l'ecole 28/5/97, Stephenn
Thread 3
#163 le Web francophone 19/5/97, Moyra
#164 L’obligation dramaturgique 20/5/97,
Davidw
#167 L'obligation de contexte 21/5/97,
Marienoelle
#176 L'obligation de contexte 22/5/97,
Moyra
#177 Pas quebecoise! 23/5/97,
Marienoelle
#178 obligation dramaturgique 24/5/97,
Miken
Figure 4 Student-to-student interaction on language issues
for her by the decision not to do any overt correction of the French. Although the tactic of
reflecting questions back at the group had
some success, the well-attested difficulties of
generating student-student collaboration in
on-line tutorial discussion asserted themselves.
The content of the on-line discussion about
language
Discussion about language issues, the main
focus of the work, mainly occurred through
students responding to questions from the
tutors. Topics included the Lexica tools (dictionary, concordancer and grouping tool), a
small amount of discussion about language
form, issues of word meaning and context, and
the French Web. Although the project set out
to promote talk about vocabulary and vocabulary-learning, the discussion data shows that it
focused less on successes and failures with the
Lexica program, and much more on language
in general, on meaning in particular, and
implicitly on the students themselves as users
of French.
The dictionary was referred to a lot, with its offerings quoted and evaluated. This was
perhaps because it is a familiar tool, and its
way of looking at language is implicitly
understood. The message below, for example,
supplies – in excellent French – a good diagnosis of the shortcomings of the dictionary’s
approach.
C’est évident que, pour certains mots, un dictionnaire ne peut proposer qu’une proportion des contextes possibles. [...] Dans ce cas, Robert ne nous offre pas ‘sous les allures'. (message 95)
It’s clear that, for certain words, a dictionary
can't provide more than a proportion of the possible contexts...in this case, Robert doesn’t give
us ‘sous les allures’
The concordancer, despite being an unfamiliar
tool, captivated students. They understood the
way it worked and were keen to use it, but
quickly became aware of its own shortcomings, which were mainly due to the small size
of the corpus it was working on (50,000
words).
Je croie que j’ai choisi des mots trop specialisés
parceque j’ai trouvé trop peu des références
dans la concordance. (message 85 )
I think I’ve chosen words that are too specialised, as I found too few references in the
concordancer.
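A concordancer of the general kind the students were using can be sketched as a toy keyword-in-context (KWIC) routine; this is a generic illustration, not the Lexica implementation, and the corpus is invented:

```python
def concordance(corpus, keyword, width=24):
    """Return a keyword-in-context line for every occurrence of `keyword`."""
    words = corpus.split()
    hits = []
    for i, w in enumerate(words):
        if w.lower().strip(".,;:!?\u2019'") == keyword.lower():
            left = " ".join(words[max(0, i - 3):i])
            right = " ".join(words[i + 1:i + 4])
            hits.append(f"{left:>{width}} | {w} | {right}")
    return hits

# Invented mini-corpus; a specialised word in a 50,000-word corpus
# (like Lexica's) would likewise yield only a handful of hits.
corpus = ("Son allure était étrange. Il marchait d'une allure rapide, "
          "et son allure générale surprenait tout le monde.")
for line in concordance(corpus, "allure"):
    print(line)
```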
The grouping task gave some students problems, which they set about surmounting. The
quote below shows a student facing difficulties caused by polysemy, and offering a solution.
Je trouve que ce n’ est pas facile de décider ou
le mettre. J’ai un groupement que j’appelle les
gens ou je mets les mots qui décrivent les émo tions humaines. Peut-être il faut mettre allure la
dedans. Il y a tout une gamme des mots comme
ça, par exemple squelettique ou racoleur qui ne
sont pas trop facile de placer de catégorie. Une
solution est de mettre les mots dans deux ou
trois groupements. (message 113 )
It’s not easy to decide where to put them. I’ve got a grouping which I call ‘people’ where I put words which describe human emotions. Perhaps I should put ‘allure’ with them. There's a bunch of words like, for example, ‘squelettique’ or ‘racoleur’ which are not easy to categorise. One solution is to put words into two or three groupings.
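The student's solution, placing a polysemous word in more than one grouping, amounts to a many-to-many mapping between words and groupings; a minimal sketch, with illustrative grouping names:

```python
from collections import defaultdict

# A word may belong to several groupings, as the student suggests.
groupings = defaultdict(set)

def classify(word, *groups):
    """Assign a word to one or more groupings."""
    for g in groups:
        groupings[g].add(word)

# Illustrative assignments (grouping names are invented):
classify("allure", "les gens", "le mouvement")   # polysemous: both fit
classify("squelettique", "les gens", "l'apparence")

def groups_of(word):
    """Every grouping a word belongs to, sorted for stable output."""
    return sorted(g for g, words in groupings.items() if word in words)

print(groups_of("allure"))
```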
Despite such self-help, a few participants
found the grouping task challenging, and had
some questions about its relevance. This was
symptomatic of a general disinclination to
engage with language relationships of a more
abstract kind, e.g. lexical classification, morphological relationships, suffixation, issues of
word frequency. For some, this may have been
the result of their unfamiliarity with the metalanguage, but we believe that there may be a
more fundamental objection that such things
are only of interest to expert linguists, not to
people who ‘just want to use’ the language.
Nevertheless, when pushed, some of them
showed that they were capable of reflecting at
this level. The quote below shows a student
rejecting an avenue of research suggested by
the tutor:
Par contre, des mots se terminants en ‘-ière’ ne me paraissent pas aussi prometteurs. Ce suffixe
me semble dénoter (toujours, quelquefois?) un
récipient, ce qui contient quelque chose: du thé,
de la marne, des taupes etc. Mais le sens d’un
mot se trouve dans sa racine, n'est-ce pas?
(message 123 )
On the other hand, words ending in ‘-ière’ don't
seem as promising to me. This suffix seems to
me to mean (always, sometimes?) a container,
something that contains something, such as tea,
marl, moles etc.But you find the sense of a word
in the root don't you?
In this message, the student displays a good
grasp of the semantic functioning of suffixes.
One might be tempted to say that he ‘betrays’
this knowledge: earlier in the conversation, he
had not revealed the extent of his language
awareness. He does it as a result of arguing
with his tutor.
The bulk of student-to-student interaction on the Forum was about the meaning of particular words and expressions, particularly in
terms of translation and context. It was around
these topics that the discussion showed most
signs of becoming self-sustaining. In the quote
below, a student is asking for assistance and
offering her peers a suggestion (as a gesture of
thanks perhaps, or in anticipation of their
help). Her call is answered by the author of the
message that follows, which focuses on the
issue of context.
Alors, j’ai choisi un texte qui m'intéresse beaucoup et dans lequel on se trouve la phrase ‘obligation dramaturgique’. Il y a personne qui en connaît la signification? Le texte concerne l’élection françsaise qui va bientôt. Une autre phrase que je trouverai très utile, je le pense, bien que la traduction ne soit pas difficile, c’est ‘les précautions oratoires’. J’espère que vous la trouverez utile, aussi. (message 163)
So I chose a text I’m very interested in, in which
the phrase ‘obligation dramaturgique’ appears.
Does anybody know what it means? The text is
about the forthcoming French election. Another
phrase which I think would be useful, though it's
not difficult to translate, is ‘les précautions oratoires’. I hope you find it useful too.
Je suggère que cette phrase veut dire «le
besoin d’être vu de faire quelque chose ou le
besoin de faire un récit mimé d'un rôle» mais on
désirerait d’avoir plus d’information en ce qui
concerne le contexte de cette phrase. Est-ce
que ma suggestion saisit la signification de votre
phrase dans son contexte? (message 164)
I suggest that your phrase [obligation dramaturgique] means ‘the need to be seen doing
something, or the need to tell a story in mime’,
but it would be good to have more information
about the context of that phrase. Does my suggestion capture the meaning of your phrase in
its context?
Exchanges about translation and context were ways of discussing language which were familiar to all; they were in line with students’ need to cling to their own language or to familiar referents, and they were also currencies for social exchange, because there were enough peer ‘experts’ among the group for them to swap valuable contributions. This contrasted with discussions on groupings or linguistic structures: there were no expert linguists among them, so a discussion of suffixes would have been no way to make friends. There was no ‘social’ advantage to pursuing those topics.
For a student learning a second language,
talking about that language is an activity
through which identity is constructed. Not
only is proficiency revealed, but education,
experience and other aspects of personal background too. It is not surprising that some find
such discussions in the face-to-face context
threatening. What an on-line forum offers is the chance to be much more in control of this process. Contributions can be thoroughly prepared, and an absence of response is less likely to be marked. There is also more opportunity to
observe and assimilate norms of group interaction. A contribution such as the one shown below would serve as a model, to be studied at will, of how to engage in the relatively unfamiliar territory of a social discussion in and about French. It is a report on some translation work which a student took it upon himself to do after finding a text on the Web. We are told what procedure was followed. The student also communicates his feelings about the task (delectation), offers a translation and justifies his choice, and then starts a discussion of the metaphors associated with the semantic field, and their etymology – all unbidden:
J’ai choisi un texte politique parce que c'était une semaine très importante en France. J’ai cherché tous les journaux et enfin j’ai trouvé un debat entre Laurent Fabius et Alain Madelin en Liberation. Ici on trouve plus de phrases et mots intéressants. En particulier c’est difficile à traduire les mots qui expliquent les idées – comme par exemple ‘ultralibéralisme’. à mon avis on peut utiliser en anglais ‘Thatcherism’ parce que pour nous le mot ‘liberal’ est toujours associé à les idées de la centre-gauche et pas avec la droite, comme le RPR. Je me délecte à trouver des expressions très métaphoriques, comme – ‘Là démocratie est bonne fille, mais elle n’est pas sotte’, ou, – ‘Il ne suffit pas d’agiter un chiffin rouge devant la France pour qu’elle perde la tête’.
J’ai trouve aussi des mots intéressants tels que, ‘berner’ (‘to fool’ or ‘to hoax’ but also ‘to toss in a blanket!’). Est ce quelqu'un qui peut m’expliquer l'origine de cette seconde signification? Pendant faisant du surf j’ai pris plaisir à lire ‘le virtual baguette’. Ici on a lu l’explication de la guillotine avec beaucoup d’expressions humoristiques. (Merci Stephen pour le renseignement. Je l’ai trouvé cette page par ‘Yahoo’ très facilement comme tu as dit).
En résumé j’aime bien le WWW en français et je continuerai flâner là après le fin de Lexica. Enfin, merci John et Moyra. Les traductions que vous avez suggestées semble exactes à moi. Vraiment les phrases et les idées politiques sont difficile à traduire! (message 204)
I chose a political text because it was a very important week in France. I searched all the newspapers and I finally found a debate between Laurent Fabius and Alain Madelin in ‘Libération’. There were interesting words and phrases there. Words expressing ideas are particularly difficult to translate – for instance ‘ultralibéralisme’. I would think ‘Thatcherism’ could be used as a translation, because the English word ‘liberal’ is always associated with the ideas of the centre-left and not with the right, as is the case with the French RPR. I delight in discovering metaphors like ‘Democracy may be prepared to put up with a lot but it’s no fool’, or ‘Showing France a red rag won't be enough to make her lose her senses’.
I also found interesting words like ‘berner’ (‘to fool’ or ‘to hoax’ but also ‘to toss in a blanket!’). Can anyone explain to me the origin of the second meaning? While surfing I really enjoyed reading the ‘virtual baguette’. There I read the explanation of the ‘guillotine’, and found many humorous phrases. (Thank you, Stephen, for the info. I found the site very easily via Yahoo, as you had suggested.)
In summary, I really like the francophone Web, and I’ll keep on roaming it after the Lexica project has ended. Finally, thanks John and Moyra. I think that the translations you suggested are good. But political phrases and ideas are really difficult to translate, aren’t they?
This student has achieved a position of fully
engaged member of the learning community,
and is declaring this to the group, in French.
Evidence of the re-use of language in the
on-line discussion
An implicit assumption underlying the attempt
to promote discussion in L2 is that some new
language may be learned either in a considered
way, as a result of correction, or in a more
osmotic way, via imitation of a model, from a
tutor, a peer or an authentic stimulus. Partly
through shortage of time, and partly because of
the abstentionist error-correction policy in this
project, accuracy in French was not discussed
by students or tutors, so re-use of language arising from correction does not figure in the discussion data. Re-use of the second type, of vocabulary and structures encountered in the tutors’ contributions, in each other's messages, in Web texts or in the project guide, is a subject of continued investigation. The clearest evidence is
continued investigation. The clearest evidence is
of re-use of Web-related terminology and
phraseology – we assume that phrases like faire
une recherche, charger un texte dans, grâce au
moteur de recherche Ecila, le forum or
télécharger, all of which appear in student message text, have come from the dedicated glossary given with the project guide, as such terms
do not appear in (even recent) conventional dictionaries or in the electronic one which students
were using. The search for evidence of more
subtle kinds of ‘osmotic’ re-use is an important
research issue. The question whether it happens
in on-line discussion, and if so, how it can be
detected, poses a challenge to our methods of
analysis and interpretation of on-line discussion
data, as well as to our theories of language
acquisition. In the post-project questionnaires
the students claim to have learned a lot of
French, but how can this be demonstrated? The
relatively small amount of discussion data generated by this project is unlikely to yield much
in the way of evidence of re-use of a more general kind. This particular area of research will be
one of the objectives of a scaled-up version of
the project, planned for 1998.
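The explicit kind of re-use described here, glossary terminology appearing in student messages, can be detected by simple substring matching over the message archive. In the sketch below the glossary phrases are those quoted in the text, while the message bodies are invented for illustration:

```python
# Glossary phrases named in the article; the message texts are invented.
glossary = ["faire une recherche", "charger un texte dans",
            "moteur de recherche", "le forum", "télécharger"]

messages = {
    163: "J'ai utilisé le moteur de recherche Ecila pour faire une recherche.",
    204: "Je continuerai à télécharger des textes du Web.",
}

def reused_terms(messages, glossary):
    """Map message number -> glossary phrases found in that message."""
    found = {}
    for num, text in messages.items():
        hits = [g for g in glossary if g.lower() in text.lower()]
        if hits:
            found[num] = hits
    return found

print(reused_terms(messages, glossary))
```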
3. Summary – what has been learned
Distance learners are able to use the Lexica
program as effectively as those who have face-to-face support. The activities of the program provide an appropriate framework for a strategic approach to the learning of vocabulary, and the on-line discussion forum is an effective
platform supporting reflective discussion of
issues arising out of the application of these
strategies. The forum and the program together
provide the means and the rationale for the
exploitation of texts found on the World Wide
Web. On-line conversation by students proceeds initially on the basis of questions deemed ‘worth asking’ by the group. Such topics with value for reflective learning practices
include talk about translation and discussion
about context – including personal experience.
Exchanges of the latter type may be favoured
by the characteristics of on-line discussion
which afford participants more control in the
presentation of themselves and the assimilation
of group norms. Discussion areas with which
students are less likely to engage initially are
those concerned with linguistic form; this is
probably because it is considered to be of interest only to expert linguists. Initially, the role of
tutors is likely to be a reflection of the conventional classroom model of tutor-student conversation, but self-sustaining discussion by students can be promoted by the tactical use of
‘bouncing’ questions back, and by focusing on
areas of discussion which they themselves
have introduced. Students will take up and re-use relevant terminology, but the search for
evidence of more implicit types of acquisition
is problematic.
In general, the responses students gave in the post-project questionnaires were positive
and enthusiastic, reflecting the work they had
put into it. They felt that this project represented an enhancement of their language
learning experience and were keen that it
should be incorporated in a more extended
form into their OU course. Further development of the approach is now underway, in the
context of a research programme funded by
the Open University, looking at principles of open learning of languages on-line, and focusing on the following key issues:

• Promotion of student-student on-line interaction. It is necessary to understand how
the social dimensions of the construction
of personal and group identity in an online L2 discussion, affect the involvement
of individuals. Strategies for supporting
learners working together should take into
account the needs that different individuals
have for security in the presentation of
themselves. This work will take account of
experience in on-line language learning
elsewhere (e.g. the MERLIN project,
http://www.hull.ac.uk/langinst/merlin).
• Promotion of reflective discussion of linguistic issues. Student resistance to ‘expert
linguist’ topics needs to be defused, if the
full benefits of reflection on learning practice are to be realised. The tutor’s role is
involved in this, especially in the development of metalanguage, as is the representation of these types of relation in the design
of computer-based tools (such as the Lexica program) and of on-line documentation
and study guides. Implicated also are questions related to the formal aspects of
coursework, for example the issue of
assessment. Reference will be made to
existing criteria for the assessment of live
conversational interaction developed at the
Centre for Modern Languages.
• Investigation of re-use of ‘new’ language.
This is both a theoretical and a methodological issue, involving the development
of techniques for identifying specific
examples of language use in a database of
L2 on-line discussion. Data is being examined from a number of sources, including
different types of computer conference and
email discussion. Pedagogical considerations will arise from any evidence that can
be found of systematic re-use by learners or
modelling by learners and tutors.
The next stage in the development of the Lexica On-line project will be a re-designed and
larger-scale version of the course, to be run
with OU French students in the spring of 1998.
References
Goodfellow R. (1995a) A computer-based strategy
for foreign language vocabulary learning.
Unpublished PhD thesis, Institute of Educational Technology, Open University.
Goodfellow R. (1995b) ‘A Review of Types of Programs for Vocabulary Instruction’, Computer-Assisted Language Learning 8, 2–3.
Ebbrell D. and Goodfellow R. (1997) ‘Learner,
teacher and computer – a mutual support system’. In Kohn J., Rüschoff B. and Wolff D.
(eds.), New Horizons in CALL – Proceedings of
EUROCALL 96, Szombathely: Berzsenyi
Dániel College, 207–221.
The Open University (1994) L120 Ouverture: a fresh start in French, Milton Keynes: The Open University.
The Open University (1997) L210 Mises au point: French language and culture, Milton Keynes: The Open University.
Robin Goodfellow is a lecturer in New Technology
in Teaching at the Open University's Institute of
Educational Technology. His research interests in
foreign language learning are in lexical acquisition
and learning via asynchronous networks.
Marie-Noëlle Lamy is a senior lecturer in French at the Open University's Centre for Modern Languages. Her research interests are in French lexicology and syntax and student strategies for distance-learning of French.
ReCALL 10:1 (1998) 79–85
Les outils de TALN dans SAFRAN
Marie-Josée Hamel
UMIST
The SAFRAN project aims to develop an interface dedicated to the computer-assisted teaching of French, into which natural language processing (NLP) tools are progressively integrated. Within the SAFRAN project, these tools are the FIPSvox syntactic analyser and speech synthesiser, the FR-Tool conceptual electronic dictionary and the FLEX conjugator. They give access to rich and varied linguistic resources, encourage experimentation and, finally, provide support for diagnosis. This article reports on two years of scientific activity, during which our efforts focused on developing a module for teaching French phonetics that integrates the NLP tools mentioned above.
1. Research objective and hypothesis

The SAFRAN project (Système pour l'Apprentissage du FRANçais) falls within the paradigm of intelligent computer-assisted instruction (EIAO) systems and aims to develop a system for teaching spoken and written French. The interest of this system lies in the use, for didactic purposes, of tools derived from research in natural language processing (NLP), such as the conceptual electronic dictionary, the speech synthesiser, conjugation tools, the syntactic analyser, etc. With SAFRAN, we intend to show that some of these tools have now reached a stage of development sufficient for them to be integrated into language-teaching applications, where they will help to increase the flexibility of the interface, notably in the management and processing of linguistic resources, and to enrich teaching and learning environments.
2. EIAO systems

Intelligent computer-assisted instruction (EIAO) systems share their architecture with that of intelligent tutoring systems (Yazdani 1987, Frasson and Gauthier 1990). Their mission is to integrate techniques borrowed from artificial intelligence into CAI systems, in order to make them better adapted to the learner's profile and needs. They generally comprise an expert module, a learner module, a tutorial module and a user interface. The expert module stores all the knowledge about the domain to be taught, that is, the factual and procedural data that describe that domain. The learner module stores all the knowledge about the learner, that is, information about their profile (background, learning style, etc.) and about the state of their knowledge of the expert domain. The teacher (tutor) module gathers information concerning the selection, segmentation and reformulation of the knowledge about the domain to be taught, according to learning objectives and strategies determined from the knowledge about the learner. Finally, the user interface serves as the channel for delivering and collecting knowledge, managing what is called the user-machine (learner-expert) dialogue.

EIAO systems are in principle dynamic: in a given application, each module is enriched progressively with the knowledge accumulated in the other modules. The information is therefore constantly being updated.
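The four-module EIAO architecture (expert, learner, tutor, user interface) can be sketched schematically; the classes and the "teach the first unknown fact" policy below are illustrative, not drawn from any actual EIAO system:

```python
from dataclasses import dataclass, field

@dataclass
class ExpertModule:
    """Factual and procedural knowledge of the domain to be taught."""
    facts: dict = field(default_factory=dict)

@dataclass
class LearnerModule:
    """Learner profile and current state of domain knowledge."""
    profile: dict = field(default_factory=dict)
    known: set = field(default_factory=set)

class TutorModule:
    """Selects domain knowledge for this learner (toy selection policy).

    The user-interface module, which would carry the learner-expert
    dialogue, is omitted from this sketch.
    """
    def next_item(self, expert, learner):
        # Teach the first fact the learner does not yet know.
        for item in expert.facts:
            if item not in learner.known:
                return item
        return None

expert = ExpertModule(facts={"liaison": "...", "élision": "..."})
learner = LearnerModule(profile={"style": "reflective"}, known={"liaison"})
print(TutorModule().next_item(expert, learner))  # élision
```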
3. NLP tools in ELIAO systems

What is specific to EIAO applications for the teaching and learning of languages (ELIAO) is that they integrate one or more NLP components. These are found in the expert module of ELIAO applications. The purpose of this module, let us recall, is to model the domain of knowledge to be taught. When the domain in question is a language, the knowledge to be modelled is linguistic knowledge, definable in terms of the following levels of representation: phonetics, lexis, syntax, semantics and pragmatics.
3.1 Functions of NLP tools

NLP tools serve several functions1, including the automatic analysis and generation of segments of spoken and/or written language. When segments of spoken language are processed, the analysis tool is called a speech recognition system, while the generation tool is called a speech synthesiser. For written language, the analysis tool is the parser (also called the analyser); the generation tool is the sentence and/or text generator.
3.1.1 Processing written language

The parser is the NLP tool that has so far been favoured in ELIAO research. It is found in application prototypes such as LINGER (Yazdani, 1991), Alice (Lawler and Yazdani, 1987), GPARS (Loritz, 1992), STASEL (Payette and Hirst, 1992), COALA (Pieneman and Jasen, 1992), MSLE (Frederiksen, Donin and Décary, 1995), CALLE (Rypa and Feuerman, 1995) and BRIDGE (Sams, 1995). In these applications, the parser serves essentially as a diagnostic aid. Its function is to provide a representation of the learner's written input (word/sentence/text), which is then compared with a representation produced by the expert system under the same conditions. The result of the comparison is used to establish a diagnosis, which generally rests on the notion of error (i.e. on the gap between the learner's input and that of the expert system), but may also rest on the notion of message comprehension (i.e. on what the learner's and the expert system's representations have in common).

The integration of automatic sentence and/or text generation tools into ELIAO systems is less common. The ILLICO system (Ayache et al., 1997) is, however, one example. In this system, the learner is invited to create his or her own sentences with the help of a generator, which assists step by step with the composition by proposing word choices according to the current syntactic and semantic context. The system operates in a micro-world (i.e. a closed linguistic environment).
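The diagnostic comparison of a learner representation against an expert representation can be reduced, for illustration, to a set comparison over analysed tokens; real parsers compare full syntactic structures, and the learner sentence below is invented:

```python
def diagnose(learner_analysis, expert_analysis):
    """Error-based and comprehension-based comparison of two token analyses."""
    learner, expert = set(learner_analysis), set(expert_analysis)
    return {
        "errors": sorted(learner - expert),      # learner forms absent from expert
        "missing": sorted(expert - learner),     # expert forms the learner lacks
        "understood": sorted(learner & expert),  # shared material
    }

# Toy example: the learner writes "associé à les idées" for "associé aux idées".
learner = ["associé", "à", "les", "idées"]
expert = ["associé", "aux", "idées"]
print(diagnose(learner, expert))
```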
3.2 Processing spoken language

From a strictly NLP point of view, ELIAO applications can be said to be still silent: current systems include few or no speech-processing components, whether for recognition or for generation. One well-known project in the field, the SPELL project (Hiller et al., 1994), which aimed to integrate a speech recognition system into an ELIAO interface for English, was not followed up. As for commercial products that include a speech recognition component (Prof de Français2, Dynamic English3, Talk to Me4, etc.), the results obtained prove more or less satisfactory (speech is processed in non-continuous segments, problems of noise and accent still disturb the analysis, etc.). It is a fact that research in speech is less advanced than research in writing. We shall no doubt have to wait a few more years before full and reliable integration of speech recognition tools into ELIAO systems can be envisaged.

Speech synthesis remains the most neglected of the NLP technologies in ELIAO. This is all the more surprising given that it offers tools that have their uses in language teaching (Dutoit, 1997) and that are in fact more reliable, more economical and more accessible than recognition tools (Last, 1989). To our knowledge, no current ELIAO application prototype integrates genuine speech synthesis tools. Two major reasons seem to explain this gap. The first, and most important, is that NLP tools exploiting this technology are rare and therefore not widely available. The second is that these tools, when they are available, have sometimes not reached the maturity required for ELIAO. Maturity lies in the robustness of an NLP tool, that is, in the fact that it offers exhaustive, reliable and consistent coverage of the linguistic phenomena it is meant to describe. This, we believe, is the case for the NLP tools in SAFRAN and, in particular, for the FIPSvox synthesiser.
4. The NLP tools in SAFRAN

As mentioned at the beginning of this article, our main objective with the SAFRAN project is to show the advantages that the integration of NLP tools offers for language teaching. Moreover, and as far as possible, we have sought to re-use tools originally developed for other applications. Thus the syntactic analyser integrated into SAFRAN was originally developed for a machine translation system, the synthesiser for a text-to-speech reading system, and so on. We now briefly describe these tools.
4.1 The FIPSvox speech synthesiser

FIPSvox is a French text-to-speech synthesis system (Gaudinat and Wehrli, 1997). Built on the FIPS syntactic analyser (Laenzlinger and Wehrli, 1991), it adds to the latter a phonetic database, a phonetisation module and a voice output. Very schematically, the system works as follows: the input text is first submitted to a detailed syntactic analysis, which makes it possible, among other things, to resolve practically all lexical ambiguities involving heterophonous homographs (words that share the same spelling but have distinct pronunciations, e.g. "président": noun or verb?). The analysed structures are then phonetised, using lexical information (the phonetic database) and an expert system5 responsible for phonetising unknown words. Certain phonetic adjustment rules then apply, handling liaison, elision, denasalisation, etc. Finally, a prosodic component6 intervenes, responsible for determining the fundamental frequency and duration values for each segment.
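The FIPSvox processing chain just described (syntactic analysis, phonetisation, adjustment rules, prosody) can be sketched as a pipeline of stages; every function below is a stub standing in for the real module, and the phonetic forms and prosodic values are illustrative:

```python
# Schematic pipeline; each stage is a stub, not FIPSvox code.
def syntactic_analysis(text):
    # FIPS resolves heterophonous homographs here, e.g. "président" N vs V;
    # this stub only tags that one word.
    return [(w, "N" if w == "président" else "?") for w in text.split()]

def phonetise(tagged):
    lexicon = {("président", "N"): "pʁezidɑ̃"}  # toy phonetic database
    return [lexicon.get(t, t[0]) for t in tagged]

def adjust(phonemes):
    return phonemes  # liaison, elision, denasalisation rules would apply here

def prosody(phonemes):
    # Illustrative constant fundamental-frequency and duration values.
    return [(p, {"f0": 120, "dur_ms": 80}) for p in phonemes]

result = prosody(adjust(phonetise(syntactic_analysis("le président"))))
print(result)
```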
4.2 The FR-Tool conceptual dictionary
FR-Tool (FRench-Tool) (Hamel, Nkwenti-Azeh and Zahner, 1996) is an electronic conceptual dictionary which provides, for each entry in the lexicon, a non-linearised representation of the linguistic knowledge related to the domain of the selected entry. The FR-Tool database is organised into mandatory and optional fields. The mandatory fields correspond more or less to those of traditional dictionaries (headword, lexical category, lexical domain, definition, translation). The optional fields are of two kinds: grammatical fields (morphology, subcategorisation) and semantic fields (syn/antonym, hyper/hyponym, mag/antimag, related, derived, usage, idiomatic, etc.). The research surrounding the definition of the semantic fields draws on Mel’cuk’s (1982) work on lexical functions. To date the database totals some 7,000 word-concepts.
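As an illustration of the field structure just described, a single FR-Tool-style entry might be modelled as below. The class and field names are assumptions made for this sketch, and the semantic fields echo Mel’cuk’s lexical functions only loosely.

```python
# Hypothetical model of an FR-Tool-style entry: mandatory fields plus
# optional grammatical and semantic fields.
from dataclasses import dataclass, field

@dataclass
class Entry:
    # mandatory fields, roughly those of a traditional dictionary
    headword: str
    category: str
    domain: str
    definition: str
    translation: str
    # optional fields: grammatical and semantic
    grammar: dict = field(default_factory=dict)    # morphology, subcategorisation
    semantics: dict = field(default_factory=dict)  # syn/antonym, hyper/hyponym, ...

maison = Entry(
    headword="maison", category="n.f.", domain="habitat",
    definition="bâtiment servant d'habitation",
    translation="house",
    semantics={"synonym": ["demeure"], "hyperonym": ["bâtiment"]},
)
print(maison.semantics["hyperonym"])  # ['bâtiment']
```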
4.3 The FLEX conjugator
FLEX (FLEXion) is a French verb conjugation system which allows any French verb in the lexicon to be consulted in all tenses and moods. Developed in Modula on the basis of morphological derivation rules, FLEX can in principle conjugate any verb of the French language. To date its database totals more than 15,000 verbs.
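A rule-based conjugator in the spirit of FLEX can be sketched for the regular first-group (-er) verbs. The endings table is standard French morphology; the function and variable names are invented for this example.

```python
# Toy sketch of FLEX-style rule-based conjugation, restricted here to
# regular first-group (-er) verbs in the present indicative.

PRESENT_ER = ["e", "es", "e", "ons", "ez", "ent"]
PERSONS = ["je", "tu", "il/elle", "nous", "vous", "ils/elles"]

def conjugate_present(infinitive):
    """Strip the -er ending and attach the present-indicative endings."""
    if not infinitive.endswith("er"):
        raise ValueError("this sketch only handles regular -er verbs")
    stem = infinitive[:-2]
    return {person: stem + ending for person, ending in zip(PERSONS, PRESENT_ER)}

forms = conjugate_present("parler")
print(forms["nous"])  # parlons
```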
5. SAFRAN
Our first two years of research activity were devoted to developing a module for the teaching and learning of French phonetics. We first produced the didactic content, content designed to exploit the NLP tools just described. We then developed a multimedia environment to host both the content and the NLP tools. A brief account of this work follows.
5.1 Development of didactic content
Each lesson unit developed (there are ten) deals with a theoretical aspect of French phonetics and proposes, in parallel, experimentation and practice activities aimed at better comprehension and reinforcement of the material taught. The units were designed around the needs of a specific audience of learners: Bulgarian-speaking university students enrolled in a programme on the didactics of teaching French as a foreign language. The theoretical component thus covers notions relating to the description of the segmental phenomena (vowels, semi-vowels and consonants) and supra-segmental phenomena (liaison, unstable ‘e’, prosody) which characterise French phonetics. The presentation of the material includes numerous reminders, definitions and illustrations. The experimental part is devoted to demonstrations: it places the learner in situations where he or she must draw on personal knowledge and intuition to solve problems relating to observed phenomena (categorial ambiguity, denasalisation, epenthesis, the prosody-syntax relation, etc.). Finally, the practice part, through varied exercises, emphasises auditory discrimination and repetition work, but also the relations between phonetic and graphic forms. Each lesson unit opens with a series of learning objectives and closes with a synthesis presented as summary graphs and keywords. Advice for the future teacher is also provided, along with short biographical notes and a bibliography of works consulted.
5.2 Interface design
SAFRAN comprises three interfaces which manage the dialogue between the system and the user: SAF-tuto, SAF-exo and SAF-dev. They were developed with the Delphi programming language.
5.2.1 SAF-tuto
SAF-tuto (SAFRAN-tutoriel) is the interface which renders the lesson units of the teacher module as hypermedia scenarios, thanks to a sophisticated system of hypermedia links. These links open secondary windows, identify sensitive zones within graphics, display textual definitions and/or play spoken definitions (via FIPSvox) attached to text zones, and launch multimedia applications (animation, video, audio) and external programs (SAF-exo, e-mail, etc.). SAF-tuto also handles the management and integration of the NLP tools (FIPSvox, FR-Tool, FLEX).
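The link mechanism described above amounts to a table mapping sensitive zones to actions. A minimal sketch follows, with invented zone names and plain strings standing in for the real windowing, synthesis and program-launching calls:

```python
# Hypothetical sketch of SAF-tuto-style hypermedia link dispatch: each
# sensitive zone maps to an (action, target) pair. Everything here is
# illustrative, not SAFRAN's actual implementation.

ACTIONS = {
    "definition": lambda target: f"window: {target}",   # open a secondary window
    "speak":      lambda target: f"FIPSvox -> {target}",  # spoken definition
    "launch":     lambda target: f"exec: {target}",     # external program
}

LINKS = {  # zone id -> (action, target); editable via a tool like SAF-dev
    "liaison":  ("definition", "enchaînement d'une consonne finale"),
    "exemple1": ("speak", "les amis arrivent"),
    "exo3":     ("launch", "SAF-exo"),
}

def follow(zone):
    """Dispatch the action declared for the activated zone."""
    action, target = LINKS[zone]
    return ACTIONS[action](target)

print(follow("exemple1"))  # FIPSvox -> les amis arrivent
```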
5.2.2 SAF-exo
SAF-exo (SAFRAN-exercices) is the interface which manages the exercises of the teacher module and the associated databases of the expert module (phonemes, minimal pairs, lexical items, etc.). The interface comprises four strands, each offering the learner, in a more or less game-like form, activities built around practising the sounds of French. The first strand deals with auditory discrimination and the second with repetition. The third, the graphie-phonie (spelling-to-sound) strand, deals with the relation and transfer from the written to the spoken form, while the last, the phonie-graphie (sound-to-spelling) strand, deals with the reverse relation, that is, the passage from the spoken to the written form.
5.2.3 SAF-dev
SAF-dev (SAFRAN-développement) is the most recent component of SAFRAN. It is an interface for entering and managing the databases of SAF-tuto and SAF-exo. Its main function is to allow the teacher-user to update the databases of the expert module (the lexicons) and of the teacher module (the content of the tutorials and exercises). SAF-dev also makes it possible to modify the hypermedia links declared in the textual and graphic content of SAF-tuto and SAF-exo.
6. Integration of the NLP tools into SAFRAN
6.1 FIPSvox in SAFRAN
i. A reference tool
FIPSvox can synthesise any segment of written French. In SAFRAN, FIPSvox serves the learner as a phonetic reference tool: it allows him or her to hear, at any time, the pronunciation of the words or sentences he or she has selected in the SAF-tuto interface. Using a synthesiser such as FIPSvox in SAFRAN has the advantage of being economical, since it requires no pre-recording, and above all of offering maximum flexibility, since any utterance can be synthesised.
FIPSvox’s synthesis capabilities have also made it possible to create a tool for searching for words via their phonetic form. This tool allows the learner to pursue a lexical search on a word whose spelling is faulty (“rancontrer”, “astronote”, “aporter”). This application responds in particular to a need of the Bulgarian learner, who has difficulty managing the correspondence between the Cyrillic and Latin alphabets, which causes, among other things, spelling problems.
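Such a phonetic search can be approximated by indexing the lexicon under a phonetic key, so that a misspelling which preserves the pronunciation still retrieves the intended word. The three rewrite rules below are toy stand-ins for a full grapheme-to-phoneme conversion like that of FIPSvox:

```python
# Hypothetical sketch of phonetic word search: misspelt input such as
# "astronote" retrieves "astronaute" because both share a phonetic key.
import re

LEXICON = ["astronaute", "apporter", "rencontrer"]

def to_phonetic(word):
    """Crude phonetic key: a few French spelling collapses (toy rules)."""
    w = word.lower()
    w = re.sub(r"an|en", "ã", w)   # nasal vowel spelled 'an' or 'en'
    w = re.sub(r"au|eau", "o", w)  # 'au'/'eau' pronounced like 'o'
    w = re.sub(r"pp", "p", w)      # double consonant -> single
    return w

INDEX = {}
for word in LEXICON:
    INDEX.setdefault(to_phonetic(word), []).append(word)

def phonetic_lookup(misspelt):
    """Return all lexicon words sharing the input's phonetic key."""
    return INDEX.get(to_phonetic(misspelt), [])

print(phonetic_lookup("astronote"))  # ['astronaute']
```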
ii. A demonstration and experimentation tool
The tutorial units of the phonetics module use FIPSvox to illustrate, in demonstration situations, a variety of phenomena such as liaison, the dropping of the unstable ‘e’, the phonetic ambiguity created by heterophonous homographs, the different prosodic patterns, etc. The examples heard in the demonstrations are produced by applying FIPSvox to text segments stored in the SAF-tuto database. These segments can easily be modified with SAF-dev.
The learner is, moreover, also invited to test the synthesiser by submitting his or her own examples. FIPSvox lends itself well to this kind of experimentation activity, since its phonetic coverage is relatively exhaustive. Indeed, with the exception of optional liaisons, which are treated by default as forbidden liaisons (which in any case is not an error as such), FIPSvox covers all the phenomena characterising French phonetics.
iii. A self-evaluation aid and diagnostic support tool
Automatic error detection, as mentioned above, is an important component of intelligent CALL (ELIAO). However, since SAFRAN does not use a speech recognition tool, the learner’s speech signal cannot be analysed as such. To compensate, we have automated certain correction-related tasks using the capabilities of FIPSvox. In this perspective, we have sought to provide the learner with means of self-evaluation.
FIPSvox thus intervenes in all the strands of SAF-exo. In the discrimination, repetition and graphie-phonie strands, the output of FIPSvox serves as a model for comparison, a model accompanied by explanations where appropriate. In the phonie-graphie strand, the learner’s written answers, generally single words, are handled by means of a phonetic search performed on the answer. Following this search, the lexical equivalents retrieved by FIPSvox are displayed on screen, and the learner can compare his or her answer with those of the synthesiser. In cases of homonymy, all the equivalents are presented.
6.2 FR-Tool and FLEX in SAFRAN
i. Reference tools
FR-Tool is an electronic conceptual dictionary which gives the learner online access to rich and varied lexical resources. Its primary vocation in SAFRAN is that of a lexical reference tool. Its adaptation for this project consisted in providing a Bulgarian translation for each of the headwords in the database. The addition of about a hundred terms taken from the French-Bulgarian didactic-thematic dictionary of Decoo and Vessélinov (1995) has furthermore made it possible to extend this conceptual lexicon. These terms relate to selected themes (existence, time, space, quantity, quality, relations, etc.) which form part of the Bulgarian learner’s study programme.
The role of the FLEX conjugator in SAFRAN is likewise that of a grammatical reference tool. The adaptation work surrounding its integration into SAFRAN consisted mainly in transferring the data to the PC platform and rewriting the interface in Delphi. The conjugator, although independent of the conceptual lexicon, can now be activated from it.
7. Current and future work
As regards the phonetics module, we are currently interested in the representation of the speech signal. We aim to develop visual supports which will be superimposed, in real time, on the synthesised signal and the phonetic transcription produced by FIPSvox. The first project is one of articulatory modelling (animation of sagittal sections). The second concerns the representation of the prosodic signal. Both take as their starting point the by-products of FIPSvox (grapheme-phoneme correspondence, computation of the duration and fundamental frequency of phonemes). Also among our future projects is the definition of content and exercises for the SAFRAN grammar module. This work will be accompanied by a reflection on the role of the parser as an NLP tool in (intelligent) CALL, and will in particular see the integration of the FIPS syntactic parser into the SAFRAN system.
My collaborators on this project are Eric Wehrli (LATL, Université de Genève), Zoltan Pinter (Université de Pecs) and Dimitar Vessélinov (Université de Sofia). The SAFRAN project is supported by a grant from FRANCIL, a member network of AUPELF.
Notes
1. The functions of NLP tools also include keyword search and information extraction, functions which fall within the field of dictionary production (dictionnairique).
2. Soft Collection, Micro Application, 20-22 Rue des Petits-Hôtels, 75010 Paris.
3. Dyned International, Language Development Courseware, [email protected].
4. Talk To Me, Auralogue, 12 Av. Jean Bart, 78960 Le Bretonneux, France.
5. This expert system, Mbrola, was developed at the Université de Mons by Thierry Dutoit’s team.
6. At present, FIPSvox uses a prosody module developed by LAIP (Université de Lausanne).
References
Ayache L., Godbert F. and Pasero R. (1997) ‘Deux systèmes d’aide à l’apprentissage du langage écrit’, JST 97: Actes, Avignon: AUPELF-UREF, 267–270.
Decoo W. and Vessélinov D. (1995) Dictionnaire didactico-thématique français-bulgare, Sofia: Daniéla Oubénova.
Dutoit T. (1997) An Introduction to Text-to-Speech Synthesis, London: Kluwer.
Frasson C. and Gauthier G. (1990) Intelligent Tutoring Systems, Norwood, New Jersey: Ablex Publishing Corporation.
Frederiksen C. H., Donin J. and Décary M. (1995) ‘A Discourse Processing Approach to Computer-Assisted Language Learning’. In: Holland V. M., Kaplan J. D. and Sams M. R. (eds.), Intelligent Language Tutors, Mahwah, New Jersey: Lawrence Erlbaum, 99–120.
Gaudinat A. and Wehrli E. (1997) ‘Analyse syntaxique et synthèse de la parole: le projet FIPSvox’, Rapport interne, Geneva: LATL.
Gougenheim G. (1958) Dictionnaire fondamental de la langue française, Paris: Didier.
Hamel M.-J. (1997) ‘NLP Tools in CALL for Error Analysis’, CAAL Journal, 19 (1). In press.
Hamel M.-J., Nkwenti-Azeh B. and Zahner C. (1996) ‘The Conceptual Dictionary in CALL’, EUROCALL 95: Actes, Valence: Presses de l’Université, 509–518.
Hiller S., Rooney E., Vaughan R., Eckert M., Laver J. and Jack M. (1994) ‘An Automated System for Computer-Aided Pronunciation Learning’, CALL, 7 (1), 51–63.
Laenzlinger C. and Wehrli E. (1991) ‘FIPS: un analyseur interactif pour le français’, TA Informations, 32 (2), 35–49.
Last R. W. (1989) Artificial Intelligence Techniques in Language Learning, Chichester: Ellis Horwood.
Levin L., Evans D. and Gates D. M. (1991) ‘The Alice System: A Workbench for Learning and Using Language’, CALICO, 9 (1), 27–56.
Loritz D. (1992) ‘Generalized Transition Network Parsing for Language Study: the GPARS System for English, Russian, Japanese and Chinese’, CALICO, 10 (1), 5–22.
Mel’cuk I. (1982) ‘Lexical Functions in Lexicographic Description’, 8th Annual Meeting of the Berkeley Linguistics Society: Actes, Berkeley, California: BLS Press, 427–444.
Payette J. and Hirst G. (1992) ‘An Intelligent Assistant for Stylistic Instruction’, Computers and the Humanities, 26, 87–102.
Rypa M. and Feuerman K. (1995) ‘CALLE: An Exploratory Environment for Foreign Language Learning’. In: Holland V. M., Kaplan J. D. and Sams M. R. (eds.), Intelligent Language Tutors, Mahwah, New Jersey: Lawrence Erlbaum, 55–76.
Sams M. R. (1995) ‘Advanced Technologies for Language Learning: The BRIDGE Project Within the ARI Language Tutor Program’. In: Holland V. M., Kaplan J. D. and Sams M. R. (eds.), Intelligent Language Tutors, Mahwah, New Jersey: Lawrence Erlbaum, 7–22.
Yazdani M. (1991) ‘The LINGER Project: An Artificial Intelligence Approach to Second-Language Tutoring’, CALL, 4 (1), 107–116.
Marie-Josée Hamel is a lecturer at UMIST, where she teaches French linguistics. Her research concerns the integration of natural language processing tools into computer-assisted language learning systems.
Marie-Josée Hamel
Email: [email protected]
ReCALL 10:1 (1998) 86–94
Two conceptions of learning and
their implications for CALL
at the tertiary level
Mike Levy
The University of Queensland
Though it may not be expressed explicitly, any CALL design reflects a particular conception of teaching
and learning. A broad division may be made between learning that focuses on the individual learner,
and learning that emphasises social interaction. The first orientation is represented by the work of
Piaget, whose conception of learning is individualistic, whereas Vygotsky is the prime example of a theoretician who has focused on social factors. The two perspectives imply widely differing classroom
practices, research agendas and techniques. This paper will detail the theoretical underpinnings of the
two approaches, and will explore their implications as they relate to research and practice in CALL,
with a particular focus on the tertiary level.
Introduction
With recent developments in computer networking and the use of computer-mediated
communication techniques, CALL approaches
that involve telecollaboration and cooperative
learning are becoming widely accepted (see
Warschauer 1995a, 1995b, 1996; Debski et al.
1997). In contrast, more traditional CALL
techniques, where the computer structures the
learning environment for the individual student, have been criticised (see Hartog 1989;
Shneiderman 1997). For example, Shneiderman (1997: vii) says, “We are rapidly moving away from ‘computer-based instruction’ and ‘intelligent tutoring systems’ in which the narrow choices for students sooner or later make them the victim of the machine”. He believes there is a move away from ‘agentive’ to ‘instrumental’ uses of the computer, or from the role of the computer as tutor to the role of the computer as tool (see also Levy 1997). Nevertheless, CALL researchers such as Goodfellow (1995: 223) counter that instrumental uses of the computer are “deficient for learning purposes”, and that the need for CALL “to adopt a principled approach to providing tutorial support ... is paramount”.
The debate has been explored in some
detail for children learning their first language
using computers in formal school settings (see
Scrimshaw 1993). However, though both orientations are clearly evident in the tertiary sector for adults learning a second or foreign language, the debate is only just beginning to
emerge in this context. Examples that present
a social perspective are given in edited works
by Warschauer (1996) on telecollaboration,
and Debski et al. (1997) on social computing.
At the same time an individual conception of
learning is apparent in the considerable interest in learner autonomy, learning strategies
and Intelligent CALL (see Gremmo and Riley
1995; Holland et al. 1995).
This paper aims to explore this debate in a
little more detail. In particular, it looks at two
theoretical positions, advocated by Piaget and
Vygotsky, that underpin individualistic and
social conceptions of learning, both within a
constructivist framework. It considers how
these views of learning are being interpreted
in CALL and discusses their implications for
research and practice.
Two conceptions of learning
Though it may not be expressed explicitly, any
CALL design presupposes a particular conception of teaching and learning. In contemporary CALL, a broad division may be made
between learning approaches that focus on the
psychological mechanisms of the individual
learner, and those that emphasise social factors.
These two positions are well-represented
by the work of Piaget and Vygotsky who provide a rich theoretical base for thinking about
learning in individual and social contexts (see
Piaget 1980; Vygotsky 1978; Phillips 1995).
Piaget has already exerted a strong influence
on theory and practice in education and software design, and the indications are that
Vygotsky is currently exerting a comparable
influence (see Jones and Mercer 1993; Renié
and Chanier 1995). Piaget and Vygotsky represent fundamental positions on teaching and
learning and are most helpful in distinguishing
key differences in perspective, particularly in
the ways in which the roles of the teacher and
the computer are perceived. Further they are
both regarded as constructivist, and as this orientation currently represents the dominant
approach in educational multimedia design,
their views are of special interest (Boyle 1997:
83). There is insufficient space to present their
views in detail here, but the key elements of
their positions will be sketched and significant
assumptions and implications considered.
Both Piaget and Vygotsky are concerned
with how the individual learner learns and
constructs knowledge using his or her own
cognitive apparatus. They are both seen as
constructivist because of their emphasis on the
ways in which the learner constructs his or her
own understanding and makes sense of the
surrounding environment. However, beyond
this area of agreement, Piaget and Vygotsky
differ considerably in how they see a learner’s
cognitive mechanisms working.
Piaget (1896–1980), generally regarded as
the founder of constructivism, typically sees the
learner as a lone, inventive scientist trying to
make sense of the world (Piaget 1980; Phillips
1995). He argues that knowledge does not simply result from the passive recording of observations, but that it comes from “a structuring
activity on the part of the subject” (Piaget 1980:
23). In this way, he stresses the fact that the
learner is both mentally and physically active in
adapting to the complexities of the world
(Jones and Mercer 1993: 19). His conception of
the learner is highly individualistic and pays little attention to social processes.
In contrast, Vygotsky (1896–1934) suggests that such a view of learning is inadequate, and that social transaction, not solo performance, is the fundamental vehicle of education (Bruner 1985). Vygotsky and his
followers emphasise the social factors that
influence learning. Vygotsky did not consider
that learning arose out of acting on and adapting to some impersonal world, as did Piaget,
but rather that it resulted from engagement
with others (Vygotsky 1978: 131).
Vygotsky also emphasised the role that language plays in cognitive development and in
mediating the learning process. Acquiring a
language enables the learner to think in new
ways by providing a cognitive ‘tool’ for making sense of the world through interaction.
The notion of language as a cognitive tool for
mediation is one of the most profound insights
of Vygotsky (1978), an idea derived from the
work of Engels (Haas 1996). Engels posited
that humans interact with the environment
using material tools which mediate the interactions that occur. Through the interaction both
the environment and humans are transformed.
Vygotsky extended this notion to include language – especially speech, but also writing and
other sign systems – as a psychological tool
that provides the “mediational means by which
higher psychological functions develop.”
(Haas 1996: 14). Though Vygotsky’s understanding of tool is intended to be purely
metaphorical, Haas extends this idea yet again
to include technological tools as well. She
argues that “Vygotsky’s theory of mediation
helps us see tools, signs, and technologies as ...
systems that function to augment human psychological processing.” (Haas 1996: 17).
Vygotsky died at an early age and his work
took a considerable time to reach the west: it
was only in 1962 that the seminal work,
Thought and Language, was translated (see
Vygotsky 1986). Since that time his followers
have developed and extended his work under
the headings of ‘neo-Vygotskian theory’, ‘cultural psychology’, ‘communicative learning
theory’ and ‘sociocultural theory’: the latter
term is now perhaps the most common and is
used here (see Jones and Mercer 1993: 21).
An important and immediate corollary of
these two conceptions of learning concerns the
role of the teacher. Within the Piagetian view,
the student is seen to be working alone. The
teacher’s role is to provide ‘rich learning environments’ within which learners may make
discoveries for themselves (Jones and Mercer
1993: 22). On the other hand, within the Vygotskian view, the teacher is an active, communicative participant in the learning process. The
teacher acts as a support to help the student
until the time comes when he or she is able to
operate independently. As Bruner (1985: 24–5)
puts it, the tutor functions as “a vicarious form
of consciousness”. These differences in the
role of the teacher are profound.
Before turning to the implications of these
two conceptions of learning for CALL, it is
worth noting that both Piaget and Vygotsky
were concerned with how children learned
their first language. On the whole they did not
make claims about how adults might learn a
second language. In this regard, Laurillard and
Marullo (1993) provide a detailed critique of
the extent to which a Vygotskian perspective
can be sustained for students learning a second
rather than a first language.
Piaget: implications for CALL
According to Piaget, people grow through
play and constructive activity by alternately
changing themselves and the world around
them. Where computers have potential in this
context is in extending our ability to transform
and manipulate the world through simulation.
For Holland (1995: xiv) simulation technologies offer ways to “buttress lived experience”.
In other words we can learn and practise on
our own in a simulated world before having to
deal with an unpredictable real world environment. A well-known implementation of this
concept is seen in the work of Seymour Papert
who was once a student of Piaget.
Papert developed a microworld called
Mathland which was designed in such a way
that certain kinds of mathematical thinking
could be facilitated (Papert 1980). Papert describes the microworld as “a ‘place’ ... where certain kinds of ... thinking could hatch and grow with particular ease”, “an incubator”, and “a ‘growing place’ for a specific species of powerful ideas or intellectual structures” (Papert 1980: 125).
The microworld concept has also been
investigated in the field of language learning,
in CALL and in Intelligent CALL (ICALL)
(see Higgins 1982; Holland et al. 1995).1 In a
recent interpretation of the microworld idea,
Hamburger (1995) describes a second language tutoring system called FLUENT (Foreign Language Understanding Engendered by
Naturalistic Techniques). One graphical presentation of a microworld in FLUENT is
called Kitchen World. Here the learner manipulates a human figure with a moveable hand
which is employed to manipulate objects in
the kitchen. Activities require learners to produce words, phrases, and sentences to achieve
simple goals, and the system responds appropriately at each stage. Further manifestations
of the microworld concept are realised through
virtual worlds created in cyberspace, where
users, and potentially language learners, can
engage in exploring and making sense of a
simulated environment.
Beyond the microworld concept in particular, most ICALL systems involve a single
human learner and a computer tutor (Tomlin
1995: 221). Typically, these systems feature a
student model to guide the sequencing and
manner of the material presented, and utilise a
parser to enable natural language to be
processed (Harrington 1996). They contrast
with traditional CALL programs which tend to
avoid dealing with student input and evaluation beyond the word level. The dynamic
nature of the control structures in ICALL systems, natural language processing and the student model enable student feedback to be
dynamic and flexible; with traditional CALL,
feedback tends to be pre-packaged and formulaic, or it is not given at all (Harrington 1996:
7). But in both ICALL and traditional CALL,
the focus has tended to be on the individual
learner working at the computer without a
teacher present.
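As a toy illustration of the architecture just described, the following sketch combines a trivial stand-in for a parser with a student model that adapts the feedback rather than delivering a pre-packaged message. Everything here is a hypothetical simplification, not any actual ICALL system.

```python
# Hypothetical sketch of an ICALL-style loop: a "parser" checks the
# learner's input and a student model records errors so that feedback
# can become progressively more explicit.

EXPECTED = {"How are you?": "I am fine"}

student_model = {"errors": 0}  # a real student model tracks far more

def parse_check(prompt, answer):
    """Stand-in for natural language processing: bare string comparison."""
    return answer.strip().lower() == EXPECTED[prompt].lower()

def feedback(prompt, answer):
    """Feedback adapted to the student model, not pre-packaged."""
    if parse_check(prompt, answer):
        return "Correct."
    student_model["errors"] += 1
    if student_model["errors"] == 1:
        return "Not quite - try again."
    return f"Not quite - a possible answer is: {EXPECTED[prompt]!r}"

print(feedback("How are you?", "I fine"))   # Not quite - try again.
print(feedback("How are you?", "me fine"))  # shows the model answer
```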
The teacher’s role within the Piagetian
conception of learning lies firmly in the background. Since the computer tutor is designed
to function as a substitute teacher, the human
teacher’s role becomes separated. Here the use
of the computer to ‘free’ the teacher from the
‘more tiresome labours’ of language teaching
arises (Skinner 1954: 96). Many CALL writers since Skinner have referred to the use of
the computer for freeing the teacher from the
more ‘mundane’ aspects of language teaching
(see Levy 1997). A division of labour is
implicit here, with the computer or technology
looking after certain aspects of language
learning, vocabulary extension for example,
while the human teacher caters for other
aspects, those that necessitate human interaction and involvement.
Vygotsky: implications for CALL
In CALL, an appeal to Vygotsky’s work has been made to support the following techniques and approaches in language teaching and learning: cooperative or collaborative learning (Light 1993; Warschauer 1995a); teachers working with students on purposeful activity (Jones and Mercer 1993; Kern 1996; Barson 1997); learning in social groups (Kern 1996; Debski 1997); and a communicative, culturally oriented conception of language learning (Jones and Mercer 1993). Of course, numerous other projects use a collaborative approach or cooperative learning techniques, or encourage learning in a social context without making explicit reference to Vygotsky (see Warschauer 1995b). As McDonell (1992: 56) observes, Vygotsky’s theory supports a collaborative approach and cooperative learning, because it “analyses how we are embedded with one another in a social world”, and because it is consistent with a view of teaching where the process of mediation is central.
A learning environment that embodies many of these ideas is the goal-oriented framework described by Barson and Debski (Barson and Debski 1996; Barson 1997; Debski 1997). Barson and Debski (1996) describe a Global Learning Environment (GLEn). The GLEn is fundamentally collaborative in nature and “models the system of access to resources and the necessary links between users of the system, thus providing a mental construct and a plan of action.” (Barson and Debski 1996: 62). Learning is defined as “managed action” and the motivation derives from project goals and activities negotiated between students, or students and the teacher.
As far as CALL is concerned, a Vygotskian or sociocultural view of learning has been given a boost by recent advances in networking technology. Now collaboration is not limited to the classroom and the same physical space, but may be extended to include collaboration at a distance. In essence, collaborative work involving computers may occur in at least three different ways, each of which involves social processes:
1. students may collaborate and interact by working together at a computer;
2. students may interact through the machine by networking, conferencing or electronic mail, for example; or
3. the computer may act as a partner in some way in an ICALL program.
Piaget and Vygotsky differ greatly in their
interpretation of the teacher’s role. For Piaget,
the learner essentially works alone, though the
teacher can help organise discovery environments, on and off the computer, that are accessible, and appropriate for the student’s level
and need. But for Vygotsky the teacher’s role
is central to the learning process. In this
regard, he introduced the well-known theory
of the zone of proximal development, which
posits that learners benefit most from tasks
that are just beyond their own individual capabilities. Learners are not able to complete such
tasks on their own, but with the help of a more
knowledgeable and experienced individual
they are able to accomplish them. Thus, a role
for the teacher is warranted, in helping learners over the gap between what they can do
alone and what they can manage with the help
of others.
In further elaborating the role of the
teacher in the CALL context, Debski (1997:
48) describes the teacher as a “facilitator, an
inseminator of ideas, and a force maintaining
the proper level of motivation of students”;
and, again, for Barson and Debski (1996: 50),
the role of the teacher is to “trigger and support student enterprise as it manifests itself,
often in unexpected ways (contingency principle).” For Kern (1996: 108), the teacher,
“rather than delegating to the computer certain
aspects of language instruction (e.g. drills and
practice), becomes an integral participant in
students’ computer-mediated communication
and learning.” In combination, these descriptions of the teacher’s role see the teacher as an
involved, adaptive individual guiding and
motivating student-directed work.
Piaget: critique of individualistic
learning
Traditional CALL and ICALL programs that
attempt to address the needs of individual
learners have received some criticism of late, particularly concerning the ways in which the computer might inhibit or constrain the learner through inappropriate or inflexible control mechanisms (see Hartog 1989; Debski 1997; Shneiderman 1997). Nevertheless, the computer has the strength of flexibility: it can provide language learning opportunities when a teacher is not available, and at the learner’s convenience. For students attending a regular
language class, the computer tutor can provide
valuable supplementary work, especially extra
language practice.
In ICALL, improved models of learning have the potential to provide more effective CALL programs and learning environments.
Increased sophistication of systems promises
richer, more efficient and hence more enjoyable language learning experiences. Whilst the
sophistication of CALL tutors is limited at the
moment, there is no reason to believe that their
functionality will not steadily improve in the
future.
Where the student is generally working
alone without the teacher, the computer has to
reliably give the student the right kind of guidance and advice every time the program is
used; there is no second wave of feedback that
can come with a teacher’s presence to act as
backup. The computer program must be completely reliable. If for some reason the student
is provided with incorrect or incomplete feedback in answer to a question, and the deficiency is not made known to the student, then
serious problems can result. The success,
therefore, of the computer in the tutorial role,
hinges on how reliably the program manages
the student’s learning and on how timely,
accurate and appropriate is the feedback, help
and advice given. This point is supported by
Kenning and Kenning (1990: 34) who argue
that “the shortcomings only loom large if the
computer-learner dialogue constitutes the sole,
or main, component of a learning experience,
as in the case of a tutorial package used on a
self-access basis.”
As far as simulations are concerned, the potential threats of isolation and merely vicarious experience need to be considered. Virtual
worlds, for example, might isolate or distance
the individual from the real world. Such experiences, whilst having the potential to simulate
real communicative situations, nevertheless remain illusory. Obviously, the goal of the majority of language learners is to become confident and proficient language users, able to interact with people face to face in the same physical space, using language to accomplish real-world tasks.
Vygotsky: critique of cooperative
learning
Of the implications that derive from a Vygotskian perspective, one of the most important is
that collaborative or cooperative learning is
advantageous. A useful critique of cooperative
learning has been given by Anderson, Reder
and Simon (1996). Their analysis of cooperative learning is presented in the context of a
broader investigation of the claims of Situated
Learning, a view of learning that is currently
exerting a strong influence on educational
thinking. They examine four major claims
(1996: 6):
1. Action is grounded in the concrete situation in which it occurs.
2. Knowledge does not transfer between
tasks.
3. Training by abstraction is of little use.
4. Instruction needs to be done in complex,
social environments.
The discussion here will focus on the last
claim because it involves cooperative learning
and is consistent with the view that learning is
inherently a social phenomenon, the position
taken by Vygotsky.
Anderson et al. argue that though one must
deal with social aspects, especially as far as
preparation for a job is concerned, this is not
in itself sufficient reason for demanding that
all skills need to be learnt in a social context.
To illustrate the point they give the analogy of
the violinist who plays in an orchestra. For the
violinist, there are times when independent
learning and practice are essential. Here the
individual is free to choose the focus, and can
concentrate on problems that are personally
relevant without distraction. Also, it would be
impractical for the whole orchestra to always
meet together as a group. That said, clearly
there are skills that can only be acquired by
actually playing in the orchestra. It is likely,
for example, that there are distinct contextual
factors that impinge on learning and only arise
in the group context. The orchestra analogy
also suggests other elements needed for successful cooperative learning. All members of the orchestra must share a commitment to the goals of the group; they need to be comparable in terms of their knowledge, skill and experience; and they must be willing to work together under the leadership of the conductor.
Arguably, factors such as individual levels of
commitment, equitable levels of ability and
experience, clear and agreed goals, and confidence in the leader’s ability are all essential
ingredients for success in cooperative learning
activities.
Anderson et al. (1996: 9) continue that relatively few controlled studies have successfully argued the case for cooperative as
opposed to individual learning, and that many
comparative studies report ‘no differences’.
Of course one needs to look very carefully at
the research design and goals of these studies
to see exactly what they were designed to
investigate and how they were carried out.
Even so, they do point to a number of potentially detrimental effects in cooperative learning such as ‘free rider’ and ‘ganging-up’
effects. Anderson et al. also suggest that a
very large number of practitioner-oriented
studies tend to overlook the difficulties
involved. They conclude:
The evidence shows, then, that skills in complex
tasks, including those with large social components, are usually best taught by a combination
of training procedures involving both whole
tasks and components and individual training
and training in social settings. (1996: 10).
Directions for research
The two conceptions of learning discussed in
this paper imply widely differing research
agendas and approaches. On the one hand, one
might focus on the cognitive aspects of the
individual, and experimental techniques may
be appropriate; on the other hand, a sociocultural perspective might be taken and ethnographic classroom- or network-based research
techniques may be required to identify and
assess key factors in the learning process.
Debski (1997: 62) argues for more ethnographic studies, in addition to the quantitatively oriented research in second and foreign
language acquisition that has dominated the
field so far. Tella (1992) agrees, and follows
an ethnographic approach in investigating
email exchanges between high-school students. His approach includes interviews,
observations, analysis of text messages and
meticulous tracking of all interactions over time, looking at the nature of the messages.
Crook (1991) contends that the majority of evaluative studies of computer-based activity are, like most of the practice they seek to evaluate, based uncritically on an individualistic model of learning.
In drawing their discussion on possible
research directions together, Anderson et al.
(1996: 20) conclude that the fundamental issue
is whether the most productive research path
is one that takes individual or social activity as
the principal unit of theoretical focus. This group argues that, whilst not denying the importance of the social, real progress can be made only by breaking things down and focusing on the individual.
Conclusion
In part, of course, the question of whether
learning should consist mainly of social or
individualistic activity is an ideological issue:
is education to be seen as primarily for the
development of the individual, or is it perceived as essentially a cooperative venture?
(Light 1993: 41). With technology there is perhaps the additional fear of the dehumanisation
of education, with thoughts of individual students working alone at the computer, and with
the teacher’s role largely marginalised. Here
the Vygotskian view appears to present us with
a solution, where the computer is utilised as a
non-directive tool rather than a tutor, where
the teacher is actively and intrinsically
involved in the learning process, and where
collaboration is encouraged whenever possible.
In order to help resolve some of these
issues, I believe there needs to be a greater
sensitivity to factors that emerge from the
learning context. Key elements derived from
the educational setting, and the nature and
goals of the learners are often overlooked, or
their influence understated. Circumstances and
approaches differ according to the conditions.
For instance, are we considering: children or
adults; first, second, or foreign language learning; the primary, secondary or tertiary sector;
compulsory or voluntary classes; vocational or
academic goals; and in-class or out-of-class
activity? Other significant factors might
include class size, contact hours, language
teacher availability, and the educational background of learners.
My own particular interest is in adult second/foreign language learners in a university
context. Here, due to low contact hours, access
to CALL opportunities outside scheduled class
times is potentially beneficial. Further, it
underpins the notion of learner autonomy. This
is not only important for adult learners, but for
all learners if, ultimately, they are going to be
able to operate confidently on their own outside the classroom without the teacher. There
are also aspects of language learning that may
usefully be extended or practised in self-access mode, for example vocabulary learning
and listening comprehension practice. There is
simply not enough time for many important
aspects of language learning such as these to
be covered entirely in the classroom with a
teacher present. Further, current limitations in
ICALL applications are not, I believe, sufficient justification for devaluing or abandoning
this work. There is a role for the computer as
tutor, but researchers and practitioners must be
cautious and conscious of limitations and
make learners aware of them.
Finally, as far as Piagetian and Vygotskian
positions on language learning are concerned,
Boyle (1997: 81) believes they may be seen as
either challenging or complementing one
another. The individualistic view is important
because of the invariant features that we share
in our biological makeup and in the physical
world we interact with. There are fundamental
patterns of cognitive development common to
all. On the other hand, Vygotsky’s perspective
helps account for social factors in learning,
and the role of the teacher in supporting classwork. In my opinion, both theoretical positions have the potential to inform research and
practice in educational computing and in
CALL.
Note
1. ICALL systems
are not necessarily
microworlds, or Piagetian in conception.
ICALL projects that utilise certain kinds of
coaching or configure the computer to be a
conversational partner, in a sense, follow
Vygotsky. Certain kinds of simulated social
interaction, and work that follows a collaborative apprenticeship model also might be
included. However, when the partner is a computer tutor rather than a human tutor there are
important and very significant qualitative differences that must be taken into account before
a true Vygotskian perspective could be
assumed. Certainly for Vygotsky, the intention
was that the partner in the interaction would be
human.
References
Anderson J., Reder L. M. and Simon H. A. (1996) ‘Situated learning and education’, Educational Researcher 25 (4), 5–11.
Barson J. (1997) ‘Space, time and form in the project-based foreign language classroom’. In Debski R., Gassin J. and Smith M. (eds.), Language learning through social computing, Occasional Papers Number 16, Melbourne: ALAA and the Horwood Language Centre, 1–38.
Barson J. and Debski R. (1996) ‘Calling back CALL: technology in the service of foreign language learning based on creativity, contingency, and goal-oriented activity’. In Warschauer M. (ed.), Telecollaboration in foreign language learning, Hawaii: Second Language Teaching & Curriculum Centre, 49–68.
Boyle T. (1997) Design for multimedia learning, London: Prentice Hall.
Bruner J. S. (1985) ‘Vygotsky: a historical and conceptual perspective’. In Wertsch J. V. (ed.), Culture, communication and cognition: Vygotskian perspectives, Cambridge: Cambridge University Press, 1–32.
Crook C. (1991) ‘Computers in the zone of proximal development: implications for evaluation’, Educational Computing 17 (1), 6–18.
Debski R. (1997) ‘Support of creativity and collaboration in the language classroom: a new role for technology’. In Debski R., Gassin J. and Smith M. (eds.), Language learning through social computing, Occasional Papers Number 16, Melbourne: ALAA and the Horwood Language Centre, 39–66.
Debski R., Gassin J. and Smith M. (eds.) (1997) Language learning through social computing, Occasional Papers Number 16, Melbourne: ALAA and the Horwood Language Centre.
Goodfellow R. (1995) ‘A review of the types of CALL programs for vocabulary instruction’, Computer Assisted Language Learning 8 (2–3), 205–226.
Gremmo M.-J. and Riley P. (1995) ‘Autonomy, self-direction and self-access in language teaching and learning’, System 23 (2), 151–164.
Haas C. (1996) Writing technology: studies on the materiality of literacy, Mahwah, NJ: Lawrence Erlbaum.
Hamburger H. (1995) ‘Tutorial tools for language learning by two-medium dialogue’. In Holland V. M., Kaplan J. D. and Sams M. R. (eds.), Intelligent language tutors: theory shaping technology, Mahwah, NJ: Lawrence Erlbaum Associates, 183–200.
Harrington M. (1996) ‘Intelligent computer-assisted language learning’, On-CALL 10 (3), 2–9.
Hartog R. (1989) ‘Computer-assisted learning – from process control paradigm to information resource paradigm’, Journal of Microcomputer Applications 12, 15–31.
Higgins J. (1982) ‘The Grammarland Principle’, Bulletin Pedagogique 80-1 (44–5), 49–53.
Holland V. M., Kaplan J. D. and Sams M. R. (eds.) (1995) Intelligent language tutors: theory shaping technology, Mahwah, NJ: Lawrence Erlbaum.
Jones A. and Mercer N. (1993) ‘Theories of learning and information technology’. In Scrimshaw P. (ed.), Language, classrooms and computers, London: Routledge, 11–26.
Kenning M.-M. and Kenning M. J. (1990) Computers and language learning: current theory and practice, New York: Horwood.
Kern R. (1996) ‘Computer-mediated communication: using email exchanges to explore personal histories in two cultures’. In Warschauer M. (ed.), Telecollaboration in foreign language learning, Hawaii: Second Language Teaching & Curriculum Centre, 105–119.
Laurillard D. and Marullo G. (1993) ‘Computer-based approaches to second language learning’. In Scrimshaw P. (ed.), Language, classrooms and computers, London: Routledge, 145–165.
Levy M. (1997) Computer-assisted language learning: context and contextualization, Oxford: Oxford University Press.
Light P. (1993) ‘Collaborative learning with computers’. In Scrimshaw P. (ed.), Language, classrooms and computers, London: Routledge, 40–56.
McDonell W. (1992) ‘Language and cognitive development through cooperative group work’. In Kessler C. (ed.), Cooperative language learning, London: Prentice Hall, 51–64.
Papert S. (1980) Mindstorms, London: Harvester Press.
Phillips D. C. (1995) ‘The good, the bad, and the ugly: the many faces of constructivism’, Educational Researcher 24 (7), 5–12.
Piaget J. (1980) ‘The psychogenesis of knowledge and its epistemological significance’. In Piattelli-Palmarini M. (ed.), Language and learning, Cambridge, MA: Harvard University Press.
Renié D. and Chanier T. (1995) ‘Collaboration and computer-assisted acquisition of a second language’, Computer Assisted Language Learning 8 (1), 3–29.
Scrimshaw P. (ed.) (1993) Language, classrooms and computers, London and New York: Routledge.
Shneiderman B. (1997) ‘Foreword’. In Debski R., Gassin J. and Smith M. (eds.), Language learning through social computing, Occasional Papers Number 16, Melbourne: ALAA and the Horwood Language Centre.
Skinner B. F. (1954) ‘The science of learning and the art of teaching’, Harvard Educational Review 24, 86–97.
Tella S. (1992) Talking shop via email: a thematic and linguistic analysis of electronic mail communication (Research Report No. 99), Helsinki: University of Helsinki, Department of Teacher Education.
Tomlin R. S. (1995) ‘Modelling individual tutorial interactions: theoretical and empirical bases of ICALL’. In Holland V. M., Kaplan J. D. and Sams M. R. (eds.), Intelligent language tutors: theory shaping technology, Mahwah, NJ: Lawrence Erlbaum Associates, 221–242.
Vygotsky L. S. (1978) Mind in society: the development of higher psychological processes, Cambridge, MA: Harvard University Press.
Vygotsky L. S. (1986) Thought and language, Cambridge, MA: The MIT Press.
Warschauer M. (ed.) (1995a) Computer-mediated collaborative learning: theory and practice, NFLRC Research Notes #17, Hawaii: Second Language Teaching & Curriculum Centre.
Warschauer M. (ed.) (1995b) Virtual connections: online activities and projects for networking language learners, Hawaii: Second Language Teaching & Curriculum Centre.
Warschauer M. (ed.) (1996) Telecollaboration in foreign language learning, Hawaii: Second Language Teaching & Curriculum Centre.
Dr Michael Levy has been writing and researching
in CALL for the last 12 years. He is editor of On-CALL, the Australian Journal of CALL. His most
recent book is ‘Computer-Assisted Language
Learning: context and contextualisation’ published
by Oxford University Press, 1997.
Email: [email protected]
ReCALL 10:1 (1998) 95–101
Designing, implementing and
evaluating a project in tandem
language learning via e-mail
David Little and Ema Ushioda
Centre for Language and Communication Studies, Trinity College Dublin
Tandem language learning is based on a partnership between two people, each of whom is learning
the other’s language. Successful tandem partnerships observe the principle of reciprocity (“tandem
learners support one another equally”) and the principle of learner autonomy (“tandem partners are
responsible for their own learning”) (Little and Brammerts 1996: 10ff.). This paper begins by exploring
some of the theoretical implications of tandem language learning in general and tandem language
learning via e-mail in particular. It then reports on the pilot phase of an e-mail tandem project involving
Irish university students learning German and German university students learning English.
1. Introduction
For a number of years e-mail has been used to
support second language learning both formally and informally. At the formal end of the
spectrum there have been projects of various
kinds linking language classrooms in different
countries (see, e.g., Eck et al. 1995); and at the
informal end, language learners with individual e-mail accounts have sought pen-friendships with native speakers of their target language.
More recently there has been a surge of
interest in the use of e-mail for tandem language learning, thanks largely to the work of
the International E-Mail Tandem Network,
co-ordinated by Helmut Brammerts at the
Ruhr-Universität Bochum (see Little and
Brammerts 1996). Inevitably, the Network’s
first concern has been to establish reliable
infrastructures so that tandem language learning by e-mail can actually take place. But
members of the Network recognize that long-term progress depends on elaborating appropriate theories, using the theories to shape
pedagogical experiments, and subjecting those
experiments to empirical evaluation. This
paper is a preliminary contribution to that
process. It first explores some of the central
issues of principle that arise from the concept
of tandem language learning in general and its
e-mail version in particular, and then reports
on the pilot phase of an e-mail tandem project
involving Irish university students learning
German and German university students
learning English.
2. Tandem language learning
The practice of tandem language learning is
well-established, though perhaps not as well-known as it might be (see, e.g., Calvert 1992,
Brammerts 1993). Essentially, it entails a partnership between two people, each of whom is
learning the other’s language. Effective language learning in tandem depends on the principles of reciprocity and learner autonomy
(Little and Brammerts 1996: 10ff.). The principle of reciprocity insists that tandem learners
must support one another equally. In practice
this means (i) that they must devote the same
amount of time to each language, and (ii) that
each must support the other’s learning explicitly and without reservation. The principle of
learner autonomy insists that tandem learners are responsible not only for their own but also for their partner’s learning.
2.1 Face-to-face tandem learning
In its canonical form tandem language learning happens face-to-face. Partners work
together at the same time and in the same
place; and although their learning activities
may involve reading and writing, the basis of
their partnership is oral communication. Face-to-face tandems may provide the organizational basis for a formal course of language learning; or they may support a formal course as an optional extra; or they may be the partners’ only mode of language learning.
Perhaps the most obvious attraction of
face-to-face tandems is that they offer regular
opportunities for communication in the target
language – it is, after all, beyond dispute that
frequent involvement in purposeful communication plays a crucial role in the development
of oral proficiency. If the native speaker is to
provide maximum benefit to the learner in
each tandem exchange, it is essential that he or
she concentrates on supporting the learner’s
efforts to communicate and does not try to be
a teacher. This does not rule out the provision
of corrective feedback; on the contrary, in naturalistic as well as in formal contexts, feedback is one of the most important stimuli to
learning (and from time to time it may appropriately take the form of grammatical explanation). But if on both sides of the partnership
the native speaker maintains an appropriately
supportive role, tandem encounters should not
degenerate into two (probably rather ineffective) language lessons.
A less obvious benefit of tandem language
learning is its capacity to help learners
develop new perspectives on their own and
their target language, precisely because they
communicate with their partner bilingually.
These new perspectives can facilitate language
learning but also language use, for instance by
making the learner aware of lexical similarities or syntactic contrasts between his mother
tongue and the target language. They may
develop and be exploited both explicitly, as
part of the conscious tool-kit that we apply to
language learning and language use, and
implicitly, as part of the network of largely
unconscious intuitions we follow, especially in
spontaneous communication.
Probably the most widespread difficulty
that tandem language learners have to overcome is an undeveloped capacity for
autonomous learning behaviour. This is hardly
surprising, since the aspiration of our curricula
to produce independent, self-managing learners has by and large failed to reform the pedagogies that are institutionalized in our educational systems (for a brief discussion of
learner autonomy in relation to tandem language learning, see Little and Brammerts
1996; Little (1991) provides a fuller treatment
of theoretical and practical issues in the development of learner autonomy). The obvious
solution to this difficulty is for tandem partners to be provided with plenty of advice and
support, especially in the early stages of their
partnership – advice on how to prepare for and
manage meetings, how to select appropriate
learning activities, how to behave as (i) learner
and (ii) native speaker, and in the second role,
how to provide feedback; support in recognizing and overcoming the linguistic and affective problems that any new approach to language learning is likely to generate. The
development of appropriate support structures
and counselling techniques has become a central concern in tandem language learning
schemes (see, e.g., Lewis et al. 1996).
2.2 Some basic issues in tandem
language learning via e-mail
Face-to-face tandem partnerships are not
always easy to arrange, not least because in
many formal learning environments native
speakers of one of the two languages in question are in short supply. This, together with the
general desire to find new ways of using information systems to support language learning,
gave rise to the idea of tandem language learning via e-mail. Inevitably, the change of
medium has consequences for the organization and structure of communication between
tandem partners, and thus for the language
learning process.
The first consequence arises from the fact
that e-mail is a channel of written rather than
oral communication. It is true that in many
quarters e-mail exchanges are conducted in a
very informal style, so that their syntax and
vocabulary have more in common with speech
than with formal written registers. And it is
also true that when the partners in an e-mail
correspondence are on-line at the same time,
they can exchange a succession of brief messages that look very like the turns in a conversation. But these facts must not be allowed to
obscure the fundamental difference between
oral interaction and all forms of written communication: whereas in oral interaction meaning is negotiated between the participants, in
written communication it must be produced by
the writer working alone. As regards tandem
language learning, this means that the native
speaker can only ever provide the learner with
linguistic support after the event; in the actual
formulation of each message the learner is
inevitably on his or her own.
The transposition of tandem learning from
face-to-face to e-mail mode also has consequences for the bilingual structure of the partnership. Whereas in face-to-face tandems each
meeting should be divided fifty-fifty between
the two languages, in e-mail tandems each
message should be so divided. This may seem
to imply that messages should be written in
two halves – first mother tongue and then target language, or vice versa. In the early stages
this is probably the easiest way to proceed, but
in due course partners may well develop techniques of code-switching, mixing the two languages within paragraphs and sentences.
The capacity of tandem learning to promote the development of new perspectives on the learner’s mother tongue and target language is especially pronounced in the e-mail version. This is because written communication provides us with texts that we can analyse
and reflect on, whereas the linguistic substance of oral communication remains only
fleetingly in our short-term memory (unless of
course we arrange to record it). In principle,
tandem learners communicating via e-mail
should find it all but impossible to avoid
drawing explicit comparisons and contrasts
between mother tongue and target language.
The requirement to provide feedback to
one’s partner should play a decisive role in
this process; and here again, the e-mail
medium has particular consequences. When
native and non-native speakers communicate
face-to-face, feedback takes two distinct
forms. It can be implicit and indirect, arising
spontaneously in the course of communication, for example via the operation of conversational repair. Alternatively, it can be explicit
and direct, as when the native speaker interrupts the flow of communication to explain
that one doesn’t use that word in this particular way. In e-mail tandems feedback can likewise take two forms: reformulation of defective structures, and explanation of mistakes by
reference to lexical definitions, grammatical
rules, norms of usage, and so on. But although
the first of these forms of feedback may be
based on the native speaker’s intuitions rather
than explicit linguistic analysis, it has the
same intentional character as the second form.
It is not an involuntary product of the communication process, and in the written medium
the native speaker always knows when he or
she is providing feedback.
How exactly the development of learner
autonomy is to be promoted within an e-mail
tandem scheme depends on the institutional
and perhaps curricular structures in which the
particular scheme is embedded. Learners may
be offered face-to-face advice, for example, or
they may seek the help of an adviser via e-mail. But whatever practical arrangements are
made, the e-mail medium has two important
implications, one for the way in which counselling is organized, and the other for the focus
of advice. Precisely because face-to-face tandem partners learn together in the same place
at the same time, it is possible to advise them
together, in a three-way encounter. The obvious advantage of this arrangement is that it
enables learners to define problems and
explore possible solutions collaboratively, so
that the process of seeking and accepting
advice is an integral part of the tandem
process. In the case of e-mail tandems, face-to-face advice can only be available to each
partner separately, and the best the e-mail
medium itself can offer is an asynchronous
discussion forum. However, it is possible for
both tandem partners to share in the counselling process in the virtual communication
space provided by a MOO (Multiple user
domain, Object Oriented) such as the Tandem
Language Centre at Diversity University (see
Schwienhorst 1998:123). As regards the focus
of advice, it seems likely that all tandem learners will need help with organizational and
affective problems, but that e-mail tandem
learners will need particular support in developing control of the intentional procedures
that are so much more central to written than
to oral communication.
3. The project: an interim report
The e-mail tandem project to which we now
turn has largely been shaped by consideration
of these issues of general principle. In its
design and implementation, the project seeks
to address a number of important practical
questions to which these issues of principle
give rise: in particular, what kinds of organizational and pedagogical structures are needed
for tandem language learning via e-mail to
work successfully as part of a larger course of
study; and what measures can be taken to monitor and evaluate its effectiveness. To this end,
the first year of the project (1996–7) has been
devoted to developing robust organizational
structures and establishing appropriate monitoring and evaluation procedures. These will be
used as the basis for conducting a full-scale
empirical investigation in 1997–8. In this part
of the paper, we present our findings to date
from the pilot phase, and conclude by looking
forward to the second phase of the project.
3.1 Organizational structures
In the pilot phase, Irish university students
learning German at Trinity College Dublin
were twinned with German students learning
English at the Ruhr-Universität Bochum. The
courses of study in Dublin and Bochum share
a number of features, including an
emphasis on the development and use of communication skills, a focus on similar topic
areas, and a cycle of project work. In both
institutions, moreover, the courses are taken as
optional extras by non-specialist students (i.e.,
students who are not studying a foreign language for their degree). The joint scheme thus
offers an appropriately controlled context
within which to conduct an empirical evaluation of tandem language learning via e-mail.
Equally, this joint scheme is underpinned by
our own firm conviction that an institutional
partnership of this kind is a prerequisite for the
successful implementation of this mode of
learning in a course of study. Evidence from
the pilot phase of the project strongly supports
this conviction, pointing to the need for very
close institutional co-operation through the
planning and implementation stages.
At the most basic organizational level,
institutional collaboration is required to
address the practical problem of securing tandem partners for all students. Partners are normally assigned through the central dating
agency of the International E-mail Tandem
Network. In practice, however, the demand for
partners in certain target languages outstrips
supply. Thus there is no guarantee that all students in a course of study will be assigned a
tandem partner. In the pilot year of the project,
there were between 25 and 30 active partnerships. This figure accounts for less than 20%
of the total number of Irish students initially
enrolled in our German courses in 1996–7.
Although other factors may have played a part
(e.g. technical problems using e-mail, lack of
interest, etc.), there is little doubt that failure
ReCALL
Tandem language learning via e-mail: D Little and E Ushioda
to secure a partner was by far the largest factor
determining the relatively low number of
active partnerships.
As the year progressed, steps were taken to
twin students through direct contact between
our two institutions, a system that will be
exclusively adopted in the second year of the
project. In addition, it is intended that students
will be ‘double-dated’, so that each is assigned
two tandem partners, in an effort to overcome
the difficulties caused by loss of partnerships
through student withdrawal (a problem faced
in both institutions because the language programmes are extra-curricular). A further measure will be the setting up of a bilingual e-mail
discussion forum between the two institutions,
in order to provide a back-up channel of communication for students whose partners drop
out or fail to write.
3.2 Pedagogical organization
Beyond such basic practical arrangements,
close institutional co-operation is also required
at the level of pedagogical organization, in
terms of the role that tandem learning plays in
each course of study and the kinds of support
that are provided. During the pilot phase, the
relatively low number of students with working
partnerships made it difficult to assign more
than a superficial role to tandem learning in the
design of the course. Students were simply
encouraged to exploit their tandem partnerships
in order to learn and communicate in German
and get help with project work. Empirical evidence suggests that some students did indeed
engage in productive and effective tandem
learning. However, for the scheme to be successful as a whole, we realize that tandem
learning must be assigned a more central role in
the design of the courses in both institutions,
and not simply left to students themselves to
pursue in a haphazard and unfocused way.
Our own course design for 1997–8 will
thus include correspondence with tandem
partners as an integral part of the project work
that students will be engaged in. This correspondence will moreover form part of the
work submitted by students in fulfilment of
their course requirements. The fact that students in both institutions will be working on
Vol 10 No 1 May 1998
similar kinds of projects at the same time
should provide plenty of scope for mutual
learning support and exchange of information
and ideas. It should be noted that the joint
planning of the two courses has also had to
take into account the differences in the term
and semester structures between the two institutions, so that active use is made of those
periods in the year when regular communication can be guaranteed.
The integration of tandem learning into the
course design in this way should ensure a
much tighter organizational structure that will
facilitate the effective exploitation of this
medium of learning. In addition, however, we
recognize the importance of providing adequate induction and ongoing support to students. As indicated in the earlier part of this
paper, effective language learning in tandem
depends on the principles of reciprocity and
learner autonomy. While partners may be conscious of these principles and vaguely attuned
to each other’s language learning needs, it is
unlikely that they will put the principles into
practice without appropriate support and
guidelines. Feedback from students in the
pilot phase suggests, for example, that partners in the early stages of correspondence may
find it difficult to decide what to write about
or how to correct each other’s errors. Detailed
guidelines drawn up by the two institutions are
thus needed to help tandem partners agree on
procedures for formulating exchanges, correcting errors, working on tasks and providing
mutual learning support.
3.3 Empirical evaluation procedures
Through tighter organizational structures of
these kinds, it is envisaged that students will
be able to exploit their tandem partnerships in
a more effective and principled manner. The
degree to which they succeed in doing so will
of course be subject to empirical evaluation.
In this respect, success is to be measured in
terms of perceived benefits for students in
their learning experience, as well as in terms
of learning outcomes and linguistic outcomes.
The pilot phase of the project was devoted to
developing appropriate procedures for evaluating these various outcomes.
A simple open-ended questionnaire was
devised to elicit students’ perceptions of their
tandem learning experience, and administered
via e-mail twice during the pilot year. For this
exploratory phase of the project, it was considered important to obtain as broad a picture as
possible of how tandem learning via e-mail is
perceived by students. The questionnaire was
thus administered to all students enrolled in
our foreign language courses for non-specialists and not just those with tandem partners in
Bochum. Evaluations from students who
responded were on the whole positive and
encouraging, suggesting that those with working partnerships found tandem learning via e-mail both appealing and different from other
forms of language learning they had experienced.
Its principal attractions seemed to be that it
was more relaxed and informal than classroom
learning; it involved purposeful communication with a native speaker; it offered regular
exposure to the target language, to useful
vocabulary including colloquial idioms, and to
first-hand information about the target language country and culture; and it gave students the freedom to decide how, what and
when they wanted to learn. Perceived difficulties were either of a practical kind (e.g. a partner’s failure to respond, or problems finding a
computer free), or related to problems in
establishing mutually agreed procedures
between partners (e.g. a partner’s tendency to
write in English all the time, or difficulty
knowing what to write). These latter findings
underline the importance of monitoring tandem learning activity and providing appropriate support and guidelines for both partners.
Monitoring tandem learning requires not
only an analysis of how students evaluate their
experience, but also an analysis of their correspondence with their tandem partners in order
to evaluate the kinds of learning process and
linguistic outcome that are reflected in these
exchanges. Our experience in the pilot year
makes it clear that rigorous procedures must
be implemented in order to assure both quantity and quality of data. In the pilot phase, the
data collected were limited to samples of
exchanges volunteered by a few students, too
small a corpus to yield more than some very
general indications about content, sentence
length and the proportions of English and German used. In the second year, the integration
of tandem learning via e-mail into the design
of the course will enable us to gather data in a
more efficient manner, since students will be
required to submit a series of tandem
exchanges as part of their course work. Moreover, the project work that students will be
involved in will provide tandem partners with
a concrete focus and purpose for these written
exchanges.
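The corpus indications mentioned above (content, sentence length, proportions of English and German) are not computed by any procedure described in the paper. Purely as an illustration of the kind of analysis involved, here is a minimal sketch under invented assumptions: the stopword hint lists and the `exchange_stats` helper are hypothetical, not part of the project's actual methodology.

```python
# Illustrative sketch only: estimates average sentence length and the
# rough German/English proportions of a tandem e-mail message, using a
# crude (invented) stopword heuristic to tag each word's language.
import re

GERMAN_HINTS = {"und", "ich", "nicht", "das", "ist", "du", "wir", "mit"}
ENGLISH_HINTS = {"and", "i", "not", "the", "is", "you", "we", "with"}

def exchange_stats(text):
    # Split into sentences on terminal punctuation, words on letter runs.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-zäöüÄÖÜß]+", text.lower())
    de = sum(w in GERMAN_HINTS for w in words)
    en = sum(w in ENGLISH_HINTS for w in words)
    tagged = de + en
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "german_share": de / tagged if tagged else 0.0,
        "english_share": en / tagged if tagged else 0.0,
    }

msg = "Ich bin mit du und wir. We are not with you. Das ist gut!"
stats = exchange_stats(msg)
```

A real analysis would of course need a proper language-identification method and a far larger corpus than the pilot phase yielded; the sketch only shows why systematic data collection matters for these measures.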
4. Conclusion: the next phase
Our objective in 1997–8 is to conduct a full-scale empirical evaluation of tandem language
learning via e-mail. This will entail an evaluation of the organizational structures developed
through the pilot phase. It will also entail a
detailed evaluation of students’ use of tandem
learning, focusing on both the affective dimension of their learning experience and the kinds
of process and product that are reflected in
their written exchanges. For a sub-sample of
the students, the evaluation will extend to their
experience of real-time text-based communication with their tandem partners in the virtual
environment of the object-oriented multiple
user domain (MOO) at Diversity University.
As a result of this research we hope to find
ways not only of refining our pedagogical procedures but of further clarifying and extending
our understanding of the theoretical issues
briefly addressed in the first part of this paper.
References
Brammerts H. (1993) ‘Sprachenlernen im Tandem’. In Fachverband Moderne Fremdsprachen (eds.), Fremdsprachen für die Zukunft – Nachbarsprachen und Mehrsprachigkeit. Beiträge zum Bundeskongreß in Freiburg (1992) des Fachverbandes Moderne Fremdsprachen, Saarbrücken: Universität des Saarlandes (SALUS – Saarbrücker Schriften zur Angewandten Linguistik und Sprachlehrforschung 12), 121–32.
Calvert M. (1992) ‘Working in tandem: peddling an old idea’, Language Learning Journal 6, 17–19.
Eck A., Legenhausen L. and Wolff D. (1995) Telekommunikation und Fremdsprachenunterricht: Informationen, Projekte, Ergebnisse, Bochum: AKS-Verlag (Fremdsprachen in Lehre und Forschung 18).
Lewis T., Woodin J. and St John E. (1996) ‘Tandem learning: independence through partnership’. In Broady E. and Kenning M.-M. (eds.), Promoting Learner Autonomy in University Language Teaching, London: Association for French Language Studies in association with CILT, 105–20.
Little D. (1991) Learner Autonomy 1: Definitions, Issues and Problems. Dublin: Authentik.
Little D. and Brammerts H. (1996) ‘A guide to language learning in tandem via the Internet’, CLCS Occasional Paper No. 46, Dublin: Trinity College, Centre for Language and Communication Studies.
Schwienhorst K. (1998) ‘The “third place” – virtual reality applications for second language learning’. In Blin F. and Thompson J. (eds.), Where Research and Practice Meet, Proceedings of EUROCALL ’97, Dublin, 11–13 September 1997, ReCALL 10(1), 118–126.
David Little is Director of the Centre for Language and Communication Studies (CLCS), Trinity College, Dublin; his principal research interests are learner autonomy and the use of new technologies in language learning. Ema Ushioda is a research fellow in CLCS whose principal research interest is the study of motivation in language learning.
ReCALL 10:1 (1998) 102–108
Why integrate? Reactions to
Télé-Textes Author 2, a CALL
multimedia package
Liam Murray
University of Warwick
As one branch of CALL research moves further into the analysis of software integration into second language courses, this paper deals with many of the issues involved in the successful integration of a piece
of multimedia software into a language curriculum designed for non-specialist advanced undergraduate students of French. After discussing the background and context of the research, the paper describes
the software used and the issues surrounding its integration, as well as the future use and development of the software as courseware and research material.
1. Background and context
The practical purpose of this research was to
attempt to introduce and make full use of a
multimedia CALL package in an advanced
level French non-specialist course. This was to
be carried out over a two year period with two
groups of students. It involved training students in the use and evaluation of the package.
Students were then set the task of creating and
designing their own materials for use within
the package. The first year of the exercise
served as a practice run, with integration theories being assessed and matched against the
practical experiences, particular resources and
teaching environment of the course designers.
It proved to be a valuable learning period for
this author/tutor. At the end of this year a full
evaluation of the integration was carried out
with the students and subsequent necessary
changes were made in preparation for the new
cohort of students arriving in the second year.
These changes were few in number and
mostly concerned student time management,
access to the software and an upgrade of the
software. An important lesson learned at the
end of the first year dealt with the students’
attitude to working within what was for most
of them a new learning environment. This lesson is discussed in greater depth later in the
paper.
It remained true that at the beginning of
each year, many students displayed an unfamiliarity with even the most basic elements of
CALL software, despite the apparent proliferation of medium- and lower-level CALL materials such as, for example, Fun with Texts. It
was therefore no surprise that none of the students had seen or used an advanced level
CALL package. In their defence, it must be
stated that there are currently very few multimedia CALL materials available that are
aimed at advanced learners. The commercial
reasons for this situation are obvious but in the
end, the author was quite fortunate in finding a
small group of software developers and
designers who had already produced a suitable
package, namely Télé-Textes, which was itself
based upon an older multimedia German
CALL package called TV und Texte. The
designers were very willing to co-operate and
discuss ideas informally on the use and development of an upgraded version of the software. The newer version of Télé-Textes,
namely Télé-Textes Author 2 (TT2), was not a
joint commercial venture but it did represent
for this author a valuable opportunity to discuss introduction and integration issues with a
software producer both during and after the
upgrading of the original teaching package.
1.1 The course
Integration meant a redesign of the original
course. As it now stands, the course is made
up of two hours per week of class time over a twenty-five-week period. It is aimed at a post
A-level or equivalent standard with the stated
aims of deepening students’ understanding of
French civilisation and extending their writing
and speaking skills. The students’ multimedia
project is intended to focus their attention on
the whole learning process through their production of materials for TT2 which would, at
the appropriate time, be included in TT2 and
used with future groups of students. Such an
inclusion will be carried out using the new
‘Tendril’ software, the most recent addition to
TT2, when this software has been fully tested
and becomes available to us. This will allow
the tutor to take the separate elements of the
project materials, include them within TT2 as
legitimate content and finally write and cut a
separate student TT2 CD-ROM. Given the
limited class contact time, TT2 is introduced during the first term and used in class.
TT2 is also made available for general student
use in the Open Access area of the Language
Centre throughout the whole of the year. The
presentations and inductions were conducted
during twenty-minute periods with groups of
three students facing the PCs. The various
aspects and subject items of TT2 are discussed, their future project is described to
them and guidelines and deadlines are given.
In the second term, the students are divided
into workgroups and the project work begins
in earnest. The search for their own video
news clips and materials begins and there is a
noted upsurge in the use of TT2 outside of the
class. The compilations of the materials are
finished and near the end of this term students
make formal presentations to their peers
where criticisms are sought and given. In the
final term, the projects are submitted and
marked.
It must be remembered that the multimedia
component is but one part of several other
resources that are used on this course. Language and culture textbooks, debates, presentations, grammar discussions, video news clip
realia, as well as other CALL materials such as
GramEx and GramDef are also used. In addition, students with their limited previous
exposure to CALL had to be introduced in a
planned and careful way to the software. This
occurred right from the beginning of term 1,
where students were interviewed prior to
course enrolment about their opinions of, and any concerns they had about, using the multimedia project as a learning
resource. As noted elsewhere, students’ learning objectives had to be redefined: “... if the
new medium is to achieve its pedagogical
potential and offer a new kind of learning
experience to students” (Laurillard 1995:
179). It must be reported that student opposition was, for the most part, minimal. The vast
majority of the learners had already used computers in some respect and the major questions
and worries surrounded tutoring in and access
to our chosen teaching software. During the
remainder of the first term students were,
according to their submitted ‘diaries’ or log
books, using the software on a weekly basis
inside and outside of classroom time. This
resulted in longer peer discussions of the software contents and in questions to the tutor on
many of the subject issues relating to contemporary France as they were studied using traditional materials and TT2.
1.2 The students
Student motivation on this course was rarely a
problem. A brief profile reveals that these
learners are non-specialists in language learning. This French course is an academic option
on their main degree programme and they hail
from a broad mixture of backgrounds. They
are very much aware of the need for other life
skills in enhancing their future CV documents
and the prospect of being content providers
and designers on a multimedia project
appealed to each of them. In approaching an
advanced French language course, these students also had certain needs and expectations.
They expected no analysis of French literature; rather, they expected to practise and polish their language skills to a high degree and to deepen their understanding of contemporary France and its institutions.
1.3 Needs and aims
In tailoring the course to these specific needs, the general course content needed to be linguistically challenging for the students’ level of French, with ample practice of all four intralingual skills. TT2 also had to service the same
needs. This was achieved thus: the listening
skill was practised and tested through the
video work; the reading skill through work on
the video transcription and the language exercises; the writing skill through work on their
own transcription, précis, stratégie, exercises
and diary; and the speaking skill through the
presentation and defence of their project. The
course itself was conducted, for the most part,
in the target language. A noted major adjustment for several students was in learning to
participate and contribute to a working project
group. With a maximum of three students per
project group, the ‘division of labour’ choices
were left entirely to the individual students.
No problems were reported in this respect.
The learners had to be set a goal and a
challenge. In our case, it was the long evaluation and mimicking of the original designers’ learning model and the generation and assembly of their own material and exercises with
other students on a final project. This is what
Plowman (1988: 29) calls ‘cognitive
enhancers’, in which the user is given the tools
to ‘repurpose’ existing materials (in his example it was with multimedia essay writing).
With TT2 this idea was taken further and students were allowed to have their own choice
of video news footage in the full knowledge
that they had to devise competent language
exercises and tests for use with the video clip.
In this way the project activities force the students to think carefully about what they are
compiling and what they are offering to future
users.
2. The software
TT2 combines the traditional language learning elements of video, audio, textbook and
cahier d’exercices onto a self-access, easy-to-use CD-ROM. It uses eighteen short video
news clips from the French television station
TF1 and offers a fair range of contemporary
and cultural subjects for study, as can be seen
from the screen dump (Figure 1). These clips
are to be found within seven themed Dossiers.
Each Dossier has at least two video clips and a
number of exercises and tasks specifically
written for use with the chosen subject.
The TT2 workspace also provides a video
scrollbar to control the video clip, along with
optional transcripts, a user notepad and a Studio facility for voice recording and multimedia role-playing, with the learner in the guise of a television journalist.

Figure 1 TT2 screen dump

TT2 allows the tutor
or teacher to build their own specific sets of
exercises and tasks and is designed for use
with the original Télé-Textes video clips. The
Dossiers are listed in a file card display, as
indeed are the news clip sections. As well as
the spoken short introductions, every news
section offers a concise written introduction
and a stratégie on how to approach and use
the particular news items. The language skills
are tested by a set of tasks classified under
twelve headings, but are usually limited to a
choice from six headings per news clip. The
six offered headings are pertinent to the particular news item and form part of the already
mentioned stratégie, as the second screen
dump shows (Figure 2). Answers to the many questions and exercises presented to the student may be entered either by typing them in the conventional fashion (a character bar for accented characters is always available on the screen), or by clicking on the appropriate answer box when the cursor is placed in the correct blanked area of the question.
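The paper does not document TT2’s internal exercise format. Purely to illustrate the kind of blank-filling task just described, here is a minimal hypothetical sketch: the `check_cloze` helper, its `{}`-marked template layout, and the sample French sentence are all invented for illustration, not TT2’s actual data or API.

```python
# Hypothetical sketch of a "remplir les blancs" (fill-the-blanks) task of
# the kind TT2 presents and students later compiled for their projects.

def check_cloze(template, answers, responses):
    """Compare a learner's responses against the expected answers.

    template  -- exercise text with {} marking each blank
    answers   -- expected words, in blank order
    responses -- learner's words, in blank order
    """
    # Case-insensitive comparison, blank by blank.
    results = [a.strip().lower() == r.strip().lower()
               for a, r in zip(answers, responses)]
    # Reconstruct the sentence as the learner completed it.
    filled = template.format(*responses)
    return filled, results

template = "La grève des bus a {} toute la ville pendant {} jours."
answers = ["paralysé", "trois"]
filled, results = check_cloze(template, answers, ["paralysé", "deux"])
```

A real package would also need to handle accented-character input and partial credit, but the per-blank matching shown here is the core of such an exercise.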
2.1 Student reactions to and
evaluations of the software
Student reactions to the software were
recorded on questionnaires at bi-monthly
intervals throughout the whole year. Their
comments range from initial enthusiasm for
the software to later more critical and searching comments. Points were made in the first
year of use concerning the need for student
exposure to other types of CALL software and
the development of critical skills in this area
of evaluation. GramEx and GramDef were
thus incorporated more extensively within the
students’ support resources during the second
year of use. Despite general acceptance of
TT2 as legitimate courseware, some students
still had to be eased into the process of learning and producing content for a multimedia
environment. The course designers recognised
the need to overcome all ‘prejudices’ and convince students of MM learning-values at the
beginning of and during the course. This lesVol 10 No 1 May 1998
Figure 2 TT2 screen dump
son, learnt during the first year trial, is
employed in each and every subsequent year
of the project.
2.2 Tutor’s evaluation of student
projects
The following comments are typical for the
two years of the study:
• the video newsclip search was sometimes a reflection of the students’ own interests, e.g. the politics students chose a clip on a national bus strike;
• the video transcriptions, though found by many students to be the most difficult task, were generally very good; examples of subjects include: avalanche; exploratrice; grève; contamination de sang;
• the précis writing ranged from very good to average, as this essential skill continues to be taught erratically in UK schools. For some students this is an entirely new skill, and it must be taught each year to all of the students as early as possible in the first term, a fact recognised by both students and tutor;
• the stratégie writing was quite good, and again students spent a lot of time on it, as it forced them to think about how to be an instructional designer. Some efforts were, however, overlong and reflected a similar level of difficulty as in the précis writing;
• the language exercises were quite varied and offered room for some creativity. The list of student-compiled tasks includes: Préparez-vous; vrai et faux; synonymes; compréhension; cochez la bonne réponse; l’essentiel (with certain trick questions); vérification; remplir les blancs; expressions-clés; and Conclusions, which were subjects for general discussion and formal debates. However, it must be stated that linguistic errors did occur here; indeed, this is where the majority of the writing mistakes tended to appear;
• the work diaries or log books, although regularly recorded, also held many writing mistakes, perhaps reflecting their low position of importance in the students’ minds in the overall project.
3. Integration issues
There are many issues surrounding the question of software integration into language curricula; the following represent the most pertinent topics in our situation:
• although the tutor workload initially increased through the preparation and redesign of the course and through students’ demands on the tutor outside the classroom, the situation has eased somewhat to the level of a ‘typical’ language course;
• technical problems have been minimal due to the working relationship with the designer and programmer, the institutional support, and some tutors’ expertise;
• when and where it is appropriate to use the
software. These important issues ranged
from the practical (having access to TT2 in
an open access area and using it in class
with headphones having to be provided) to
the theoretical. Hemard (1997: 19) states
that we should: “Clearly identify the language learning or teaching goals to be
achieved by the application” and Hammond (1993: 53), in reminding us that
human learning is extraordinarily varied,
points out that: “Generic prescriptive
guidelines for educational [hypermedia]
design have only limited utility, and the
author must take account of many of the
characteristics of the learning situation and
how people are likely to learn from the
artefact in question”. Educational technologists such as Hammond tell us that in utilising hypermedia we should be adding
other tools to the repertoire of learning
styles and not exclude other styles learned
and exploited earlier in the student’s life.
This would form part of the oft-quoted
‘enhancement of the learning process’, but
it also must encapsulate a relatively new
learning process for some students unfamiliar with hypermedia and, as stated earlier, some students need to be ‘encouraged’
in this learning process, with the benefits
being made very clear to them right from
the start in explaining why exactly they are
doing certain things in their projects. Combining other skills that they were acquiring and practising, such as ‘reading’ news images and examining news presentation styles and content, clearly enthused the students. For example, one
group did a project on Laurence de la
Ferrière, the female explorer, and carried
out significantly more background
research than was required on how the
French televisual news presented the
explorer’s adventures and how the various
printed news publications did it. The group
then distilled their gathered information
and included it innovatively as part of their
own exercices. In the larger context, this
formed a small part of the students’ finished project. However the group,
although fully aware of this fact, pursued
this ‘mini’ project because they were
deeply interested in the subject. Language
learning and practice, the achieved ultimate goal, almost became secondary in
these instances. Here we find echoes of
Hammond’s (1993: 59) comments: “...that
the more the learner thinks about the material, the better they will remember it –
where ‘more’ does not mean simply for
longer but in greater depth and variety”;
• promotion of effective learning. In this situation, it is arguable that this is partly
achieved through the generation of teaching materials for someone else. Gardiner (1989) has described this as the generation
effect where one learns more from the
material that one has produced oneself. As
a concept, this is a continuation of the
enactment effect, or learning by doing,
which has been extensively documented
and proven elsewhere;
• redesigning of a course curriculum to
account for the multimedia element. The
course designers had many discussions
both during and at the end of each year on
the development of an already established
language course and on any necessary
changes to the course. The changes to the
original course mainly included the setting, production and assessment of the student project work. The marking scheme
itself is under continual evaluation but currently stands thus: the project represents
15% of the overall mark for the course; 8%
is for ‘correct language usage throughout’,
including the diary report; 4% is for the
concepts used in the mixture of students’
own exercises and other components; 3%
is for an overall impression mark. During
the presentation of this paper, some commentators thought this accreditation to be
rather low given the amount of work
required of the students (even within a
groupwork project) and this author agrees
somewhat with those sentiments. However, as the project work establishes itself
as a legitimate and advantageous learning
and teaching scheme within the course and
satisfies the requirements of the course
examiners and designers, the accreditation
may well increase in the future.
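The weighting just described (15% of the course mark: 8% for correct language usage, 4% for exercise concepts, 3% for overall impression) can be sketched as simple arithmetic. The 0–100 sub-scores and the `project_mark` helper below are illustrative assumptions, not the course’s actual marking computation.

```python
# Sketch of the marking scheme described above. The three weights sum
# to the project's 15% share of the overall course mark.
WEIGHTS = {"language": 8, "concepts": 4, "impression": 3}

def project_mark(scores):
    """Convert 0-100 sub-scores into the project's contribution
    (0-15 percentage points) to the overall course mark."""
    assert set(scores) == set(WEIGHTS)
    return sum(WEIGHTS[k] * scores[k] / 100 for k in WEIGHTS)

# e.g. strong impression, middling concepts, good language:
mark = project_mark({"language": 75, "concepts": 50, "impression": 100})
```

The sketch makes the accreditation concern visible: even a flawless project can move the overall course mark by at most 15 percentage points.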
4. Future use and development of
the software as adequate
courseware and research
material
The software itself will be continually developed by the original designers and from our
perspective it is intended to:
• integrate the use of Tendril and other authoring tools for student use;
• carry out further research on précis and stratégie writing as practised by students within a multimedia environment;
• begin research on student implicit and explicit knowledge where learning from images occurs when viewing video clips;
• continue research on individual language learning styles, their differences and student metacognitive skills;
• move on from the use of news clips to include sections from films, televised discussions and documentaries.
5. Conclusion: why we should
integrate
This conclusion is positive for the most part as
the overall learning process is believed to be
motivating, challenging and new. Students felt that it had encouraged the development of various transferable skills, greatly helped their understanding of certain aspects of French civilisation, and increased their confidence in understanding, writing and speaking the French language. In spite of the
amount of effort required of the students
throughout the year given the accreditation
scheme (the perhaps ‘negative aspect’), the
production of their own Dossiers did offer
them a source of satisfaction and pride in their
work, especially in the knowledge that it
would be used by future groups of learners.
It cannot be claimed that the integration
freed up time for the tutor and this may always
be the case with newer courses, but it can be
claimed that the project work did enthuse these students: they learned, and they enjoyed the challenge. A generation effect was discerned from the returned sets of questionnaires and from interviewing the students individually.
In some ways, the question “Why integrate?” is a fatuous and superfluous one.
Expectations are there (and rising) from many
quarters to use CALL software in our teaching
at tertiary level as much as possible. Whilst
welcoming these ‘pressures’ to a large degree,
it must be remembered that we, the tutors,
should attempt to obtain and maintain as much knowledge as we can about, and control over, the choice (and even the creation) of the software, its use and its appropriate pedagogical integration.
References
Barker P. and King T. (1993) ‘Evaluating interactive multimedia courseware – a methodology’, Computers and Education 21(4), 307–319.
Gardiner J. M. (1989) ‘A generation effect in memory without awareness’, British Journal of Psychology 80, 163–168.
Hammond N. (1993) ‘Learning with hypertext: problems, principles and prospects’. In McKnight C., Dillon A. and Richardson J. (eds.), Hypertext: A Psychological Perspective, Chichester: Ellis Horwood, 51–69.
Hemard D. P. (1997) ‘Design principles and guidelines for authoring hypermedia language learning applications’, System 25(1), 9–27.
Laurillard D. (1995) ‘Multimedia and the changing experience of the learner’, British Journal of Educational Technology 26(3), 179–189.
Oxford R. L. (1995) ‘Linking theories of learning with intelligent computer-assisted language learning’. In Holland V. M., Kaplan J. D. and Sams M. R. (eds.), Intelligent Language Tutors: Theory Shaping Technology, Mahwah, NJ: Lawrence Erlbaum, 359–369.
Plowman L. (1988) ‘Active learning and interactive video: a contradiction in terms?’, Programmed Learning and Educational Technology 25, 289–293.
Svensson T. and Nilsson L.-G. (1989) ‘The relationship between recognition and cued recall in memory of enacted and non-enacted information’, Psychological Research 51, 194–200.
Liam Murray has been working in CALL since 1989. He has developed “Hypermots” for literary study of Sartre’s “Les Mots”; he is now researching areas of software integration, Web pedagogy and writing methodologies in a multimedia environment.
Email: [email protected]
ReCALL 10:1 (1998) 109–117
Using the Internet to teach English
for academic purposes
Hilary Nesi
University of Warwick
The paper describes how networked self-access EAP materials have been developed at Warwick University since 1992. The current package of materials (The CELTE Self-Access Centre) can be freely
accessed from the World Wide Web, and aims to provide some basic training in Information Technology alongside more conventional language and study skills activities. Problems of development and distribution are discussed, including the resistance of those EAP practitioners who have little experience of
the Internet in an educational context, and the unwillingness of users to interact with unknown task setters.
Background
Like most English-medium Higher Education
Institutions, Warwick University provides
courses in English for Academic Purposes
(EAP) for students whose first language is not
English. Such provision includes a presessional programme, a full-time certificate programme, and a programme of lunch-time and
evening classes. Yet these courses alone do not
entirely meet the need for English language
and study skills support. Exchange students
and visiting academics often come to Warwick
during periods when no EAP courses are on
offer, and full-time students often find that
lessons clash with departmental timetables.
Some learners also have language needs that
differ from those of the majority of participants, and which therefore cannot be catered
for in class.
We needed a safety net for those who were
missing the opportunity to develop their language and study skills by more conventional
means. A Self-Access Centre was the answer,
but before it could be created several practical
problems concerning location, staffing and
space had to be resolved. It was difficult to
decide on a location for a Self-Access Centre,
because Warwick University is spread across a
three-part campus covering 500 acres. Many
university campuses are far more extensive, of
course, but we still felt that some potential
users would find it difficult to reach a single
centre, wherever it was placed. We did not
have the resources to staff multiple centres,
and indeed it would have been costly to staff
even a single centre outside conventional
office hours. As at most British universities,
supplementary EAP at Warwick is financed by
a small central fund and is provided free of
charge to students; making them pay for the
extra expense of staff time would have
defeated the purpose of a scheme designed to
increase the availability of English language
support. A final consideration was the practical problem of finding the space for all potential users and a sufficient quantity of
resources. No permanently available resource
room on campus would have been large
enough to accommodate all the self-access
learners who might wish to use it at any one
time.
As a solution to these problems, in 1991
we began to develop a collection of computer-based English language learning exercises, to
be accessed via the hundreds of networked
workstations on campus. Using the University
network cost virtually nothing, as the network
existed independently of any use made of it
for teaching purposes. We were, however, provided with a small grant from the University
to cover the purchase of software, and to print
questionnaires and publicity leaflets.
Our self-access materials were created
using six well-known authoring programs:
Eclipse (John Higgins) and Choicemaster,
Gapmaster, Matchmaster, Pinpoint and Vocab
(Wida Software). A front-end menu was created at Warwick, which presented materials
not according to exercise type but according to
learning purpose. It had four main categories
(Functions, Grammar, Topics and Vocabulary),
and a range of Topic subcategories (British
Life, University Life, Places, Jobs and Colloquial English). A pilot version of the package
was trialled in 1992 (Tsai 1992), and in
response to feedback from students, EAP
tutors and subject tutors an expanded version
was made available in 1993 (Nesi 1993,
1995).
Nowadays a number of British universities
provide a similar service for overseas students,
and in some respects our early efforts were no
different from those of many other institutions.
The Warwick package was, however, exceptional in two important respects: it was
extremely accessible (workstations were
located in every area of the campus, and many
were available 24 hours a day, all year round),
and for twelve months, from February 1993 to
February 1994, we used a network monitor to
record every detail of its use.
The network monitor we used was Sabre
Meter, a software package which recorded use
in terms of the activity and program the user
selected, the time and the date when the activity was accessed, the amount of time spent on
the activity, and the user’s department and status (undergraduate, postgraduate, researcher or
staff). Details of the Sabre Meter data are
given in Nesi (1996).
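The kind of aggregation this monitoring made possible can be sketched in a few lines. The record layout below is a hypothetical illustration (Sabre Meter's actual output format is not documented here); it simply shows how figures such as distinct users, instances of use longer than four minutes, and average session length could be derived from records carrying the fields listed above.

```python
from datetime import datetime

# Hypothetical usage records with the fields Sabre Meter captured:
# activity selected, time/date of access, duration, and the user's
# department and status. Field names and values are illustrative.
records = [
    {"user": "u1", "dept": "Engineering", "status": "postgraduate",
     "activity": "Gapmaster: articles",
     "start": datetime(1993, 3, 1, 2, 15), "minutes": 17},
    {"user": "u2", "dept": "Economics", "status": "undergraduate",
     "activity": "Matchmaster: idioms",
     "start": datetime(1993, 3, 1, 14, 0), "minutes": 3},
    {"user": "u1", "dept": "Engineering", "status": "postgraduate",
     "activity": "Vocab: university life",
     "start": datetime(1993, 3, 2, 23, 40), "minutes": 12},
]

# Keep only instances of use longer than four minutes, as in the
# reported statistics.
substantial = [r for r in records if r["minutes"] > 4]

distinct_users = len({r["user"] for r in substantial})
distinct_depts = len({r["dept"] for r in substantial})
mean_minutes = sum(r["minutes"] for r in substantial) / len(substantial)

print(distinct_users, distinct_depts, len(substantial), mean_minutes)
```

Because each record also carries a timestamp, the same data supports the observations below about use at unexpected times of the day and night.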
In the twelve-month monitoring period
Sabre Meter recorded 391 different users of
the learning materials from 33 different
departments, and 1668 instances of use for
longer than four minutes. Although we were
not reaching all the students we had hoped to
provide for, we found that the package was
being accessed by a broader range of users
than we would have been able to meet face-to-face. We also registered many instances of use
at unexpected times of the day and night; a
number of users, for example, had a predilection for the very early morning hours that
could never have been satisfied by conventional timetabled classes. These two findings
seemed to justify our choice of the University
network as a medium for learning materials.
We also noted that users seemed to treat
our materials as a kind of light relief from
other computer-based activities that were
unrelated to English language learning. The
average duration of access was only fifteen
minutes, and matching and selecting exercises
tended to be far more popular than writing
activities, which demanded greater effort, concentration and time.
It may be that the users of our networked
materials were fairly typical of self-access
learners in general. A network monitor records
unobtrusively, without interfering in any way
with normal user behaviour, and for this reason it may reveal patterns of use that are not
evident from the questionnaires, report cards
and diaries that are used to evaluate learner
activity in a conventional self-access centre.
Perhaps most self-access learners are disinclined to work on difficult language production tasks when they receive no human support or feedback, and when they do not even
meet the task setters. However, our students’
rather fragmented approach to the learning
package may also have been due to the fact
that, despite our attempt to create coherence
by grouping activities into categories, the
exercises were all free-standing and could be
accessed in any order rather than along specified “self-access pathways” (Kell and Newton
1997). However we went on to improve self-access provision at Warwick, it was clear that
we would have to explore ways of providing
greater guidance and more personalised support for our students.
Why the Internet?
By 1995 our self-access package was looking
rather old-fashioned. The programs were not
Windows-based or even mouse-driven, and
the menu which had looked so up-to-date in
1992 no longer impressed those users with
previous IT experience. We also felt that commercial authoring programs were too rigid for
our purposes; it was often impossible to provide all the information we wanted to provide,
in the right format, at the right juncture, and it
was difficult to establish direct links between
different programs and thereby encourage
users to follow a designated learning path.
While retaining all the advantages of the
internal network at the University, the Internet
seemed to offer the additional advantages of
flexibility and accessibility. We could create
activities in all manner of formats, link them
in whatever way we chose, and render them
accessible to students off-campus as well as
all those who used the workstations at Warwick.
We also had a further motive for choosing
to site our materials on the Internet. We were
becoming conscious that the provision of EAP
in British universities was failing to keep pace
with developments in learning technology.
Whilst subject departments increasingly
required students to search for information on
the Internet, EAP tutors seemed to refer
almost exclusively to paper-based resources,
and there seemed to be few if any study skills
materials which trained non-native speakers in
Internet use.
A survey of thirty presessional EAP
courses at British universities in the summer
of 1995 (Jarvis 1997) presented evidence to support this view. Jarvis reports that although most courses provided training in word-processing and the use of on-line library catalogues, only 24% included training in e-mail
use, and only 13% taught students how to
access the World Wide Web. This suggests a
serious failure to meet learner need, perhaps
due to the fact that many EAP course designers and tutors, although highly qualified in
other respects, are themselves unable to use
the Internet and do not recognise its importance for university study.
One of the stated aims of our new self-access materials, therefore, was to provide
some basic Information Technology training.
We hoped that an EAP Website would introduce learners to some of the conventions of
the Internet, encourage e-mail communication, and also provide a springboard to other
sites on the World Wide Web.
The development of an EAP self-access centre on the Internet
In 1995 we applied for and received funding
for a small two-year project to develop EAP
materials on the Internet. As with our previous
project, we began with very small-scale trialling of materials. Access was limited to
password holders only, and the overall design
of the package was subject to continuous
change as we responded to feedback from
tutors and users. After talking to Warwick
University Computing Services we accepted
that our initial plans for audio and video materials were not viable; many of the Warwick
computers did not have sound cards, and the
University could not provide headphones for
use at those workstations where audio reception was possible. We therefore concentrated
on creating reading skills materials instead.
Our first attempts were dogged by a number of problems; we were refused copyright
permission for many textbook, newspaper and
magazine excerpts that would have made
excellent EAP reading passages, and we found
that some of the activities we were creating
were actually more successful on paper than
on screen, being too long, or lacking the links
and textual commentary normally associated
with computer-based materials. Later versions
of the site featured reading activities based
around shorter texts, and a new section containing writing and editing tasks.
We moved on to the second stage of development in September 1996 when we introduced the materials to students attending the
final phase of our three-month summer presessional programme. At this stage, when we had
hoped to examine use of the materials on a
larger scale, we encountered some difficulty in
publicising our product. The final phase of a
presessional programme is always very intensive because students have to produce large
amounts of coursework upon which their
admission to the university may depend. This
means that they and their tutors have less time
for experimentation than at earlier stages in
the course. Although we printed leaflets advertising the site, many of the presessional tutors
did not pass the leaflets on to their students as
we had hoped, and many students who were
unfamiliar with the Internet also encountered
difficulties with the system of password
access. Nevertheless we recorded 119 hits in
September, followed by 173 in October when
demand for English language support is usually at its greatest. After that use settled down
to an average of 46 hits per month until Easter
of the following year. (By the summer term
access to the site was negligible.)
In the meantime we were redesigning and
expanding our materials, so that at the beginning of the 1997 presessional course we were
ready with a much more extensive, open-access
Website (http://warwick.ac.uk/EAP). Because
for the first time we were offering the materials
to anyone with access to the Internet, our opening page informed first-time users of their
rights and obligations respecting the materials.
From the main menu, users had access to five
separate sections (About the course, Editing
Skills, Study Skills, Working with Texts, Guestbook and Bulletin Board), and in each of the
three main sections, Editing Skills, Study Skills
and Working with Texts, an attempt was made to
lead users along pathways of progressively
more difficult tasks. To help users navigate their
way across sections a Back to Main Menu button was provided on every page.
The materials made use of a wide variety of
exercise types, including prediction and comprehension questions, unjumbling tasks, note-taking and the editing of grammatical errors in
student writing. Most feedback was instant,
via answer buttons, pop-up comment screens
and hypertext links to model answers, but in
some activities we provided learners with the
opportunity to send their answers on to us. We
‘personalised’ the materials further by posting
photographs of ourselves on several pages,
and creating e-mail links at the bottom of each
page, inviting users to contact us with comments or questions. A selection of users’ comments was displayed in the Guestbook section.
One further way in which we exploited the
flexibility of the medium was to present materials in a variety of formats, with hand-written
sections, for example, to demonstrate the way
a skilful reader might take notes, and the judicious use of colour photographs and illustrations (although with these we were sparing,
because we did not want to waste users’ time
downloading unimportant images).
Unlike the package of EAP materials we
had created for the internal University network
in 1992–4, our Self-Access Centre on the
Internet aimed to present Study Skills information as well as to provide language practice.
Presentation and practice can be integrated
much more successfully on a Web page than
within a suite of commercial authoring programs. Thus we used the Study Skills section
to describe, categorise and list a large number
of dictionaries for language learners and subject specialists, and to explain academic conventions concerning the compilation of bibliographies. In both of these cases, our materials
addressed IT issues that are almost completely
neglected by published EAP textbooks.
Acknowledging the role of IT in modern Study Skills, we discussed the use of electronic dictionaries (www.warwick.ac.uk/EAP/study_skills/dictionaries/elect.htm), and how to cite electronic sources (www.warwick.ac.uk/EAP/study_skills/compiling_a_bibliography/www.htm).
The Bulletin Board, designed to provide
information about presessional trips and local
places of interest, offered links beyond Warwick to a number of relevant tourist and train
timetable Websites. Although we did not want
to ‘lose’ all our users in the vast expanse of the
Internet, we felt that sample links such as
these, with the option of instant return to the
Bulletin Board section, would provide some
training for novice Internet users and encourage them to conduct further searches on their
own.
Evaluation
Unfortunately, the kind of close monitoring of
user behaviour we had been able to achieve
with Sabre Meter on the local area network
could not be replicated on the Internet. Moreover, having decided to allow open access to
our site we had no means of identifying our
visitors by password; our server statistics only
made the simple distinction between those that
were ‘local’ and those that were ‘remote’. We
were, however, able to monitor response to
our materials by three means: weekly user statistics, questionnaires to tutors and students on
the Warwick presessional course, and comments sent to us via the Website pages and the
Bulletin Board.
General user statistics for the two-month
period from August 10 to October 12 1997 are
presented in Table 1 below. The enormous
increase in use at the beginning of September
is largely due to an influx of visitors from outside Warwick. During this period I wrote to
presessional Course Directors at 72 British
universities, inviting them to examine our
materials and advertise them to their students.
Many new remote visitors must have been
introduced to our site as a result of this letter,
although the number of hits decreased in the
following weeks. Staff on some presessional
courses looked at our materials as soon as they
were notified of their existence, but then simply printed and photocopied pages to hand out
to their students rather than urging them to
access the site for themselves. It goes without
saying that the interactive nature of the materials was lost in hard-copy format.
It should be noted that Table 1 only shows
the number of occasions that http://www.warwick.ac.uk/EAP was accessed. At present we
have no means of knowing how many individual visitors accessed the site, nor how long
they spent on different activities. It is possible
that some individuals progressed no further
than the opening page, and equally likely that
others by-passed the opening page by going
directly to another page they had previously
bookmarked.
Table 1 A summary of weekly server statistics, August 10 – October 12 1997

Week beginning:    Local    Remote    Total
August 10             37        13       50
August 17            142        23      165
August 24            216      1395     1611
August 31            201      3885     4086
September 7          144       841      985
September 14         198       779      977
September 21          81       521      602
September 28         182       478      660
October 5            103       512      615
Total               1304      8447     9751

Table 2 gives a simplified summary of use in each section of the Website. The two components of the largest section, Study Skills, are listed separately. The table provides a rough measure of the relative popularity of the sections by showing the total number of hits to the opening page of each section or subsection. Some visitors accessed later pages in the section directly, but with our present monitoring system we were unable to track the progress of individuals from page to page.

Table 2 A summary of use in six areas of the Website

Section                                    Total number of hits
Editing Skills                             313
Working with Texts                         682
Study Skills: compiling a bibliography     652
Study Skills: dictionaries                 756
Guestbook                                  176
Bulletin Board                             198

The two Study Skills sections dealt with some rather neglected Study Skills issues. Study Skills: dictionaries, for example, gave up-to-date information about recently published dictionaries in book and in electronic
form, and Study Skills: compiling a bibliography dealt with the problems that can arise
when the authorship of a source is unclear.
Perhaps these sections were consulted more
often because they filled a gap in the Study
Skills literature. However, the frequency with
which these subsections were accessed was
probably also affected by the quantity of information each section contained. The Study
Skills: dictionaries section, for example, consisted of many pages, whilst Editing Skills was
comparatively short. Guestbook and Bulletin
Board did not contain any learning materials
as such, but were intended to display topical
information and comment.
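The weekly Local/Remote tallies reported in Table 1 can be derived from raw access records by classifying each hit's hostname and bucketing it by week, along the following lines. The record format, the campus-domain test and the helper names here are assumptions for illustration only; the paper does not say how the Warwick server statistics were actually generated.

```python
from collections import Counter
from datetime import date, timedelta

def week_beginning(d: date) -> date:
    """Return the Sunday starting the week containing d (the weekly
    tables label weeks by their Sunday, e.g. 'Week beginning August 10')."""
    # Python's date.weekday(): Monday = 0 ... Sunday = 6
    return d - timedelta(days=(d.weekday() + 1) % 7)

def tally(hits):
    """hits: iterable of (hostname, date) pairs for requests to the site.
    Returns a Counter keyed by (week_beginning, 'Local' | 'Remote')."""
    counts = Counter()
    for host, d in hits:
        # Hypothetical rule: hosts in the campus domain count as 'Local'.
        origin = "Local" if host.endswith(".warwick.ac.uk") else "Remote"
        counts[(week_beginning(d), origin)] += 1
    return counts

# Illustrative data, not real log entries.
hits = [
    ("pc1.csv.warwick.ac.uk", date(1997, 8, 12)),
    ("dialup.example.ac.uk", date(1997, 8, 13)),
    ("pc2.csv.warwick.ac.uk", date(1997, 8, 19)),
]
counts = tally(hits)
print(counts[(date(1997, 8, 10), "Local")])  # hits in the week beginning August 10
```

A tally of this kind counts page requests, not people, which is exactly the limitation noted above: it cannot distinguish individual visitors or track their progress from page to page.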
Despite the fact that 1019 hits were registered from Warwick before the end of the presessional course, we were unable to uncover
much evidence of use of the Website by presessional students. A total of 218 students attended the
third (September) phase of the course, and we
sent out questionnaires to all of them in their
final (and most hectic) week. We received 67
replies, representing 30.7% of the total student
number. Findings are summarised in Table 3.

Table 3 Presessional students’ use of the World Wide Web

Question                                                                      Yes    No
Had you ever accessed the WWW before the presessional course?                  50    17
Did you access the WWW during the presessional course?                         44    23
Did you access the CELTE Self-Access Centre during the presessional course?     5    56

When asked to name WWW sites that they were familiar with, many respondents cited the library OPAC service (available as a link from the University of Warwick homepage) and e-mail. This suggests that student knowledge of the World Wide Web was not as extensive as might first appear. Some students claimed to have gathered data for their projects from the Internet, but the most popular sites seemed to be those containing information and news about the students’ native countries. In some cases respondents who claimed to have accessed newspapers in their first language on the Web explained that they had been unable to use the CELTE Self-Access Centre because they did not know how to locate new Websites. Presumably such students were using the Web in a very limited way, to access just one or two sites that compatriots had found for them.

The five students who had used the CELTE Self-Access Centre seemed to have been pleased with the materials that they found there. Respondents’ reasons for not accessing the site are summarised in Table 4.

Table 4 Presessional students’ reasons for not accessing the Website

Reason                                                   No. of students
Lack of time                                             18
Did not know it existed / did not know how to access     15
Did not think it would be useful                          7
Not happy working with computers                          2
Forgot                                                    1

Some of the students who gave lack of time as an excuse commented that they intended to access the site once the presessional course was over.

It should be noted that the content and use of the CELTE Self-Access Centre was demonstrated to all new students at each stage of the presessional course. In theory all students should have known how to find the site, and how it related to their EAP studies, but this kind of information also needed to be reinforced by presessional tutors, as newly-arrived students are often overwhelmed by the quantity of information they receive in the first few days of their course. In practice, very few tutors were able to advise their students in this respect. The responses of 19 out of the 25 tutors on the September presessional course are summarised in Table 5.

Of the 11 respondents who claimed to have mentioned the CELTE Self-Access Centre in class, most admitted that their references had been very general. In other words, although they had told the students that the site existed, they had not recommended specific sections or pages, in response to specific needs. Although all new tutors had been introduced to the CELTE Self-Access Centre on their induction day, it was not surprising that tutors did not provide detailed information if they had not examined the site for themselves. Reasons given for not accessing the site are summarised in Table 6.

Table 6 Presessional tutors’ reasons for not accessing the Website

Reason                          No. of tutors
Lack of time                    3
Forgot                          3
Did not know how to access      1
Eyesight problems               1

Our final means of evaluating the success of the CELTE Self-Access Centre was by analysing the comments that users of the site sent to us. We had at first been worried about providing this facility, for fear that we would be inundated by e-mail messages from learners around the world. In the period August 10 – October 12 1997, however, we only received 16 messages of this kind, and all of these were from users who were known to us personally. It would appear that despite our attempts to ‘personalise’ our materials by posting photographs of ourselves, and inviting comments, users whom we had not met before were very unwilling to make contact.

The comments were without exception favourable, although some users suggested
that we should add more activities and information. It could be argued that our respondents would have been unwilling to criticise
us directly. Perhaps only politeness prevented
them from being more negative about some
aspects of the site. A sample of comments is
given below:
It is quite useful and practical for the material,
the point is clear listed so that someone can
improve the reading skill by reading in a short
time. But if there are more detail info in the other
pages, I think it will suitable for more students as
some students like to read more info in their
spare time.
The information is short but clear especially for
the areas of the study skills and reading skills. It
is because students usually do not like reading
the long passages.
I think the page about choosing and using dictionaries is useful. Because there are many
choices of dictionaries for us and always make
us confuseing.This page shows the information
of dictionaries and helps us a lot. It gives us the
most suitable and worthy choice to buy.
It is a great idea to get information from such a
network on computer that really save my time
instead of catching them from a thick, big book
which really scare me.
Table 5 Presessional course tutors’ use of the Website

Question                                                                                Yes    No
Had you ever accessed the WWW before the presessional course?                            13     6
Did you access the CELTE Self-Access Centre during the presessional course?               7    12
Did you refer to the CELTE Self-Access Centre in class during the presessional course?   11     8
As all we can see from the pages, they are all
well organized and presented very well for the
users. Besides, there are always some beautiful
pictures and colored phases which draw my
attention easily and is my pleasure to read them.
Normally, I find reading very boring but now I
have love to.
Conclusion
We conclude that the CELTE Self-Access
Centre has been reasonably successful in its
first few months of full-scale operation, in
that it has attracted unexpectedly large numbers of visitors and favourable comments
from EAP learners. We were, however, disappointed that we could not integrate the use of
our site more completely into the fabric of our
presessional programme (and also into the
programmes of other universities). Busy EAP
tutors with little personal experience of the
Internet preferred to refer their students to
hard-copy materials, although many were
sympathetic to our project in principle.
Clearly more work needs to be done away
from the computer screen, by adding references to the Self-Access Centre in printed
presessional handbooks, and drawing attention to its role in supporting the aims and
objectives and the syllabus of the presessional
programme. EAP practitioners in general
(course designers, tutors and materials writers) need to become more aware of the status of
IT in modern university studies if we are to
continue to provide relevant study skills training.
We were also disappointed (and paradoxically perhaps a little relieved) that so few of
our site visitors chose to send their work to us,
or interact with us in any way. We wonder
whether this is an inevitable consequence of
self-access study at a distance, regardless of
the medium of instruction. Users were probably too shy to make contact, and also unwilling to
perform hard language production tasks when
these formed no part of their assessed programme of work. If we can persuade more
tutors to become involved we might be able to
change learner attitude in this respect.
The Internet encourages a more visual and
non-linear form of literacy (Fisher 1997), and
is not the best medium for some study needs,
such as extensive, linear reading of a single
text. Web-based materials are therefore limited
in their scope, and tend to be based around
short texts with plenty of opportunities for
learner intervention and interaction. Their creation requires rather different skills than those
of the hard-copy materials writer, and we are
still learning how to make the most of the
medium. Fortunately Web-based materials are
also organic; pages grow, or wither and die,
and the site that presents itself in the summer
of 1998 may well be very different from the
site in the autumn of 1997. We hope it will be
bigger, better, and ready for a new influx of
arriving students.
Acknowledgements
The projects described in this paper could not
have been completed without the help of two
University of Warwick postgraduate students.
The first package of EAP materials on the University network was developed and evaluated
with the help of Celia Tsai. The CELTE Self-Access Centre (http://www.warwick.ac.uk/EAP)
is managed by Benita Studman-Badillo, who
also played a large part in its design and development.
Both projects were financed by grants from
the University of Warwick Teaching Innovations Fund.
Bibliography
Fisher P. (1997) ‘Death of book exaggerated’, The
Daily Telegraph, Tuesday October 28, 8–9.
Jarvis H. (1997) ‘The role of IT in English for Academic Purposes: a survey’, ReCALL 9 (1), 43–51.
Kell J. and Newton C. (1997) ‘Roles of pathways in
self-access centres’, ELT Journal 51 (1), 48–53.
Nesi H. (1993) ‘Self-access system for English language support’, ReCALL 8, 28–30.
Nesi H. (1995) ‘Self-access for students you may
never meet’, CALL Review, March 1995, 14–
15.
Nesi H. (1996) ‘The evaluation of a self-access programme on a university computer network’. In Rea-Dickins P. (ed.), Issues in Applied Linguistics: Evaluation Perspectives no 1, University of Warwick, Language Testing and Evaluation Unit, 41–48.
Tsai C. (1992) The setting up and preliminary evaluation of the CELT package for English for Academic Purposes on Warwick University Computer Network. Unpublished MA dissertation, University of Warwick, Centre for English Language Teacher Education.
Hilary Nesi is a lecturer in the Centre for English
Language Teacher Education at the University of
Warwick. She has an MSc in ESP from Aston University, and a PhD from University College
Swansea. Her research interests include dictionary
use, ESP/EAP vocabulary development, and the
discourse features of electronic text.
ReCALL 10:1 (1998) 118–126
The ‘third place’ – virtual reality
applications for second
language learning
Klaus Schwienhorst
Trinity College Dublin
Recently we have seen a shift of focus in using the Internet from often inappropriate human-computer
interactivity to human-human interaction, based on collaborative learning concepts like learner autonomy and tandem learning. The renewed discussion of interface design has provoked a reconsideration
of the traditional graphical user interface and a shift towards more intuitive interfaces like virtual reality,
mainly building on the concept of constructionism. The MOO (multi-user domain, object-oriented) system provides a flexible, easy-to-use multi-user virtual reality that allows for the integration of language learning tools and resources in a common environment, a third place.
1. From interactivity to interaction,
from Graphical User Interface to
Virtual Reality
During the past few years the notions of how
the Internet can and should be used for language learning purposes have shifted. The
wealth of information, so often described by
the term ‘information highway’, has been
matched by notions of the ‘global village’,
emphasising the different communication
modes that have been enabled through recent
developments in Internet technology. In language learning terms, this has enabled educators and students to consider Internet-based
interaction between students in addition to
existing frameworks of student-computer
interactivity. I would like to separate the terms
interaction and interactivity to point out their
fundamental differences and the resulting
areas of application in language learning.
Interaction will subsequently refer to human-human communication, interactivity to
human-computer communication.
Student interaction over the Internet has
been facilitated and diversified by increasing
computer hardware and software performance
as well as ever-increasing bandwidth. There is
now a variety of communication modes available over the Internet, from text-based and
thus keyboard-driven modes like e-mail to
multiple live audio and video conferencing
systems that enable whole groups or classes to
interact simultaneously with each other.
A second recent trend in computer technology has been the renewed emphasis on the
computer interface, more specifically on the
design and nature of human-computer interactivity, an issue that has been closely linked
with increasing computer performance. Since
the first Graphical User Interface (GUI) was
developed at Xerox PARC and popularised by
Apple, there has been no basic change in
human-computer interactivity. The point-and-click button or menu interface has become the
standard for what is called interactive multimedia platforms. The user moves the cursor to
the appropriate place and clicks the mouse
which causes something to happen. The interactivity is limited, because “the machine can
only respond to an on-off situation: that is, to
the click of the mouse” (Stone 1995: 10).
However, interactivity should imply much
more than this. Andy Lippman from the MIT
Media Lab set forth a definition in the early
1980s which still has significance today. In it
he describes it as a mutual and simultaneous
activity on the part of both participants, usually working toward some goal. He went on to
name five corollaries to his definition that can
be summed up as follows: “interactivity
implies two conscious agencies in conversation, playfully and spontaneously developing a
mutual discourse, taking cues and suggestions
from each other as they proceed” (Stone 1995:
11). We all know that any current interface
and application in CALL is far from being
interactive in Lippman’s sense. The second
agency, the computer, fails to provide the
interactivity that we consider necessary to
support collaborative language learning models like the one developed within a tandem
language learning partnership. The computer
also very often fails to assist the student in
developing learner autonomy; for example,
research in artificial intelligence cannot yet
provide reliable and differentiated computer
counselling mechanisms to help the forms of
interplay that shape student-student or student-teacher interaction, like managing, monitoring and planning learning progress. We are,
at least for the moment, presented with severe
and fundamental restrictions on the concept of
human-computer interactivity. The current
limitations of artificial intelligence and linguistic research force us to reconsider the role
of the computer in language learning. Many
see Virtual Reality (VR) as a more creative
and productive way of interacting with computers than the traditional interface (Halfhill
1996a, b).
Consequently, the focus of attention in
computer-assisted language learning has
recently shifted from the traditional definition
of the computer as tutor or equal partner and
its emphasis on human-computer interactivity
to computer-mediated communication that
sees the role of the computer as providing an
environment for human-human interaction
(see Warschauer 1996).
2. Tandem language learning
The principles of tandem learning have been
expressed in detail elsewhere (Little and
Brammerts 1996) and also during the EUROCALL 1997 conference (Little and Ushioda
1998). Here it will suffice to restate them. Tandem learning is based on two major principles: reciprocity and learner autonomy. Reciprocity implies that both learners use both languages in equal amounts and support each other equally in the language learning process (e.g. corrections, target language input). Learner autonomy insists that learners take responsibility not only for their own learning but also for their partner’s. It works towards an
autonomous language user who determines
topics, activities, and working arrangements
and an autonomous language learner who
through analysis of linguistic input and output
can formulate short- and long-term learning
targets, methods to achieve them, and increase
metalinguistic awareness. For a more detailed
outline of the concept, see Little (1991).
Tandem learning originally refers to face-to-face tandems. This implies that, for
instance, a German learner of English and an
English learner of German meet for a number
of weeks, attend courses and counselling sessions together and learn English and German
from each other. In the last few years, this idea
has been extended to Internet-based tandem language learning. Since 1994 a number of bilingual sub-nets have been set up, co-ordinated from Bochum, Germany. In one of these sub-nets, German-English, English-speaking students learning German and German-speaking students learning English are
matched via e-mail by a ‘dating agency’ and
are encouraged to collaborate via e-mail on a
number of tasks using both languages in
equal amounts. Within our project with
Bochum University, however, we decided to
pair the students ourselves (see Little and
Ushioda 1998). The tandem web site
(http://www.tcd.ie/CLCS/tandem/) and other
Internet resources provide a number of materials and task templates for students to work on.
The second component of the e-mail tandem consists of a bilingual e-mail discussion
list. The English-German discussion list, for
instance, can contain messages in English or
German, by either English or German native
speakers. On the list there is no restriction on
content other than general ‘netiquette’ and the
languages used. Topics usually deal with
sporting events, political or cultural events, or
simple questions of grammar and vocabulary.
Recently the e-mail tandem exchanges have
been complemented by text-based virtual realities.
3. Virtual Reality
Virtual reality can be defined as “the idea of
human presence in a computer-generated
space” (Hamit 1993: 9), or more specifically,
“a highly interactive, computer-based, multimedia environment in which the user becomes
a participant with the computer in a ‘virtually
real’ world.” (Pantelidis 1993). Transferring
these definitions to Internet-based VR, a distinction between immersive virtual reality
(hard VR) and desktop-based virtual reality
(soft VR) simplifies matters. Desktop-based
VR relies solely on traditional input/output
devices such as monitor, mouse, keyboard, microphone and speakers, while immersive VR also
includes simulators, data gloves or body suits,
shared workbenches etc. Immersive VR is not
the ultimate tool for every application, and is
certainly at this moment not a feasible solution
for Internet-based VR. Desktop-based VR can
be structured according to technological
advancement and system-inherent properties
that make it more or less attractive for language learning purposes.
The first VR applications began as purely
text-based adventure games that were reprogrammed to be able to handle multiple-user
interaction via the keyboard in real time.
MUD 1 (multi-user dungeon) was created in
England in 1979 (Bartle 1990). A more flexible implementation was created by Pavel Curtis and others at Xerox PARC at the beginning
of the 1990s. Curtis’ LambdaMOO environment provided an object-oriented core that
could be reused to create any number of different text-based worlds. The continued revision
of the LambdaMOO core has made it one of
the most widely used foundations of text-based VR. More will be said about the notion of MOO (multi-user domain, object-oriented) later. As
a side note, chat systems, which combine real-time communication with the notion of rooms (different rooms may refer to different topics of discussion), were also created and still exist today. As that is about the only feature of chat rooms that relates to VR, they are regarded not in the context of VR but rather in the context of conferencing modes via the Internet.
In recent years, the MOO concept has been
extended, first by introducing hypertext (for
example in Pueblo-enhanced MOOs). The
next developments were graphics and WWW
enhanced MOOs that contained pictures or
photographs to enhance the notion of 3D
space (see, for example, Cold Paradigm at
http://moo.syr.edu:5555 that creates a Spanish
VR). Then came the first implementation of
VRML 1.0, an example of which is the BioGate system for MOOs that is available for
any MOO now (Mercer 1997). VRML 1.0 still
had several drawbacks: objects could not be
manipulated in real time, thus representations
of humans (avatars) would not move in real
time. There is also a variety of proprietary
systems that include not only VRML 2.0-based or proprietary 3D programming, but
also audio-conferencing, background noises,
individualised representations of characters
etc. (see, for instance, Onlive! Traveller
at http://www.onlive.com, Black Sun at
http://www.blacksun.com, or WorldChat at
http://www.worlds.net/wc; compare Roehl
1996).
3.1 VR and interaction
Right from the beginning of VR, the idea of
using VR to create a place where people can
collaborate has been central. Randal Walser from Autodesk put it like this:
“Cyberspace, the medium, enables humans to
gather in virtual spaces. [...] Were it not for the
stipulation that cyberspace be computer-based, the definition [of virtual reality] would
admit many common forms of theatre, sports,
and games” (Hamit 1993: 144). The existence
and continued growth of hundreds of text-based environments derived from MUD 1 and MOO are living proof of the fascination of collaborating with real humans in desktop-based
environments. Chip Morningstar, one of the
creators of Habitat (a commercial text-based
but graphics-enhanced VR that is maybe the
biggest of its kind with over 10,000 subscribers), also emphasises the importance of
social interaction to create VR: “The essential
lesson that we have abstracted from our experiences with Habitat is that a cyberspace is
defined more by the interactions among the
[users] within it than by the technology with
which it is implemented” (Hamit 1993: 74).
Multiple-user interaction is thus one of the
major factors in creating VR.
Interaction is also of central concern in the
concept of learner autonomy. Here, and hence
in the concept of tandem learning, interaction
does not simply mean exchange between language students, or in tandem, between language learner and native speaker; tandem exchanges are not mere penpalships. The concept of learner autonomy
contains the idea that learning arises essentially from supported performance, which is
central to the works of the Soviet psychologist
Vygotsky. Progression in learning, according
to him, was achieved through the idea of the
“zone of proximal development”, which he
defined as “the distance between the actual
developmental level as determined by independent problem solving and the level of
potential development as determined through
problem solving under adult guidance or in
collaboration with more capable peers”
(Vygotsky 1978: 86). The relationship
between tandem learners is certainly one of
peers, of equal standing as language learners.
In contrast to peers in ordinary classroom interaction, however, they possess the expert
knowledge of native speakers to support each
other. This special learning relationship should
be particularly effective in VR, where the role
of the computer is first and foremost to foster
this interaction and act as mediator between
the learner and the target language speakers
and their culture.
3.2 VR and interactivity
We know that in any formal learning environment the capacity for autonomous behaviour
only develops gradually. VR provides an alternative to the formal environment of the institutionalised classroom, a third place that is
neither work nor home, or in language learning terms, neither the target language culture
nor our native speaker community.
VR applications have been used in language learning programmes before. Some of
the applications included an immersive environment for learning Japanese at the University of Washington (Rose and Billinghurst
1995), a QuickTime-VR based environment
on CD ROM (Trueman 1996), and virtual
models of Greek and Roman buildings to
enhance the traditional classroom (Zohrab
1996). These non-Internet-based applications
have at least one thing in common. They
emphasise the importance of virtual space, the
visualisation of information, giving students
the ability to ‘handle’ and work with information, navigate through it and manipulate it.
After all: “Virtual reality reduces the need for
abstract, extero-centric thinking by presenting
processed information in an apparent threedimensional space, and allowing us to interact
with it as if we were part of that space. In this
way our evolutionarily derived processes for
understanding the real world can be used for
understanding synthesized information” (Carr and England 1995: 1). From this viewpoint, the
space itself potentially becomes a flexible tool
to encourage and enhance learning activities.
In our notion of interactivity, however, this
virtual place needs to be sufficiently flexible
to be influenced by all users, not simply by
passive consumption but by active collaboration with the environment.
Interaction not only between peers but also
interactivity between students and computer
environments has been the foundation of constructionism developed by Seymour Papert
and others (Papert 1993). Papert saw constructionism as a combination of two strands: first,
“it asserts that learning is an active process, in
which people actively construct knowledge
from their experiences in the world. [...] To
this, constructionism adds the idea that people
construct new knowledge with particular
effectiveness when they are engaged in constructing personally-meaningful products”
(Bruckman and Resnick 1995). These principles can be realised quite effectively in VR if
interactivity is not understood as multiple
paths through the same world (like, for example, Microsoft’s Encarta CD ROM) but as a
real opportunity to create meaningful artefacts
for language learning. Links to task templates,
language learning and authentic resources can
be built into VR to make them instantly accessible. In VR systems, especially text-based
MOO systems, it is quite easy for teachers and
students to create private ‘rooms’ and ‘objects’
that can also serve as starting points for language learning activities. Activities and interaction in these rooms can then produce transcripts that students can exploit in a number of
ways. The wealth of target language input in
those files, together with e-mail exchanges,
can produce personal textbooks of much more
meaningful content for language learning than
those commercially available. David Little, in
his concept of learner autonomy, has repeatedly emphasised the importance of learners
devising their own learning materials; the
learners “experience the learning they are
engaged on as their own, and this enables
them to achieve to a remarkable degree the
autonomy that characterizes the fluent language user” (Little 1991: 31). Thus, the existence of VR serves an important purpose: 3D
space and its built-in resources provide not
only a necessary interface, but an environment
that is neutral yet potentially controllable by
both partners and that offers the necessary
tools, resources and activities that provide
incentives and support for language learning.
A shared place and a shared set of activities
form a common point of reference, the third
place, which helps to create social interaction
and facilitates collaborative language learning
à la tandem.
4. The MOO system
I have already mentioned some of the central
functions of the MOO environment, and it is
time to present its object-oriented nature in
more detail. It may be asked why the MOO
environment has been chosen as a platform for
collaboration when other systems, be they advanced systems of 3D VR and/or synchronous communication (like, for instance, video conferencing), seem the more ‘natural’ or ‘real’
means of learner interaction via the net. There
are a number of reasons: when we introduce
new technology into the classroom, we want
to be sure it works, not once but all the time.
Some additional reasons lie in the lack of
interactivity that we find in systems like
video-conferencing as compared to VR as
described above. The programming environment goes back to 1979 and LambdaMOO has
been developed over the past six years. Regular updates and extensions and a wide distribution have consolidated its reliability. Text-based communication is extremely low
bandwidth and thus very fast, an important
feature for institutions that rely on slow
modems. It is free of charge, relies on free
software, its hardware requirements are minimal, and it is cross-platform. It allows for
multi-user access and shared applications (i.e.
working on the same document in real time); it
is always available. It facilitates participation
by its members: any student can extend the
VR and construct his or her own learning
space. The MOO system together with the
recently developed BioGate system (Mercer
1997) features a powerful Java-based interface
incorporating text, WWW, and/or VRML
interfaces to the same VR. Any medium available on the WWW, be it plug-ins or helper
applications, can be incorporated into BioGate. The modular and flexible nature of the
BioGate system means that it can be adapted
to any user’s needs (the BioGate system has
been tested extensively and is now available to
any MOO that wants to employ it). MOO
communication can be instantly recorded into
so-called log files and thus produces instant
transcripts of any session.
5. Diversity University – a
MOO-based VR
Following adventure MOOs and social MOOs
in the early 1990s, the last few years have seen
the emergence of a variety of more or less
focused educational MOOs. Diversity University (http://www.du.org), although not specifically created for language learning purposes,
offers a number of useful tools and environments built around the idea of a virtual campus and has always been at the forefront of
implementing new technology. Programmers
from Diversity University (DU) developed the
BioGate system and some useful tools for
learners. Students and teachers can participate
in DU and create their ideal learning environments.
When connecting to a MOO like DU, the
normal procedure on a first visit is to connect as a guest. This means that a random
name is assigned to you, and you will not be
allowed to build anything. If you want to participate actively in DU, you are requested to
register. This involves giving your name and
e-mail address, stating your research interests,
and explaining the purpose of your request.
The board of managers at DU will then send a
user name and password to your e-mail
address, both of which you subsequently need
in order to connect. Then, if you want to build
something, you have to show one of the managers that you know how, what, and where in
this VR you want to build. This takes the form
of an interview within the MOO. Becoming a
builder involves understanding the nature of
the OO in MOO. Everything within DU, as
indeed in any other MOO, is object-oriented.
Thus, your character is an object with properties, a room is an object, so is a notice board, a
tape recorder, and a robot. Each object takes
up some room in the DU database. If you
were to create two tape-recorder objects, they
could take up more space in that database
than one ballroom object. If you want to start
building, a manager will give you a certain
quota that defines how many objects you can
build. This quota depends on what you want
to use it for. Building a room ‘costs’ a quota
of 1. Building a room is as easy as using an
old DOS text editor and is facilitated by an
extensive help system that guides you through
the process. You can then select an area
within the virtual campus, and a manager
connects your room to it.
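The object-and-quota model just described can be sketched in a few lines. The class and method names below are hypothetical illustrations (building in a real MOO is done with in-world commands such as @dig), not actual MOO code:

```python
# Hypothetical sketch of the MOO object/quota model described above:
# everything is an object with properties, and building consumes quota.

class MOOObject:
    def __init__(self, name, **properties):
        self.name = name
        self.properties = dict(properties)  # e.g. description, owner

class Builder:
    def __init__(self, name, quota):
        self.name = name
        self.quota = quota        # how many objects this user may still build
        self.objects = []

    def build(self, name, cost=1, **properties):
        """Create an object; building a room 'costs' a quota of 1."""
        if self.quota < cost:
            raise RuntimeError("quota exhausted - ask a manager for more")
        self.quota -= cost
        obj = MOOObject(name, **properties)
        self.objects.append(obj)
        return obj

student = Builder("klaus_DB", quota=3)
room = student.build("Tandem Language Centre",
                     description="Links to tasks and WWW resources")
print(student.quota)  # 2
```

A manager raising a user's quota would then simply amount to increasing the counter, which mirrors the negotiation with DU managers described above.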
Maybe the most important object from a
teacher’s point of view is the Visiting Student
Player Object (VSPO). This allows any
teacher to introduce whole classes to the
MOO, distribute user names and passwords
among them, give them rights to build or program objects, etc. In effect, it makes any
teacher a sub-manager in DU. For the Dublin-Bochum project, DU management gave me
two VSPO groups for one year that I could
freely use. By giving them consistent user names with a certain ending, for instance DB,
they become instantly recognisable to anybody as members of the same group, which is
a great advantage for ongoing meetings
between partners. They have all the rights of a
permanent character, including the right to
build or to program new objects, with the one
exception that their character only lasts for the
duration of the project.
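A rough model of what a VSPO group provides can be sketched as follows; the function and field names are invented for illustration and do not reflect DU's actual implementation:

```python
from datetime import date

# Illustrative model of Visiting Student Player Objects (VSPOs):
# temporary characters with full building rights and a fixed lifetime.

def make_vspo_group(suffix, members, expires):
    """Create temporary characters whose names share a common ending."""
    return [{"name": f"{m}_{suffix}",   # e.g. 'anna_DB' - instantly recognisable
             "can_build": True,         # same rights as permanent characters
             "expires": expires}        # character lasts only for the project
            for m in members]

group = make_vspo_group("DB", ["anna", "sean"], date(1999, 6, 30))
print([g["name"] for g in group])  # ['anna_DB', 'sean_DB']
```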
Certain entry rooms have been assigned to
VSPOs. When they connect to DU for the first
time, they see a description of either a Dublin
or Bochum dormitory created by Jackie
McPartland, the Bochum organiser, and
myself. Stepping out of the dorms, they find
themselves in the Tandem Language Centre,
where a variety of language learning tools are
at their disposal: links to tasks and language
learning resources on the WWW that I developed, a conversation robot that could be programmed for simple vocabulary or grammar
questions, notice boards with tandem information, etc. Next door they can find the Tandem
Counselling Office, where a tape recorder and
notes provide some tools for meetings by the
tandem network and tandem counselling sessions. Students are of course free to explore the
rest of the virtual campus, to go on a virtual
treasure hunt or visit neighbouring departments
to find out what else there is. Again, students’
interactivity with these rich environments provides a context for interaction: nobody can
control what rooms or objects students may
find interesting enough to work in or on.
During the last year I developed foreign
language resources on the WWW for German,
French, English and Italian (available at http://www.tcd.ie/CLCS/languageresources.html).
These annotated resources, available in several
languages, make use of the latest HTML and
JavaScript technologies to provide a framework for students that does not irritate or disorientate them; ease of use and effective navigation were the main design objectives.
Recently the resources have been enhanced by
adding collaborative tasks that can be used
side-by-side with any authentic material from
the Internet. The tasks make use of WWW, e-mail and MOO facilities and are formulated as
templates rather than specific assignments with
pre-defined resources. Students can modify
and adapt them and choose appropriate
resources themselves. Again, interface considerations played a major part in their design.
The integration into the MOO environment
makes it possible for tandem learners to collaborate on WWW resources while communicating in real time. DU is currently considering an
extension to its database to provide a multilingual interface. This means that by a simple
command users could change the whole interface to a different language. This has already
been realised on a related MOO, Open Forum,
and a Portuguese MOO, MOOsaico.
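Such a language-switching command can be sketched as a simple lookup of message keys by language; everything below (the message keys, class names and sample command) is a hypothetical illustration, not the actual DU database extension:

```python
# Hypothetical sketch: switching the whole interface language by one command.
MESSAGES = {
    "en": {"greet": "You are standing in the Tandem Language Centre."},
    "de": {"greet": "Du stehst im Tandem-Sprachzentrum."},
}

class Session:
    def __init__(self, language="en"):
        self.language = language

    def set_language(self, language):   # e.g. triggered by a '@language de' command
        if language not in MESSAGES:
            raise ValueError("unsupported language")
        self.language = language

    def show(self, key):
        """Render an interface message in the session's current language."""
        return MESSAGES[self.language][key]

s = Session()
s.set_language("de")
print(s.show("greet"))  # Du stehst im Tandem-Sprachzentrum.
```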
6. Integrating tandem MOO and
E-mail exchanges
As mentioned before, the integration of MOO
and e-mail into existing courses does not primarily involve technical considerations. The
more important problems are caused by the
very nature of synchronous and written communication and the differences in organisational frameworks between different cultures
and educational institutions (Little and Ushioda 1998).
Synchronous communication in a MOO
requires by definition that both partners are
working at the same time. Working arrangements have to be arrived at that suit both partners and are independent of classroom schedules, and they need to be tightly organised by
local co-ordinators. Synchronous communication has to be applied to tasks other than e-mail
and may work best in conjunction with it.
Within the tandem framework, this means that a
tandem pair in a MOO is able to support each
other immediately, as in a face-to-face situation.
Meaningful communication is constructed through the collaboration of both students. E-mail does not provide students with the opportunity to give instant support to their partner and negotiate meaning. E-mail, in that respect, requires a much higher capacity for autonomous language
use in formulating a message alone, revising it,
and reacting to corrections.
Another difference between synchronous
and asynchronous modes lies in the fact that
each MOO message is usually very short.
Although there are tools to transmit longer
messages, even speeches (a variety of conferences have been held solely on MOOs), most
communication is limited by the speed of typing, and sentences tend to be rather short;
there is hardly any time for editing. Although there is hardly any time for planning, revision and elaboration at the time of production, which cannot be said of e-mail and other forms of written production, the MOO environment forces each student to look at his or her output on screen straight away. Even shortly after
the production of text, the student is confronted by his or her and the partner’s output
on screen; the transience of live speech is
transferred into the written word and thus
made visible. The parameters of the environment allow for collaboration on tasks and
material on WWW sites, extensive role plays,
the practice of debates and discussions, potentially also the creation of students’ own learning environments, where they find the tools
they may consider most useful. Some of these
activities are part of our students’ project cycle
and the final (face-to-face) presentations of
them are part of the assessment.
The nature of MOO also determines that
the major medium of communication is writing, although it is live. Textual communication
has a number of advantages over audio- and
video-based conferencing, apart from technical factors mentioned above. Audio and video
conferencing may provide even more authentic material than text-based MOO, the more so as this material is in oral form and requires proficiency in pronunciation and non-verbal cues that are not yet present in MOO, or are replaced there by written cues. That also means, however, that in text-based communication the
students can focus on the elaboration of a
smaller subset of skills and develop others, for
instance pronunciation, on their own with
other systems like the language lab or with an
Erasmus student in class.
The major advantage of written communication is, as previously mentioned, the possibility for each learner to preserve the entire
communication and use it as a mixture of, on the one hand, authentic and personally meaningful material produced by a native speaker, and, on the other hand, an enormous sample of
his or her own efforts in the target language,
always under pressure to produce meaningful
discourse to keep up and develop a conversation. The preservation of discourse is as simple as saving a text file (to stay within the traditional GUI) or using a virtual tape with a
virtual tape recorder (to use the VR interface).
Text files can include movements, actions,
non-verbal cues that have been written down.
These files can be printed out or saved to
floppy disk. Students can analyse them and
use them as the basis for e-mail activities,
working for instance on unknown vocabulary
used by the native speaker, focusing on serious interruptions caused by their own output,
listing essential phrases in a particular semantic field, assessing their own performance etc.
They can share and discuss the files with their
partners or classmates or within the e-mail discussion list, and over a few months students
can assess and literally watch their own
progress. The wealth of material forms in
itself a future learning resource that can be
organised, structured, and used for reference
purposes by the students themselves.
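As a sketch of the kind of analysis a student might run over a saved log file, the following counts each partner's conversational turns; the transcript lines, names, and the turn-counting measure are all invented for illustration:

```python
import re
from collections import Counter

# Hypothetical transcript in the typical MOO 'say' format.
log = '''Anna says, "Wie war dein Wochenende?"
Sean says, "Es war gut, ich habe Fussball gespielt."
Anna [to Sean]: super!'''

# Count how much each partner contributed - one simple self-assessment measure.
turns = Counter()
for line in log.splitlines():
    match = re.match(r"(\w+)", line)  # speaker name at the start of each line
    if match:
        turns[match.group(1)] += 1

print(turns)  # Counter({'Anna': 2, 'Sean': 1})
```

The same loop could just as easily collect the native speaker's vocabulary or phrases in a given semantic field, along the lines suggested above.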
Another advantage of the medium of writing is the development of metalinguistic
awareness, which is essential for the development of learner autonomy. Compared to
speech, writing in the MOO makes live communication instantly visible, and work on recorded log files with a tandem partner can work even better towards the development of metalinguistic awareness. The special combination of target language learner and native
speaker expert provides a constant model of
input and comparison; the written form
increases the distance from the event and the
time of production.
7. Conclusion
Multi-user virtual reality is only beginning to become a major factor in education, and thus in language learning. In The
Great Good Place (Oldenberg 1989: 89), Ray
Oldenberg argues for the importance of third
places: “Third places exist on neutral ground
and serve to level their guests to a condition of
social equality. Within these places, conversation is the primary activity and the major vehicle for the display and appreciation of human
personality and individuality”. In terms of
learning, they bring together two of the major
learning concepts in recent years, the Vygotskian framework of interaction and collaboration and Papert’s constructionist framework of
interactivity within meaningful environments
and with meaningful learning tools created or
assembled by the learner himself. For language learning, multi-user VR can support
firstly the development of the autonomous
language user, because of the wealth of interactivity with the environment and the wealth
of interaction with native speakers, and secondly the development of the autonomous language learner, because of their own production of meaningful learning material and the
permanence and visibility of the written
medium.
VR applications for second language learning: K Schwienhorst
References

Bartle R. (1990) ‘Early MUD history’. Available at http://www.utopia.com/talent/lpb/muddex/bartle.txt.

Bruckman A. and Resnick M. (1995) ‘The MediaMOO Project: constructionism and professional community’. Available at http://www.gatech.edu/fac/Amy.Bruckman/papers/index.html.

Carr K. and England R. (eds.) (1995) Simulated and Virtual Realities, London: Taylor & Francis.

Halfhill T. R. (1996a) ‘Agents and avatars’, Byte Magazine, CD ROM version, February.

Halfhill T. R. (1996b) ‘GUIs get a facelift’, Byte Magazine, CD ROM version, July.

Hamit F. (1993) Virtual Reality and the Exploration of Cyberspace, Carmel, Indiana: Sams Publishing.

Little D. (1991) Learner Autonomy 1: Definitions, Issues, and Problems, Dublin: Authentik.

Little D. and Brammerts H. (eds.) (1996) A Guide to Language Learning in Tandem via the Internet, Dublin: Trinity College, Centre for Language and Communication Studies.

Little D. and Ushioda E. (1998) ‘Designing, implementing and evaluating a project in tandem language learning via e-mail’. In Blin F. and Thompson J. (eds.), Where Research and Practice Meet, Proceedings of EUROCALL '97, Dublin, 11–13 September 1997, ReCALL 10 (1), 95–101.

Mercer E. (1997) ‘BioGate System, web server package for MOO’, E-mail to [email protected], 19 August 1997.

Oldenberg R. (1989) The Great Good Place, New York: Paragon House.

Pantelidis V. (1993) ‘Virtual reality in the classroom’, Educational Technology 33, 23–27.

Papert S. (1993) The Children's Machine: Rethinking School in the Age of the Computer, New York: Basic Books.

Roehl B. (1996) ‘Shared Worlds’, VR News 5 (8), 14–19.

Rose H. and Billinghurst M. (1995) Zengo Sayu: an immersive educational environment for learning Japanese (Technical Report, electronic version available at http://www.hitl.washington.edu/publications/r-95-4.html). Seattle: Human Interface Technology Laboratory, University of Washington.

Stone A. R. (1995) The War of Desire and Technology at the Close of the Mechanical Age, Cambridge, MA: The MIT Press.

Trueman B. (1996) ‘QuickTime VR and English as a second language’, Virtual Reality in the Schools 1 (4), electronic edition available at http://150.216.8.1/vr/vr1n4.htm.

Vygotsky L. S. (1978) Mind in Society, Cambridge, Mass.: Harvard University Press.

Warschauer M. (1996) ‘Computer-assisted language learning: an introduction’. In Fotos S. (ed.), Multimedia Language Teaching, Tokyo: Logos International, 3–20 (electronic edition available at http://www.lll.hawaii.edu/markw/call.html).

Zohrab P. (1996) ‘Virtual language and culture reality (VLCR)’, Virtual Reality in the Schools 1 (4), electronic edition available at http://150.216.8.1/vr/vr1n4.htm.
Klaus Schwienhorst works as a research assistant
at the Centre for Language and Communication
Studies at Trinity College, Dublin, Ireland. His
main research interests lie in virtual reality, computer-mediated communication, and learner autonomy.
ReCALL 10:1 (1998) 127–128
Seminar on Research in CALL
David Little
Centre for Language and Communication Studies, Trinity College Dublin, Ireland
For several years Dieter Wolff has argued that
EUROCALL should do more to promote the
development of a research culture appropriate
to CALL. The organizers of the Dublin conference responded to this by inviting Dieter
and myself to co-ordinate a seminar on
research in CALL. In the event, ill health prevented Dieter from attending the conference,
so it fell to me to run the seminar on my own.
The call for papers announced that the
seminar would focus on the design of good
research projects, the use of possible research
methods, and the definition of what constitutes research in CALL/TELL. A large box
file was left at the conference desk so that
intending participants in the seminar could
submit for discussion issues, questions, problems, possible solutions, and examples of
good research practice. By the end of the second day of the conference the box was still
empty, but any fears that this betokened
lack of interest proved to be unfounded. Over
sixty conference participants attended the
seminar and engaged in lively and sustained
discussion.
In my introduction to the seminar I posed
two questions that I take to be fundamental.
First, how do we ensure that research in
CALL is possible in the first place? To outsiders this might seem to be an odd starting
point. After all, most work in CALL goes on
in universities, and universities are partly
defined by the central role that they accord to
research. It thus seems entirely natural that
language learning, and especially language
learning stimulated and supported by information systems, should be the focus of a sustained research effort. Yet in European universities there is a widespread bias against
research that concerns itself with processes of
teaching/learning, and many university language centres are specifically excluded from
the research requirements of the institutions of
which they are a part. The professional situation of many EUROCALL members is such
that they can engage in research only as a
hobby that their universities do nothing to
encourage, and in some cases actively discourage. Here it is worth noting that in a symposium on university teaching published in the
Times Higher Education Supplement of 27
June 1997, several contributors suggested that
more weight should be given to good teaching, but none of them argued that good teaching is parasitic on good research. It is also
worth noting that EU policies have tended to
confirm the traditional breach between teaching and research: designed to promote language teaching and learning, LINGUA and
related initiatives have excluded an explicit
research component, and in many cases have
also excluded the possibility of appropriate
empirical evaluation. Clearly, EUROCALL
has a role to play in re-educating policy makers, university administrations and EU decision makers.
My second introductory question was in
two parts: What should be our primary
research focus, and what varieties of research
do we need to undertake? As regards the first
part, I recalled a point made by Nina Garrett in
her opening plenary address: that in the next
few years developments in information technology will necessarily reshape second language pedagogy. If this is the case, then
research in CALL must take the process of
language learning as its starting point, though
it will need to engage with other perspectives
too – for example, human-computer interaction, artificial intelligence, computational linguistics. As for varieties of research, I suggested that we need theoretical research in
order to provide ourselves with a basic orientation; empirical research in order to explore
in a disciplined way how language learners
actually use information systems, and to what
effect; and action research in order to ensure
that our research enterprise is not a linear but a
cyclical process, leading back into the teaching/learning situation. In her opening address,
Nina Garrett noted that a characteristic of
autonomous learners is the ability to research
their own learning. One might say the same
about autonomous teachers: action research is
a sign of teacher autonomy.
At the end of my introduction, I invited
participants to call out the topics they felt the
seminar should address. They produced the
following list: research on language, contrastive studies, transfer; learner autonomy;
quantitative versus qualitative research methods, student data, evaluation methodologies,
and ‘the fallacy of objectivity’; postgraduate
programmes and the selection and guidance of
research students; safety critical issues; ways
of dealing with technological change; the publication of research; the establishment of a
EUROCALL discussion forum. At this point
the seminar divided into six groups for forty-five minutes’ discussion.
The issues and proposals brought from the
groups to the plenary feedback session that
concluded the seminar fell into three broad
categories. First, there had been discussion of
the general orientations appropriate to
research in CALL. One group suggested that
we need to draw on the theories and research
practice of other disciplines, including linguistics, psychology, social sciences, anthropology
and education; while two groups noted that it
is important to be clear what kind of research
we intend to engage in and to adopt an appropriate methodology. Research takes time,
which costs money, and one group pointed out
that without research funding it is impossible
to undertake large-scale empirical projects.
Secondly, most groups spent some time discussing the implications of a fact to which
Nina Garrett drew attention in her opening
plenary: that most CALL applications allow
researchers to gather large quantities of data
with minimum effort. One report pointed out
that it is one thing to collect data and another
to know what to do with it, and several groups
emphasized the importance of good research
design. Thirdly, the groups addressed the role
that EUROCALL might play in helping to
develop a research culture appropriate to
CALL. It was suggested that EUROCALL
should establish a register of research activities and perhaps a special interest group for
research; join forces with CALICO to found a
world-wide electronic journal for CALL
research; seek funding to sponsor research
projects run by its members; organize a summer school on research in CALL; establish an
electronic discussion forum on research in
CALL; and lobby against the exclusion of
research from EU-funded programmes such as
SOCRATES and LINGUA.
The seminar was one of the liveliest events
at EUROCALL 97, no doubt because it gave
participants an opportunity to share and debate
some of the interests and preoccupations they
had in common. It is very much to be hoped
that EUROCALL will act on at least some of
the suggestions generated by the seminar before
the 1998 conference convenes in Leuven.
ReCALL 10:1 (1998) 129–132
President’s Report
EUROCALL Annual General Meeting
September 1997
1. Introduction
It gives me great pleasure to present this
fourth President's Report on the activities of
EUROCALL over the past year. EUROCALL
has now been in existence as a formal professional association for three years.
2. Executive Committee meetings
The Executive Committee met twice in
1997–98:
March 1997, Dublin City University, Ireland
September 1997, Dublin City University,
Ireland
Full minutes of these meetings are available
from June Thompson, so I will limit this part of
my report to just a few important observations.
3. Publications
The ReCALL Journal, a fully-refereed academic publication, continues to be published by
the CTI Centre for Modern Languages, University of Hull, in association with EUROCALL. The quality of contributions to this
publication has improved steadily and it now
occupies a respected position among journals
devoted to IT and language learning and
teaching.
The ReCALL Newsletter is available on the
Web:
http://www.hull.ac.uk/cti/pubs.htm
Back numbers of the ReCALL Journal are also
available on the Web in PDF format:
http://www.hull.ac.uk/cti/pubs.htm
The EUROCALL 1996 Proceedings, edited by
János Kohn, Bernd Rüschoff and Dieter
Wolff, have been published and distributed.
Selected papers from the EUROCALL
1997 conference will be published in a double
issue of the ReCALL Journal 10 (1).
4. EUROCALL workshops
Members are reminded that a database of
EUROCALL members has been set up – all
specialists in different fields of CALL and
TELL – with a view to providing expertise in
the running of regional EUROCALL workshops. Any institution that wishes to host a
regional EUROCALL workshop can call upon
this expertise. It is expected that the EUROCALL member will provide his/her expertise
free of charge, subject to the payment of travel
and subsistence expenses by the local host.
Only one regional workshop took place
during the last year: at the University of
Timisoara, Romania, 13 March 1997. Special
thanks are due to Stephan Pohlmann for his
help in setting up this workshop and to János
Kohn and Mária Balaskó for their valuable
contributions.
5. Electronic communications
EUROCALL’s WWW site is fully operational
and can be accessed at:
http://www.hull.ac.uk/cti/eurocall.htm
EUROCALL’s electronic discussion list can
be joined by any EUROCALL member. If you
have access to email facilities you can join the
discussion list simply by sending the following message to:
[email protected]
join eurocall-members yourfirstname
yourlastname
(For “yourfirstname” etc., of course, substitute
your own!)
6. Special Interest Groups (SIGs)
Information on EUROCALL’s Special Interest
Groups can be obtained from the EUROCALL
website:
http://www.hull.ac.uk/cti/eurosig.htm
The position and role of SIGs are currently
under discussion in the Executive Committee.
It is planned to issue guidelines on setting up
and running SIGs within EUROCALL.
CAPITAL

CAPITAL is a joint SIG of EUROCALL and CALICO, devoted to using computers in the domain of pronunciation in the widest sense of the word. To join the group, individuals or institutions must be members of one of the parent organisations, CALICO or EUROCALL. The European coordinators are: Philippe Delcloque, University of Abertay Dundee; Ton Koet, Hogeschool van Amsterdam.

WELL

A proposal was made to set up a EUROCALL SIG devoted to Web Enhanced Language Learning (WELL). This proposal has now been dropped, as a Web Enhanced Language Learning project has now been awarded funding by the Higher Education Funding Council for England (HEFCE) under the FDTL programme. EUROCALL does not wish to compete with this project but to offer its help and collaboration. WELL aims to provide access to high-quality Web resources in 12 languages, selected and described by subject experts, plus information and examples on how to use them for teaching and learning. Its activities will be of interest to all teachers and students of languages regardless of sector and location. The WELL website is now accessible at:

http://www.well.ac.uk/

7. Links with other associations and organisations

EUROCALL maintains close links with a number of professional associations and organisations that promote technology enhanced language learning. We welcome the attendance of representatives at our conferences and participation in our activities, and we endeavour to be represented at their conferences and to collaborate in other ways. These are some of the key associations, organisations, events and projects with which EUROCALL members have been associated during the last year. This list is not intended to be all-inclusive, and I apologise for any omissions.

CALICO

We are pleased to welcome a number of CALICO members to EUROCALL 97. Cooperation between CALICO and EUROCALL is becoming closer. Both associations offer reductions in their conference fees to members of either association. EUROCALL members Philippe Delcloque, Matthew Fox and Jenny Parsons attended the CALICO 97 conference at the United States Military Academy, West Point, New York. A report on CALICO 97 was published in the ReCALL Journal 9: 2.
IALL & FLEAT III
We are pleased to welcome Nina Garrett, President of IALL, as our opening Keynote
Speaker at EUROCALL 97. I was delighted to
be able to attend the FLEAT III Conference,
University of Victoria, Canada, in August
1997. This was organised jointly by IALL and
LLA (Language Laboratory Association of
Japan). June Thompson and Jo Porritt managed a stand promoting EUROCALL, and
June Thompson and I gave a joint paper on
“National and international cooperation”. I
was also invited to take part in a panel discussion entitled “Futurewatch: language learning
and technology in a global context”. A report
on FLEAT III was published in the ReCALL
Journal 9: 2.
WorldCALL 98
Close contact has been maintained with
ATELL, Australia. The University of Melbourne will host WorldCALL 98, 13–17 July
1998, under the auspices of ATELL, the Australian Association for Technology Enhanced
Language Learning.
The WorldCALL organiser is June Gassin,
who is also present at EUROCALL 97. We
extend our warm welcome to her.
Graham Chesters, June Thompson and I
represent EUROCALL on the WorldCALL 98
Steering Committee, and I am also a member
of the WorldCALL 98 Scholarships Committee. I am delighted to have been invited to give
a keynote paper at WorldCALL 98 and to participate in a plenary panel event. I know that
several EUROCALL members are planning to
attend WorldCALL 98, so we will be well-represented.
Further information about WorldCALL can be found at:
http://adhocalypse.arts.unimelb.edu.au/
~hlc/worldcall/welcome.html
Council of Europe
Several EUROCALL members have recently
been involved in the Council of Europe's
activities, particularly those promoting new
technologies and language learning. Bernd
Rüschoff is one of the joint editors of a recent
Council of Europe publication: Korsvold A-K.
and Rüschoff B. (1997) (eds.) New technologies in language learning and teaching, Council of Europe, Strasbourg, France. Bernd
Rüschoff, Lis Kornum and I have also been
involved as ‘animateurs’ in the Council of
Europe’s series of ‘New Style Workshops’:
Workshops 7A, 7B, 9A, 9B. Lis Kornum
attended the Council of Europe Conference in
Graz, April 1997.
European Language Council and the
Thematic Network Project (Languages)
These two linked projects, both of which are
funded under the SOCRATES Programme of
the Commission of the European Communities, include sub-groups on New Technologies
and Language Learning. Joseph Rézeau and I
are members of the European Language Council’s Policy Group on
New Technologies and Language Learning
and of the Scientific Committee of the Thematic Network Project’s sub-group on New
Technologies and Language Learning. I am
also a member of the Board of the European
Language Council.
Further information about these projects
can be found at:
http://userpage.fu-berlin.de/~elc
Language Learning and Technology
Journal
I am a member of the Editorial Board of this
new journal, which is published exclusively
on the World Wide Web:
http://polyglot.cal.msu.edu/llt/
The Editor is Mark Warschauer, University of
Hawaii.
8. Membership and recruitment
Graham Chesters, EUROCALL Treasurer, is
submitting his report, which includes information on current membership and membership
fees.
Conferences will in future be restricted to
EUROCALL members. In effect, this means
that non-members will pay higher conference fees than members – which has always
been the case – but the higher conference fee
will now include the annual EUROCALL
membership fee. This has the advantage of
enabling us to transfer membership fees collected in this way direct to our current
account.
Membership figures show a net increase of
20 over last year's figures, but we must not be
complacent. Membership subscriptions are our
main source of income. As I have indicated in
my previous reports, please publicise EUROCALL at seminars, workshops and conferences,
among your colleagues at work – in short,
wherever you can. Publicity leaflets can be
made available to you if you can make use of
them, and attractive posters have been printed.
9. Sponsorship
As I have indicated in my previous reports,
EUROCALL desperately needs sponsorship as
membership fees alone are insufficient to
enable EUROCALL to embark upon exciting
ventures. I therefore urge all members to pass
on to the Executive Committee the names of
local firms that might be willing to sponsor
EUROCALL. When seeking sponsorship it is
important – especially when contacting a large
company – that we have the name of an individual contact in the company, preferably
someone who is known personally to a
EUROCALL member. Once again, I have had
no reactions to my appeal last year.
10. Thanks
Many thanks are due once again this year. It is
difficult to single out every EUROCALL
member who has made a valuable contribution
to the success of our association, but my special thanks are due to the Executive Committee for being diligent and supportive throughout the year, especially June Thompson and
Graham Chesters for handling the increasing
burden of administration.
I wish to offer my personal thanks to
Françoise Blin and Jane Fahy as the main
organisers of EUROCALL 97, not forgetting
their enormous team of helpers. I also wish to
thank Dr Daniel O'Hare, President of DCU,
for offering to host EUROCALL 97 and providing such a friendly environment.
Graham Davies
President, EUROCALL
September 1997
ReCALL 10:1 (1998) 133–135
CILT Research Forum
6–7 January at Homerton College, Cambridge
Information technology: the pedagogical implications for language teaching and learning
This Research Forum brought over 100 participants from all levels of education to Cambridge, some from as far afield as Latvia, Sri
Lanka and Hawaii. The lively discussions
which soon developed showed the tremendous
interest in the potential of IT for language
learning and teaching but also the concern that
research and developments in this field –
including teaching, research into new methods
of teaching and the development of new materials – are not always taken seriously by colleagues in the long-established areas of research.
The keynote speech was delivered by Mark
Warschauer from the National Foreign Language Resource Centre of the University of
Hawaii. In ‘CALL versus electronic literacy:
reconceiving technology in the language classroom’, he showed that CALL programs went
through a phase of ‘drill and kill’ and were not
integrated into the work taking place inside
and outside the classroom. Even more seriously,
the ‘real’ language was conceived as taking
place away from these exercises. With the
advent of Hypertext and the Internet, reading
time is spent increasingly on the computer.
Computer-aided communication is written,
rapid and time/place independent. Mark
Warschauer maintains that the impact of the
new technologies on language learning and
teaching will be more far-reaching than the
printing-press and will socialize people into
communication groups. The computer is able
to bring an authentic environment into the
classroom and will stress on-line reading and
writing and the interpretation of information
as the prevalent skills.
The concern about the validity of IT projects as compared to the more traditional
research was highlighted in another plenary
session, ‘The Research Assessment Exercise
and its message in relation to IT and foreign
languages’ delivered by Professor Richard
Towell from the University of Salford, a former member of a language RAE panel and the
chair designate of UCML. The speech outlined
the criteria for accepted research as:
• Research builds on existing knowledge.
• Research takes place within a definable theoretical research framework.
• Research follows a recognisable methodology which allows meaningful statements to be made.
• Good research moves the field forward by providing replicable, verifiable, generalisable results.
New teaching materials and approaches per se
are not readily accepted as research; instead
projects need to:
• be embedded in, for example, the theory of second language acquisition;
• clearly state the method of investigation;
• analyse the results in such a way that general principles can be deduced.
Some recognition by the new researchers of
the validity of these criteria and by RAE panels of the value of this new form of research is
therefore essential for the next round of the
RAE exercise.
There was a diverse range of optional sessions and unfortunately, it is only ever possible
to attend a selection, but the ones I managed to
attend provided much food for thought.
The first one was a report on the summative evaluation of TELL Consortium materials, currently in progress. Its aims are:
• to find out the context in which materials are used;
• to establish how materials work in different environments;
• to get feedback from staff and students and compare this with the learning outcomes which the packages were designed to assist.
The packages chosen for evaluation were
Encounters, GramEx and the TransIt Tiger
Authoring Shell, based on case studies at three
sites – the University of Warwick, Nene College Northampton and the University of Hull.
The tools used in the investigation were questionnaires, log sheets and semi-structured
interviews. At issue is how TELL products can
be integrated into modules and what bearing
this could have on pedagogy, learning and
teaching methods and the management of
resources. The results will be published in the
near future.
Nadine Laporte from the University of
Wales, Bangor, School of Psychology,
reported on research investigating the effectiveness of a spell- and grammar-checker for
learners of Welsh as a second language. One
group of learners had access to a version
which gave them immediate grammatical
feedback while the other version provided
only basic bi-lingual dictionaries. The results
are as yet not totally conclusive.
William Haworth from Liverpool John
Moores University is leading an FDTL-funded
project of a consortium of universities, with
the objective of spreading good practice in the
use of Web-based resources and the exploitation of these resources for the learning and
teaching of languages. As one important issue
he discussed staff and student development. A
very uneven picture emerged: while many
institutions have a positive attitude to web-based learning in the humanities, the provision
of equipment especially for staff lags far
behind those apparently positive attitudes.
Students in institutions are often better
equipped than staff, and staff and student
training takes place on a DIY basis in the
majority of cases. The conclusion drawn by
William Haworth so far is that at present the
potential of this new technology is not yet
fully recognized and exploited.
At a late hour and after the excellent conference dinner, Heather Rendall, an Independent Advisory Teacher from Worcestershire,
took on a difficult task with great
enthusiasm, humour and expertise. She
reported on her findings regarding ‘The Effectiveness of CALL in Secondary Schools’. She
argued with great conviction that the understanding of structures, both of the mother
tongue and those of foreign languages, is the
key to storing meaningful patterns. She
pointed out very clearly how native English
speakers have to switch, constantly and confusingly, from invariables like ‘the’ and ‘you’
to variables like tu, vous, le, la and even more
variables when learning German.
Day 2 offered another set of options of
which I attended Elspeth Broady’s and Alison
Dickens’ (University of Brighton Language
Centre) talk on ‘Using electronic media to
support advanced grammar study in a self-access setting’. This talk was in a sense a follow-on from where we had left off with
Heather Rendall the previous evening. The
paper examined self-access computer-based
grammar support for students of French in
their first year at university. One important
finding of the investigation is the
importance of researching learner attitudes
and perceptions. For those of us who are keen
to promote self-access and independent study,
it was essential to note that according to this
study’s findings
• resources alone are not sufficient;
• students are initially ambivalent in their attitude towards working independently;
• self-access material is not used regularly unless students are directed – at least during the first steps towards autonomy.
However, and this is positive, students were
increasingly encouraged to find things out for
themselves, e.g. to deduce their own grammatical rules which they are meant to check then
against the ‘official’ explanation in the grammar book.
The Research Forum was summed up very
comprehensively by Professor Chris Brumfit
from the University of Southampton.
This was in the first instance a conference
reporting on work in progress and therefore
few definite results could be provided as yet.
However, the major issue seems to be how to
give all these projects the theoretical framework and the generally acceptable research
methodology required so that the next RAE
exercise can find it in its power to acknowledge the valuable work done for the promotion
of language learning and teaching through IT.
Annegret Jamieson
University of Hull
Software Review
PROF (Practical Revision of French)
Minimum system requirements: PC 386 or higher with 4MB RAM (8MB recommended), SVGA monitor,
Windows 3.1 (95 compatible), MS-DOS version 5.0, 8MB available hard disk space. Can be installed to run
over most networks, tested on Novell NetWare v3 and 4.
Available from: Institute of Computer Based Learning, Queen’s University, Belfast.
Price: £40.00 to UK HE institutions. £65.00 elsewhere
Description of software / intended
use
PROF is a CALL grammar package written
primarily for level one undergraduate learners
of French but it may also be useful at post
GCSE / Advanced level. The package is
intended to enable the learner to revise and
consolidate their knowledge of French grammar. It is divided into twelve chapters roughly
following the textbook Le français en faculté.
The structure of the chapters is straightforward; each contains an overview, a brief
description of the grammar point to be revised
followed by a dialogue, a longer grammar presentation and sets of exercises.
Documentation / ease of use / screen
layout
The PROF package is quick and easy to install
from two floppy disks, with clear on-screen
instructions. The accompanying literature is
brief but informative with clearly detailed
instructions for installation and set-up. Once
installed, the user clicks the PROF icon and
enters the initial menu screen. This is a grey,
and somewhat unappealing screen entitled
Practical Revision of French followed by a list
of grammatical areas. More could be done to
improve the user-friendliness of this initial
interface. Page links are not clearly highlighted for novice users and it is only after a
succession of clicks on The Present Tense that
one stumbles upon sub-headings in order to
move through to the first chapter. The opening
screen for this initial chapter is much more
attractive and up-to-date. It clearly presents
the separate sections of the chapter, and once
one has established how to use the links in the
frame to the right of the screen, navigation
through the chapter does not present too many
difficulties. Each of the chapters is organised
into separate layers and although the package
is designed to be used in a linear fashion, it is
quite easy to move around in it and enter and
exit the various chapters and sub-sections. In
ReCALL
Software review
many cases, pages are multi-layered so a click
will add additional information at the learner’s
own pace. There is also an interesting use of
animated words to illustrate grammar points
such as endings and the dialogues in many
cases are accompanied by a cartoon image.
Initial impressions
Having read a couple of articles describing the package as a 'new way of revising
French grammar', I was very keen to have a look, since I have seen nothing as yet for
learners of French at post-GCSE level which betters GramEx for straightforward grammar
consolidation purposes. The stated aim of PROF is "to provide students with the
opportunity of revising and practising (a variety of identified grammatical concepts)
in a lively interactive way that will reinforce and improve linguistic accuracy". The
choice of grammar topics covered by PROF is appropriate to this aim for students at
A level / undergraduate level one. However, I must admit to being initially somewhat
disappointed with the 'dated' look of the static cartoon pictures, which reminded me
very much of textbooks used in the 1970s (e.g. the archetypal French detective in a hat
in chapter 7). My own experience of students today makes me think they may find this
aspect of the package rather unsophisticated.
Pedagogical content
Dialogues
The dialogues within each chapter follow the adventures of a young student, Robert, as
he spends a month in France. They gradually evolve into a rather esoteric detective
story which ends with a mixture of Islamic terrorists (culturally appropriate?) and
French secret agents! There are some nice touches of humour in the dialogue, although a
couple of the more stereotypical references did jar somewhat, e.g. une Porsche rouge
conduite par une belle blonde ('a red Porsche driven by a beautiful blonde'). This
aspect of the package could be developed in many ways. It seems rather inauthentic for
the learner to read scripted dialogue on screen. The dialogues could perhaps be
recorded, with the option of viewing them. Technology and time permitting, perhaps the
static cartoon images could be animated or video clips used to make the story more
real. I liked the fact that the writers had chosen a student for the principal
character: why not set the story on his year abroad? I particularly liked the use of a
map to chart Robert's journey, providing some cultural / geographical input.
From a pedagogical perspective, my main concern is the lack of real integration between
the dialogues and the aims of the package. In the grammar explanations which follow
each dialogue, examples are taken from the dialogue out of context. I feel the
dialogues need to link more transparently to the grammar point and explanation. For
example, the learner could be required to identify and highlight all examples of the
grammar point in the dialogue, or, when they go through the explanation, a highlighted
link could pull them back to the grammar point in context. I wonder whether students
would feel they could relate to and work with the dialogues enough to substantiate the
claim of 'interactivity' for the package. Moreover, most of the dialogues are rather
long, and it is therefore vital that they bring a pedagogical plus to warrant their
inclusion. This is a concern also expressed by the writers of PROF, who are reviewing
the relevance of the dialogues within the package. Their own extensive evaluation
initially revealed that only 10% of students said they found the dialogues useful /
very useful. There are some excellent features here, though, which need to be built on.
I like the fact that the authors have deliberately chosen to include grammar items
already covered in follow-up dialogues, e.g. the use of depuis in the chapter on the
future tense. The hypertext-linked glossary is good, with appropriate vocabulary chosen
for translation into English, though perhaps at this level the vocabulary could be
explained in French?
Grammar presentations
Following each dialogue is an unashamedly traditional presentation of the grammar point
in English. The level of the presentation is challenging but appropriate for revision
purposes at this level. However, some thought could be given to the rather heavy
emphasis on formal grammatical terminology, e.g. 'particle', 'elision', 'past
participle', 'direct object', 'transitively', etc. The terms are not always fully
explained, and learners may often have little experience of this terminology. A more
detailed explanation or a linked glossary of grammatical terms would be useful.
More importantly, perhaps some more thought should be given to what the learner is
expected to do here. Not only are they required to absorb the grammar rule, but also
the associated metalanguage in English. Is this the aim of the package? A nice touch
was the use of graphics layering one onto the next in the explanations, e.g. for
measurements. I was interested to learn, however, that students did not always respond
well to the moving words, e.g. those used to explain the formation of past participle
endings: 45% said they found them irritating.
Exercises
The exercise section of each chapter is comprehensive and tests each point presented
with a variety of strategies, mainly gap-filling, true/false and multiple choice. The
explanations to the exercises are detailed, but it would also be helpful for the
learner to have an example of what is required before attempting the questions. One is
normally given two attempts at a question, and feedback is supplied in English: either
"try again" or "incorrect – the answer is ...". It is a shame that no explanation of
the correct answer is given, nor a link put back into the presentation of the point, as
this would support any learner who is having difficulty as well as enhance the claim of
interactivity.
As with other traditional CALL packages of this nature, one is required to type in an
absolutely correct answer; even a missing apostrophe is classed as incorrect. This does
have the advantage of requiring absolute accuracy from the learner, although it may be
a little demoralising.
Overall value / conclusion
Overall, I found that PROF offers an interesting approach to an important need at this
level of language learning, the only other package I know of which addresses this need
being GramEx. There are several areas of PROF which could be re-appraised, in
particular the inclusion of the dialogues and the explanations of grammatical
terminology. The authors appear to have included the dialogues in an attempt to
contextualise the grammatical points covered. However, the links between exercises and
dialogues are tenuous, and there are problems with the dialogues themselves in terms of
presentation, length and content. Indeed, one could ask whether students actually want
or need their grammar practice to be integrated into a 'fun' environment.
Within the institution which has piloted it, the package is currently used during class
contact time, with access to human feedback (in fact, 69% of students surveyed said
they asked for further grammar explanations during class when using the package).
Careful thought needs to be given to how the package could be developed and made more
transparent in order to function successfully in a self-access environment. It is
encouraging that many of the students surveyed enjoyed and preferred this way of
revising grammar. Initial evaluation also suggests they made some improvement in
grammar test scores. As the authors say, "PROF clearly justifies its title since many
learners felt that, though they may have learned little from it, it gave them the
opportunity to revise a great deal of grammar".
Sheridan Graham
The Nottingham Trent University
References
Hickman P. (1996) GramEx French. TELL Consortium CALL package, available through Hodder & Stoughton.
Tame P. (1996) "PROF: a new way of revising French grammar", Active Learning 5 (CTISS Publications).
Tame P. (1997) "PROF ('Practical Revision of French')", ReCALL Newsletter 10.
Technology Enhanced Language Learning
Multimedia CD-ROMs down in price
The TELL Consortium language learning CD-ROMs are now available at the
lower price of £49.95 plus VAT
Encounters
(French, German, Italian, Spanish and Portuguese)
This range of programs on CD-ROM is designed primarily for
the non-specialist language learner. The materials may be used
independently or integrated into existing course materials. Each
CD-ROM (one per language) contains 20 or more dialogues,
divided between situation-specific modules. Each dialogue has
contextualised help and support and provides practice in the
language used in basic situations such as booking a hotel room,
asking the way or ordering a meal. Also included is a more advanced module, which is a
situation-based activity including speaking and comprehension practice with revision
and testing exercises.
Ça sonne français
An introduction to the phonetics of French from which learners
can acquire a deeper understanding and mastery of French
pronunciation. The program covers the classic topic areas (e.g.
rounded vowels, stress, rhythmic groups), based on carefully
selected video clips, and presents broad phonetic transcription. It
also provides listening comprehension and a facility for voice
recording so that learners can judge their own progress.
InterprIT (Italian)
A self-access interpreting program for learners following
advanced interpreting courses. The program contains eight
Liaison Interpreting modules consisting of interviews between
an English and an Italian person on contemporary topics, which
users are asked to interpret. There are also two modules
providing practice in Consecutive Interpreting, which include
use of an Interpreter's Notepad on screen.
All seven CD-ROMs exploit the speed and visual attractiveness of multimedia.
They require a 486 or Pentium multimedia PC with CD-ROM drive, sound card,
speakers and microphone. A minimum of 8 MB RAM is recommended.
Networking is not recommended. Packs of 10 CD-ROMs and manuals can be
purchased at the reduced price of £350 plus VAT.
Diary
9 May 1998, Manchester, UK
Natural Language Processing in Computer-Assisted Language Learning
Information: Marie-Josée Hamel, Dept of Language Engineering, UMIST, PO Box 88,
Manchester M60 1QD, UK
Tel: +44 161 200 3100
Email: [email protected]
http://www.ccl.umist.ac.uk/whatsnew/nlpreg.html

25-27 May 1998, Stockholm, Sweden
ESCA Workshop on Speech Technology in Language Learning (STiLL)
Information: Workshop Secretariat, STiLL, KTH (Royal Institute of Technology)
Tel: +46 8 790 7854, Fax: +46 8 790 7854
Email: [email protected]
http://www.speech.kth.se/still/

7-9 June 1998, Varna, Bulgaria
Multimedia and Foreign Language Training
Information: Dr Milko Todorov Marinov, Dept of Computer Systems, University of Rousse,
8 Studentska Str, 7017 Rousse, Bulgaria
Tel: +359 82 44 507 356, Fax: +359 82 486 379
Email: [email protected]

2-3 July 1998, Hull, UK
Workshop on Advising for Language Learning
Information: Elizabeth Bradley, SMILE Secretary, Language Institute, University of
Hull, Hull HU6 7RX, UK
Tel: +44 (0)1482 465862/466172, Fax: +44 (0)1482 466180
Email: [email protected]

13-17 July 1998, Melbourne, Australia
WORLDCALL: Call to Creativity
Information: June Gassin, Horwood Language Centre, University of Melbourne, Parkville,
Victoria 3052, Australia
Email: [email protected]

24-27 July 1998, Oxford, UK
TALC 98: Teaching and Language Corpora
Information: Email: [email protected]
http://users.ox.ac.uk/~talc98/

4-6 September 1998, Norwich, UK
AFLS 98 (Association for French Language Studies)
Information: Dr Marie-Madeleine Kenning, School of Modern Languages and European
Studies, University of East Anglia, Norwich NR4 7TJ, UK
Tel: +44 1603 592152
Email: [email protected]

10-12 September 1998, Leuven, Belgium
EUROCALL98: From Classroom Teaching to Worldwide Learning
(see p139 for details)

17-19 September 1998, Bergamo, Italy
5th CercleS International Conference: Integration through Innovation
Information: Maurizio Gotti, Centro Linguistico d'Ateneo, Università di Bergamo, Via
Salvecchio 19, 24129 Bergamo, Italy
Tel: +39 35 27 72 16, Fax: +39 35 27 72 27
Email: [email protected]

21-23 September 1998, Oxford, UK
ALT-C 98
Information: ALT, Dept of Continuing Education, University of Oxford, 1 Wellington
Square, Oxford OX1 2JA, UK
Tel: +44 1865 270360
Email: [email protected]
http://www.tall.ox.ac.uk/alt/alt-c98/

14-17 October 1998, Beijing, China
ICCE98: Global Education on the Net
Information: ICCE98 Secretariat, Computer Center, Northern Jiaotong University, 100044
Beijing, P.R. China
Email: [email protected]
http://www.njtu.edu.cn/icce98/

15-16 October 1998, Berlin, Germany
Languages & the Media
Information: ICEF - Languages & the Media, Niebuhrstr. 69A, 10629 Berlin, Germany
Fax: +49 30 324 9833 or +49 228 211944

16-18 September 1999, Besançon, France
EUROCALL99
Information: Thierry Chanier, Laboratoire d'Informatique de Besançon, Université de
Franche-Comté, France
Tel: +33 3 81 58 84 70, Fax: +33 3 81 66 64 50
Email: [email protected]
http://lib.univ-fcomte.fr/RECHERCHE/P7/EUROCALL/EUROCALLE.html
ReCALL: Notes for Contributors
ReCALL, the journal of CTI Modern Languages in association with EUROCALL, seeks to fulfil the stated aims of
EUROCALL as a whole, which are to advance education by:
(a) promoting the use of foreign languages within Europe;
(b) providing a European focus for the promulgation of innovative research, development
and practice in the area of computer-assisted language learning and technology-enhanced
language learning in education and training;
(c) enhancing the quality, diffusion and cost-effectiveness of relevant language
learning materials.
All submissions are refereed. They are accepted for consideration on the assumption that they have not been previously published and are not currently being submitted to any other journal.
Typical subjects for submissions include theoretical debate on language learning strategies and their influence on courseware design, practical applications at developmental stage, evaluative studies of courseware use in the teaching and learning process, assessment of the potential of technological advances in the delivery of language learning materials, exploitation of on-line information systems, and discussions of policy and strategy at institutional and discipline levels. Survey papers are welcome provided that they are timely, up-to-date and well-structured.
The language of ReCALL is normally English. However, papers in French or German will be considered.
Authors should be aware that editorial licence may be taken to improve the readability of an article.
Three free copies of the journal are sent to contributors in lieu of offprints.
Copyright is assigned to the publisher, but the right to reproduce the contribution is granted to the author(s), provided that the contribution is not offered for sale. The publisher reserves the right to publish the contribution electronically via the World Wide Web.
Contributions may be submitted in any of the following formats:
* Hard copy: preferably laser-printer output.
* On 3.5" disk in Word for Windows 2.0 format or higher (please state version).
* On 3.5" disk in ASCII format.
* On 3.5" disk in Rich Text Format (RTF).
Please label your disk with your name, date, the titles of files stored on the disk and the name of the word-processor
you have used.
Papers may also be submitted in MIME-encoded format by email.
Texts should not exceed 5,000 words: line spacing 1.5 with a point size of 12 (please indicate word-count at the
end of your text). The text should be left-aligned only.
Make sure that graphics and screen dumps are also available on disk and are of sufficient size and quality to be
reproduced in a reduced format. Please indicate which graphics package you have used to produce them.
Your text should be laid out as follows:
Title of article: Do not use capital letters, except at the beginning of the title and for proper names. In languages
other than English, use standard conventions.
Author: First name, last name, institution.
Biographical information: Brief, no more than 50 words.
Abstract: No more than 100 words.
Text of article
References
If your article includes numbered sections and paragraphs, use the following system:
1.
1.1.
1.2.
1.3.
2.
2.1.
2.2.
2.3.
etc.
Use bulleted lists within the above system, or i., ii., iii., then a., b., c. No brackets.
Abbreviations
Don't use full stops in abbreviations: ICI, OBE not I.C.I., O.B.E.
When referring to the title of an organisation by its initials, first spell out the title in full followed by the abbreviation in
brackets, thus: Imperial Chemical Industries (ICI). Thereafter refer to ICI.
Underlining
Don't underline. Use italics or bold for emphasis.
Bibliographical referencing within the article
... as was stated in a recent study (Davies 1995:65) ...
... see also Ahmad et al. (1985:123-127) ...
“... quotation ...” (Davies 1985:15)
Please avoid using footnotes.
References at end of the article
Please pay particular attention to the use of full stops after initials and the use of commas, colons and brackets. Above all, be consistent. Your text will be returned for re-editing if you do not adhere to the prescribed system.
i. Single-author books
Davies G. D. (1985) Talking BASIC: an introduction to BASIC programming for users of language, Eastbourne: Cassell.
ii. Dual-author books
Davies G. D. & Higgins J. J. (1985) Using computers in language learning: a teacher’s guide, London: CILT.
iii. Multiple-author books
Eck A., Legenhausen L. & Wolff D. (1995) Telekommunikation im Fremdsprachenunterricht, Bochum: AKS-Verlag.
iv. Edited books
Rüschoff B. & Wolff D. (eds.) (1996) Technology-enhanced language learning in theory and practice: EUROCALL 94: Proceedings, Szombathely: Berzsenyi Dániel College.
v. Articles in journals, magazines, etc.
Little D. (1994) "Learner autonomy: a theoretical construct and its practical application", Die neueren Sprachen 93 (5), 430-442.
vi. Articles in books
Johns T. (1991) "Data-driven learning and the revival of grammar". In Savolainen H. & Telenius J. (eds.), EUROCALL 91: Proceedings, Helsinki: Helsinki School of Economics, 12-22.
Contact address
Please address your manuscript, and any queries, to:
June Thompson
Editor, ReCALL
CTI Centre for Modern Languages, University of Hull
Hull HU6 7RX, UK. Email: [email protected] or [email protected]