Ed-Tech Research

Ed-tech research in the great onlining

(The “great onlining” is a title inspired by the posts at this blog.)

A unique experiment in distance education is coming to an end. After several weeks in which all of education moved into distance learning delivered via the internet, most schoolchildren in Saint Petersburg will today be looking forward to an early start to their summer holidays. 16-year-olds who are preparing for the «ОГЭ» school examinations and 18-year-olds preparing for the «ЕГЭ» will be offered a new series of “consultations” on YouTube, starting with mathematics at 16.00. For the rest, it is not yet known whether schools will re-open in May, but even if they do, all schooling will be optional, and distance learning will mostly stop. The main reasons for this decision (according to the Fontanka website) are:

  • parents may be worried about sending their children into schools for fear of infection
  • families which have already moved out of the city to their summer dachas may not have internet access or equipment available for studying online.

University students are likely to continue, as far as possible, with the ordinary program of studies for their degree courses – but delivered in a variety of remote formats.

What should the field of educational research be doing in response to this enormous experiment, which is simultaneously being run in many other cities across Russia and around the world?

Educational research in the field of ed-tech often attempts to evaluate particular new technologies and tries to decide whether they have “potential”. Of course, before the idea of “Computer Based Learning”, ambitious claims were made for gramophone, cinema, radio and TV. Typically, research concludes that YES there is “potential” but in practice this is hard to achieve because of a number of factors which arose from the context of the particular lessons being studied.

This is how, in 1993, Diana Laurillard summarised the results of all existing research (Laurillard, D. (1994). How can learning technologies improve learning. Law Technology Journal, 3(2), 46–49). At the time of writing her paper, the innovative technology in question was interactive video, distributed on CD-ROM. But that detail is less important than the overall argument:

all evaluation studies are able to provide consistent evidence of the ways in which the context fails

This doesn’t mean that an educational researcher should ignore the fact that the context of the Great Onlining almost guarantees failure. What it does mean is that, if we start from this assumption, a number of more concrete tasks and questions arise for the researcher.

I am interested in the following issues:

  • how can teachers make use of the global nature of this sudden event?
  • what are the steps that a teacher can take autonomously, within the constraints of their particular context?
  • do we expect any permanent changes in education, and if so, what initiatives should begin now, to prepare for them?
EAP Knowledgebase

corpus reading 3

Krishnamurthy, R., & Kosem, I. (2007). Issues in creating a corpus for EAP pedagogy and research. Journal of English for Academic Purposes, 6(4), 356–373.

This article traces the increasing use of corpora in EAP classrooms and finishes by describing what the most useful kind of corpus for this kind of teaching would be like. Both the interface of the tool for analysis and the structure of the corpus itself are considered.

The motivations for using corpus-based approaches:

  1. DDL (data-driven learning)
  2. discovery rather than repetition of standard examples
  3. learner autonomy

Academic discourse has always been included in more general corpora.

Issues in making an academic corpus.

Should it be just general, or divided in some way by subject? The question is how to make the classifications of discipline.

Then the classifications of genre are also arguable. The written ones may come from university assignment tasks or IELTS writing questions. A classification of academic speech events is also provided (quoting the MICASE Manual).

Problems of processing the texts. Some corpora place restrictions on the types of text that can be submitted. Some strip out parts – references and quotations – but this may make the text less authentic.

Classification of level. Some corpora take only staff writing and PhD theses, others only writing from fourth-year students.

The collection of lower-grade texts would be useful for teachers looking for problem areas to address.

Mentions the Sketch Engine software as “more research-oriented than pedagogic”.

In the early days all corpus software was called a “concordancer”, and concordancers are well suited to the classroom because concordancing is a simple function. The current tools require a complicated query language.
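To make it concrete how simple that function is, here is a minimal sketch of a keyword-in-context (KWIC) concordancer. The sample text, window sizes and punctuation-stripping are my own invented illustrations, not taken from any of the tools the article discusses:

```python
# A toy KWIC concordancer: show every occurrence of a search word
# with a few words of left and right context, aligned in columns.

def concordance(text, keyword, width=30):
    """Return keyword-in-context (KWIC) lines for every match."""
    tokens = text.split()
    lines = []
    for i, token in enumerate(tokens):
        # crude normalisation: lowercase and strip trailing punctuation
        if token.lower().strip('.,;:!?"()') == keyword.lower():
            left = ' '.join(tokens[max(0, i - 5):i])
            right = ' '.join(tokens[i + 1:i + 6])
            lines.append(f"{left[-width:]:>{width}} [{token}] {right[:width]}")
    return lines

sample = ("The results suggest that the method is effective. "
          "However, the results were not replicated. "
          "These results may depend on context.")

for line in concordance(sample, "results"):
    print(line)
```

A classroom tool does little more than this at its core; the complexity in modern software comes from corpus size, annotation, and the query languages mentioned above.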

Gives a favourable mention to the BYU interface “Mark Davies’s View interface”.

The writers would like one big corpus covering all the categories. Lots of different disciplines and lots of different levels.

EAP Knowledgebase

corpus – extra reading 2

Gilquin, G., Granger, S., & Paquot, M. (2007). Learner corpora: The missing link in EAP pedagogy. Journal of English for Academic Purposes, 6(4), 319–335.

About learner corpora, especially how one collection was used to create the “Macmillan English Dictionary for Advanced Learners” with its special section on academic writing.

Writers note that most research has been about corpora of native speaker English. The aim of the article is to demonstrate how a corpus of student writing can be helpful for dealing with writing problems.

Cites Flowerdew 2002 with four distinct research paradigms in EAP:

  1. Swalesian genre analysis
  2. contrastive rhetoric
  3. ethnographic approaches
  4. corpus-based analysis

The first three of these focus on the context or situation of the communication. Corpus based analysis is distinctive because it allows much more detailed information about language structures. The first three all deal with things that are also problems for native speaker novices at academic writing: “pragmatic appropriacy” and “discourse patterns”.

Mentions different software for work with corpora, including Sketch Engine. Such tools helped discover that “academic discourse is highly conventionalised”.

CIA – “contrastive interlanguage analysis” – is useful in showing L2 differences in learners with different L1s, or comparisons between learner language and “natives”, who are supposed not to be learners in the same sense. (There is some research on corpora of “novice native” writers, but there is not so much overlap with the problems of non-natives.)

Examples of the kind of things that can be discovered by looking at learner corpora:

Learners are familiar with key EAP verbs but not with their lexico-grammatical patterning. Modal verbs and connectors are problem areas.

Interesting note about Coxhead – the list she produced excluded the 2,000 most commonly used words. These words can be used differently in academic English and so could be usefully studied too.

Honourable mention for Milton's 1998 WordPilot, which was based on learner English in Hong Kong – it's actually a CALL application rather than a coursebook (traditional resources are more conservative).

Why is it hard to make materials for academic writing based on corpora?

Research shows that the discourses vary according to the discipline, but in universities students tend to get EAP for general academic purposes – not so specific.

Learners need to be trained to use corpora.

Problems can be L1 specific which also makes for less generalisable information.

There may not be a clear link between the results of corpus research and what actually ends up being taught. There are other factors: learners’ needs, teaching objectives and teachability.

MLD – monolingual learners’ dictionary. In the Macmillan project 12 rhetorical or organisational functions are identified.

There’s an argument that using a corpus of expert writing is not ideal for teaching language learners. Maybe a corpus of writing by novice native-speaker students would be more appropriate? But these could also provide not very good models!

The Macmillan project used the “International Corpus of Learner English”, with 6,085 essays written by learners with 16 different mother tongues. The essays are untimed and written with the help of reference tools.

The project goes for a compromise about L1 influence: “Only linguistic features shared by at least half of the learner populations under study are discussed in the writing sections.”

Examples of learner problems:

overuse of the phrases for instance and for example

overuse of adverbs of certainty like really, of course, absolutely

underuse of hedging adverbs – apparently, possibly, presumably

a tendency to put however at the start of a sentence rather than in the middle

use of though in sentence-final position, which is typical of NS speech but not so likely in academic writing

wrong use of on the contrary (which actually means the opposite is true) to mean simply “by contrast” or “on the other hand”

invented phrases like “as a conclusion”, where a NS writes “in conclusion”

This research produced the “get it right” boxes in the dictionary.
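Findings like those listed above come from comparing relative frequencies in a learner corpus against a reference corpus. A minimal sketch of that comparison – the two tiny “corpora” and the watch-list of items are invented for illustration; real studies use millions of words:

```python
# Compare relative frequency (per 1,000 tokens) of a watch-list of
# words/phrases in a learner corpus vs. a reference corpus, and flag
# each item as overused or underused by the learners.

learner = ("of course this is really true . "
           "for example , it is really obvious .").split()
reference = ("this is possibly true ; "
             "for example , it is apparently plausible .").split()

WATCHLIST = ["really", "of course", "possibly", "apparently", "for example"]

def per_1000(tokens, phrase):
    """Relative frequency of a word or multi-word phrase per 1,000 tokens."""
    words = phrase.split()
    n = sum(1 for i in range(len(tokens) - len(words) + 1)
            if tokens[i:i + len(words)] == words)
    return 1000 * n / len(tokens)

for phrase in WATCHLIST:
    l, r = per_1000(learner, phrase), per_1000(reference, phrase)
    label = "overuse" if l > r else "underuse" if l < r else "equal"
    print(f"{phrase:12} learner={l:6.1f} reference={r:6.1f} -> {label}")
```

With realistic corpora the same comparison surfaces exactly the patterns noted above: learners' overuse of really and of course, and underuse of hedges like possibly and apparently.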

EAP Knowledgebase

corpora – extra reading 01

This week I started an online course with Sheffield University about using corpus tools in EAP. Here are some notes on the extra reading from the first week:

Vyatkina, N., & Boulton, A. (2017). Corpora in Language Teaching and Learning. Language Learning and Technology, 21(3), 1–8. University of Hawaii.

This article uses the abbreviation DDL – “data driven learning” to categorise the field. The term covers various strands of research, but they can be grouped as follows:
A – “theoretical underpinnings” – what the corpus data can tell us about the nature of language.

B – “descriptive”. Not really explained in the article, but I imagine articles that mainly describe particular practices used in the classroom or for materials writing and that speculate about future developments of the field.

C – empirical evaluations – including learner attitudes and measuring the value of DDL for learners.

Empirical evaluations of the results from DDL as a teaching approach only start in around 2000. They note that there is relatively little DDL work done in the USA.

It uses the term “emic” (from the subject’s perspective) to describe research done by asking students to fill in questionnaires, and “etic” (from the researcher’s or an external perspective) to describe research done with pre- or post-intervention tests or other kinds of experimental control.

It notes two trends over time:

From lexico-grammatical studies towards greater interest in the characteristics of discourse.

From corpora as a learning aid towards corpora as a reference resource.

Legitimation Code Theory

Legitimation Code – what is “abstract”?

Karl Maton speaking to BALEAP in 2017 isn’t satisfied with the term “abstract”. Semantic Gravity goes from weak at the top – for things that depend less on context for their meaning (the process of photosynthesis), down through general groups of things (flowering plants without woody stems) to strong gravity at the bottom, where the meaning is highly linked to a particular context (Taraxacum officinale, the common dandelion). The other dimension on the semantic plane is density. Semantic Density is the complexity of the practice related to the knowledge. The meaning of “gold” for a chemist is denser as a knowledge practice than “gold” for an everyday person who hopes to be given a “gold watch”.

Some abstract knowledge practices are very simple. (He gives the example of management discourse = “rarefied code” to contrast with LCT = “rhizomatic code”).

There are real issues at stake here… one difference between jargon and theory is their semantic density. They’re often lumped together, and then bullshit masquerades as meaningful while theory is dismissed as unnecessary. We see this in public discourse. We see it in the dismissal of experts, and in major economic and political decisions based on totally empty rhetoric. Nothing springs to mind immediately while visiting Britain right now, but I’m sure that you can fill in an example…

De la misère symbolique

“narcissisme primordial”

The foreword to “De la misère symbolique” (I’m going to call it “Symbolic Destitution” in English, although the published English translation was called “Symbolic Misery”) contains footnotes to Deleuze, but it begins by stating that it is a continuation of an analysis in « Aimer, s’aimer, nous aimer ».

I think this is enough to identify Stiegler’s starting point as located in the “Sovereign” part of the autonomy plane. This means that the writer has high positional autonomy: he’s talking about ideas which he has already built up into his own system of thought; and high relational autonomy: he refers to “narcissisme primordial”, an idea with roots elsewhere, but we’re definitely meant to understand this as a concept that belongs to Stiegler’s way of thinking rather than, say, Greek myth or psychoanalysis.

Moodle and MoodleNet

LCT Centre Occasional Paper 1

Saturday’s topic is my own knowledge practice in relation to #Moodle and MoodleNet: I often find myself making up analogies with face-to-face teaching to explain the procedures I want teachers to adopt. I elicit responses to hypothetical questions like these: “If you went to all your lessons in a carnival mask, what would happen?” “If you had an invisible assistant who wrote on the blackboard during your lessons, would you need to communicate with them?”

In LCT (Legitimation Code Theory), these moments, when I appeal to teachers to bring their common-sense feel for what a classroom is like into discussion of online pedagogy, would be called “introjected code”. This is one of the areas in the autonomy plane. It has high relational autonomy – because the aim is still ABOUT Moodle – but low positional autonomy – because the discussion is bringing in knowledge from another sphere.

The paper by Maton and Howard is available here.

The Johnson government

Legitimating moral emotions

LCT (Legitimation Code Theory) likes to plot knowledge practices on planes, which allows analysts to track changes over time and make comparisons. I’m taking, as a first idea, that the 2019 Conservative manifesto represents an abandonment of Thatcherite rhetoric in favour of a more traditionally conservative appeal to the kind of moral emotions nicely described by Haidt in “The Righteous Mind”.

The 5 planes proposed by LCT are: Autonomy (how connected the practice is to other areas of knowledge), Specialization (the characteristics of the knowledge involved and the person who knows it), Semantics (the complexity of the things being discussed and how abstract they are), and Density and Temporality (which I’m less clear about so far). The next question is: which plane would be the place to mark the change towards a sentiment like “this country has felt trapped, like a lion in a cage”?

Legitimation Code Theory

Legitimation Code Thursdays

These posts are preparation for a project applying Legitimation Code Theory to seven knowledge practices:

Sun. – Symbolic Destitution

Mon. – Methods of Rationality

Tues. – Russian Nights

Weds. – Where the Wasteland Ends

Thurs. – LCT can and does discuss itself!

Fri. – the Johnson government (UK politics)

Sat. – Moodle and MoodleNet communities

Where the Wasteland Ends

Where the Wasteland Ends

Theodore Roszak wrote this book in 1972 and I first read it about ten years after that. It makes a contrast with “Russian Nights”, because the chemist and the poet here are on opposite sides in a battle over the fate of visionary imagination. “The beauties of science are not the beauties of art but their antithesis”, he says.

Like Yudkowsky on Monday, he warns the reader in the introduction about the first few chapters. Expect them to be bleak.