Thoughts about Orthographic Reform
Though our views have developed quite a bit in the past few years, they are still fairly represented by these early notes:
Notes from 2001: (taken from email exchanges with reformers)
Why has the English writing system resisted reform?
In the early days of writing, writing systems were sacred.
In the medieval days of writing, writing served only the elite few.
In the 16th through 19th centuries the literacy-divide served the upper classes.
In the 20th Century we accepted our orthography as a fixture in our thinking about reading (because all thoughts of changing it had such unacceptable implications).
Today, the central issue is institutional inertia: the success of written English in becoming the world’s leading language in so many global communication domains – economics, politics (if only insofar as dealing with the USA on trade or aid), science, technology, medicine, entertainment. The constituencies affected by any change to the language are the most powerful constituencies in the world. The written language works for them – any change is a nuisance and a disturbance. They collectively resist change because their interests are best served by preserving things the way they are. People who are not above a certain threshold of literacy-proficiency don’t even exist for them (practically speaking – they may exist as beneficiaries of their philanthropy).
We have taken the code for granted because we can’t imagine it could be otherwise, and we have become dull to the issue after hundreds of years of failed attempts to fix it. This is reinforced by some who believe that the ambiguity that impedes learning to read becomes helpful to the language once someone can read.
I don’t think ‘we’ have traveled very deep into the problem because of the kind of proposals that have come forth about what to do about it. I don’t think NICHD or NIH or IES has a solid research-based understanding of how much this problem is directly implicated in reading problems. (2006 note: There is mounting evidence for the “Orthographic Depth Hypothesis”.)
Because of the impracticality of reform proposals and how well the orthodoxy has insulated itself from their assaults, I don’t think they really get the dimension of the problem of the code’s ambiguity. As with any endeavor, it’s important to understand the problem without constraining our thoughts to what a solution looks like – take the problem deeply home and then, from there, get creative. I think the baby was thrown out with the bathwater. People may all generally agree there is a relationship between reading difficulty and code ambiguity, but I don’t think they have seen the dimension of the problem or made the connection that this ambiguity is responsible for causing a hundred million people to ‘hurt’ or that it is causing hundreds of billions of dollars to be lost.
I don’t think we can travel very deep into the ‘what to do about it’ until we better understand what ‘it’ is – what I call ‘code-induced (or artificial) ambiguity overwhelm’.
I remain persuaded that whether any reform proposal will ever travel beyond the ‘converts’ (orthography reformists) depends on the mainstream research community ‘getting’ the necessity of understanding the role of ambiguity in the retardation of the learning to read process. When they see how their own values are impeded by the ambiguity (and its economic and social costs), then perhaps they will become open to an iterative experimental process that can arrive at a smart balance between feared upheaval and reducing the ambiguity. We, this project, need to develop ways to pilot/drive the researchers into working the problem. I don’t believe the US Dept. of Ed. will ever accept spelling reform proposals, as the risk of proving them is too great and because they require otherwise unaffected constituencies to change the way they write.
The US educational community recently woke up to the fact that for the past few hundred years every philosopher with an educational bent has tried to persuade the educational community to incorporate their views. The results have been a mess. The castle walls have recently and substantially been reinforced at the policy level – even the press has been co-opted. If it doesn’t have rigorous scientific research backing it, the US Department of Education is encouraging the educational community to stay away from it. Look at the language coming out of Congress. They mean to stop distracting experimentation and develop an orthodoxy protected by the research community. In a way this is understandable.
Any reform to the ‘code’ will have to go through the research community. They are currently numb to talk of spelling reform. Because they don’t see a way of changing the code, they accept it. It’s ‘behind them’. Their research models ASSUME the fixity of the code.
Those of us concerned with making the learning to read process more developmentally friendly to our children (no matter which vector we may come in on) can either scream into the night with our protest against the insanity of the code, or formulate research projects that get the attention of the research community. If the projects we propose start off by requiring the use of systems of spelling or alphabet modifications, they will never get tried. The risk to the students of teaching them systems that are incompatible with the prevailing orthography is prohibitive – it threatens to retard their overall educational process.
We need to demonstrate to the research community in language and concepts relevant to them that this ambiguity problem is worth their attention. To do this we need to develop research models that can explore this without requiring children be guinea pigs to an abnormal orthography. We need to decouple understanding the dimensions of the ambiguity-effect from solutions to it that are so threatening and impractical that the idea of entertaining the problem goes down with them.
Letter to the Spelling Reformers: 9-15-2002
I appreciate very much what it means to serve, in mind and heart, a noble mission and goal. I have respect for everyone who has cared enough to work toward the goals underlying spelling reform and I appreciate very much how spelling reformers have kept a vital aspect of the ‘code’ conversation alive…
As Roosevelt should have taught us, prematurely advocating a solution for a problem that people don’t realize they have becomes suspect and easily dismissible. For many in the mainstream of the reading and language related sciences, simplified spelling reform still evokes the descendant stink it was stigmatized with in the early 1900s. I don’t want what we are doing here to be seen as a new attempt to champion an old folly (however noble I understand its origins to be). I want to engage in a dialogue with spelling reform communities to harness our collective learning, but I don’t, at this time, want to get involved in comparing the pros and cons of the various proposed solutions. I want to stay on the track of understanding the origin and nature of the code and the effect of its ambiguity on learning to read and in turn on learning in general. It is important that we avoid risking superficial dismissal by association with the baggage of spelling reform.
Though I am interested in what the spelling reform community has learned and in the historical stories they preserve, I don’t believe spelling as an issue runs deep enough to tap the energy that I believe is necessary to change the spelling (or more broadly the code). We must come to understand the unnatural challenge to our unconscious processing that processing this ambiguous code involves. Concurrently, we must understand the effects of our affects (I will describe this later) in the formation of our ‘decoding’ infrastructure. Once we reframe the interior processing contexts of learning to read, what we need to do will become glaringly obvious; until we do, it will remain hopelessly opaque and resisted.
The primary significance of spelling (to me) is that it reflects our collective negligence in caring for the health of our children’s learning. The greater our concern for the ecology and health of learning, the more glaringly apparent the cognitive and emotional stresses of learning such a complexly ambiguous code become. Spelling is one aspect of a centrally significant issue – one aspect of the overall code. However, it’s not the root of the code’s ambiguity. The ambiguities in spelling emerged from the more elemental letter-sound correspondence confusions.
I want to see the spelling change. I believe the spelling will change, and change in a way that reflects the pioneering work done by spelling reformers, but I believe spelling reform will follow, not lead, the way.
For me, the fulcrum of change lies in championing how well we learn. Once we are concerned with the health of learning, we can begin to talk about learning to read. With a concern for the interior, inside-out learning-wellness of our children, the cognitive processing encumbrance of the ambiguous code and the emotional response that attends processing it (the feelings children experience about themselves as they learn to read) become paramount concerns.
Learning to read has no precedent in our evolutionary history. For more than 99.999999999% of our life form’s existence, nothing even remotely like reading existed. We are not evolutionarily wired for reading, and reading hasn’t existed long enough for our neurophysiology to have evolutionarily adapted to it.
For the first couple of thousand years of reading by alphabet, learning to read had a sequentialness: see the letters – say the sounds – do it fast enough to emphasize and blend the sounds together (as Plato said…). Learning to read a phonetic script is analogous to the workings of a machine: load the next letter and fire it, load the next letter and fire it…
The challenge to our brain, never before experienced in the history of the brain, resulted from the erosion of letter-sound correspondences precipitated by forcing the Roman alphabet into representing the English (and other) spoken languages. Short by almost twenty letters, letters had to convey additional sounds depending on which other letters preceded or followed them (letter-sounds became context-dependent). This ambiguated the relationships between letters and sounds and opened the space for all the later spelling-level confusions to come in.
Now, no longer 1-to-1, see&say, it became necessary to ‘buffer’ and ‘disambiguate’ the stream. Instead of next, load and fire, the brain must unconsciously recognize each letter but suspend articulating it and instead hold it in ‘decoding memory’, most of the time along with some number of its preceding letters. New unconscious processing reflexes must form to extract contextual cues from ‘comprehension memory’ and then apply them to work out the ambiguity of the letters held up in ‘decoding memory’ (according to a series of memorized instructions (spelling and phonics) operating from within another ‘module’ of processing). Unlike math, where there is time to process its code, this code-work must function fast enough for its output to feed a virtually heard or spoken stream moving at the pace of conversational language and self-conscious thought. If the unconscious decoding<>disambiguation<>comprehension assembly processing takes too long, the system stutters and breaks down.
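The contrast between the old ‘load and fire’ script and the buffered, context-dependent one can be sketched as a toy simulation. This is only an illustration of the processing shape being described, not a model of real English phonics; the letter-sound mappings below are invented, and only a single ambiguity (a ‘c’ whose sound depends on the letter that follows it) is simulated:

```python
# Toy contrast: decoding a shallow (1-to-1) script vs. a 'deep'
# (context-dependent) script. Mappings are invented for illustration.

def decode_shallow(text):
    """'Load and fire': each letter maps directly to one sound."""
    return [ch.upper() for ch in text]  # the letter IS the sound

def decode_deep(text):
    """Buffer an ambiguous letter and disambiguate it from context.
    Here 'c' is ambiguous: soft /s/ before e/i/y, otherwise hard /k/."""
    sounds, buffer = [], []
    for ch in text:
        if ch == "c":
            buffer.append(ch)            # hold in 'decoding memory'
            continue
        while buffer:                    # resolve held letters using
            buffer.pop()                 # the following context
            sounds.append("S" if ch in "eiy" else "K")
        sounds.append(ch.upper())        # unambiguous: fire immediately
    sounds.extend("K" for _ in buffer)   # word-final 'c' defaults to hard
    return sounds

print(decode_shallow("kat"))  # ['K', 'A', 'T']
print(decode_deep("cat"))     # ['K', 'A', 'T']  hard c, resolved by 'a'
print(decode_deep("cit"))     # ['S', 'I', 'T']  soft c, resolved by 'i'
```

Even in this trivial sketch, the deep decoder needs a buffer, a delayed decision, and a context rule that the shallow decoder never needs; scaling that up to the many interacting ambiguities of English gives a sense of the extra unconscious machinery the passage above describes.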
Specifically, in order for the novice reader to begin to devote more attention and memory capacity to the text that is being read for strong comprehension to occur, phonological and decoding skills must be applied accurately, fluently and automatically. Laborious application of decoding and word recognition skills while reading text reduces attentional and memory resources, thus impeding reading comprehension.
G. Reid Lyon, Ph.D.
Chief, Child Development and Behavior Branch
National Institute of Child Health and Human Development
National Institutes of Health
Now comes the other major point of the series. Concurrently streaming and interacting with the cognitive processes sketched above we have affectual processes. The affectual processes drive our feelings and direct our cognitive processes into and away from becoming emotions. One of the component ‘affects’ of our affectual processing system is shame. Shame interrupts our stream of consciousness and calls attention to what ‘I am doing’. It interrupts and impedes the directional thrust of self-transparent working attention and calls self-awareness to something we are doing ‘wrong’. Shame is a learning prompt.
The physiology of shame, like the other bio-affects in our affective-cognitive-system (ACS), has been evolving for millions and millions of years. Languageless animals and infant humans display shame. The deep hard wiring of the affectual processing system evolved long before oral language, let alone written language. Shame didn’t evolve to provide us a way to self-consciously correct the kind of unnaturally ambiguous, unconscious processing involved in reading. We can’t be self-conscious participants in the decoding and disambiguating aspects of reading; they must happen unconsciously-automatically, faster than we can consciously participate in them. Thus, while shame-triggered self-consciousness may be a helpful prompt to pick up a book and try again, and is helpful at the conscious level of evaluating what we are comprehending, with respect to the unconscious processes involved in learning to read, shame (relating to decoding and disambiguation processing) disentrains and confuses the process.
Once shame starts to associate with learning to read, a dark spiral begins. When shame comes in, it disrupts the very cognitive processes involved in reading – it makes worse what it comes in to report on. The more shame, the less cognition entrains to reading; the less cognitive entrainment in reading, the worse the reading; the worse the reading, the more shame. Add to this that humans tend to develop ‘processing scripts’ to avoid doing things that elicit shame, and it becomes clear: shame is catastrophic to the unconscious processes involved in reading. If shame scripts interlink with decoding reflexes, reading will become a painfully frustrating experience that the child doesn’t want to do.
The first casualty is self-esteem: they soon grow ashamed…
National Institute of Child Health and Human Development
Learning to read represents an unnaturally ambiguous unconscious processing challenge unprecedented in the evolutionary developmental history of our brains. It requires the development of faster-than-conscious processing infrastructure in order to decode, disambiguate and assemble a virtual stream for consciousness to comprehend. These processes can’t functionally co-exist with self-conscious shame. If we want to help children learn to read, we must reduce the ambiguity overwhelm they are experiencing and we must reframe the emotional context of their learning to read so as to reduce the shame they experience (as if they are doing something wrong). I think the best way to accomplish both is for parents, teachers and the reading science community to understand the cognitive and affective consequences of the ambiguity in the code, thus this series…
From: International Reading Association – Reading Research Quarterly November/December 2005
International research correspondent, Marketa Caravolas (see our interview with Dr. Caravolas)
Marketa Caravolas reports on a growing body of research involving comparisons of literacy development in English-speaking populations as compared with populations speaking other languages. An intriguing and consistent finding of previous studies is that English-speaking children tend to acquire early literacy skills, such as word decoding and spelling, more slowly than do children speaking most other comparison languages with alphabetic writing systems (Bruck, Genesee, & Caravolas, 1997; Caravolas & Bruck, 1993; Cossu, Shankweiler, Liberman, Katz, & Tola, 1988; Durgunoglu & Oney, 1999; Wimmer & Goswami, 1994). Current theories put forth to explain why English-speaking children fare poorly in these comparisons involve the relative complexity and inconsistency, or “depth,” of English orthography.
Philip Seymour of the University of Dundee, Scotland, and his colleagues in 12 European countries (Seymour, Aro, & Erskine, 2003) conducted cross-linguistic studies of foundational levels of literacy, working within the EC COST Action A8 network. Results revealed that English-speaking children in their first two years of schooling in the United Kingdom (UK) have the poorest outcomes regarding familiar word identification and nonword reading when compared with children from other countries tested. In addition, children learning other relatively complex orthographies, such as French, Danish, and Portuguese, also tended to perform less well on foundational-level skills, although not as poorly as the English-speaking children. These findings are consistent with an orthographic depth hypothesis, which posits that children’s development of early literacy skills is strongly tied to orthographic complexity. The authors estimated that English-speaking children develop foundational literacy at roughly half the rate of children learning relatively shallow orthographies such as Finnish, Greek, and German.
In a related question, researchers sought to determine whether orthographic complexity also influences the cognitive components and processes that underlie normal literacy development on the one hand, and the profile of cognitive deficits in dyslexia on the other hand. For example, according to one hypothesis, the highly consistent letter-sound mappings in shallow orthographies require less intricate phonological awareness and decoding skills than do more complicated orthographies. Whereas speed of word naming and reading fluency have been shown to be important indicators of reading proficiency throughout the primary years (Wimmer, 1993), a recent study by researchers at the University of York, UK, and the University of Amsterdam, Netherlands (Patel, Snowling, & de Jong, in press), found that in a comparison of English (deep orthography) and Dutch (relatively shallow orthography) primary school children, phonemic awareness was a significant predictor of reading ability in both languages, whereas naming speed was not. A related study conducted through the University of Liverpool, UK, and Charles University in the Czech Republic (Caravolas, Volín, & Hulme, in press) found similar results for English and Czech primary school children. The Czech children, whose language is orthographically shallow, revealed more advanced literacy skills; however, differences in ability were more consistently predicted by phonological awareness skill than by naming speed.
The Children of the Code project is grateful for the help of Dr. Steve Bett, Dr. Edward Rondthaler, and Dr. Valerie Yule. In the early days of our project as we were learning about the history of the code we encountered their work and through them many of the stories of spelling reform.
Our conversations with the following people also explore aspects of orthographic reform:
Past Unidel Professor of Educational Studies & Professor of Computer, Information Sciences & Linguistics, University of Delaware; Author: The American Way of Spelling: The Structure and Origins of American English Orthography
Professor of English, University of Texas at Austin; Co-Author: A History of the English Language
Chair of Media Studies, University of Virginia; Author: The Alphabetic Labyrinth
John H. Fisher
Medievalist; Retired Professor Emeritus of English, University of Tennessee; Leading authority on the development of the written English language; Author: The Emergence of Standard English
Professor, Department of Languages and Cultures of Ancient Mesopotamia at the University of Leiden in Holland; Author: The Unfolding of Language: An Evolutionary Tour of Mankind’s Greatest Invention
Physician; Author: The Alphabet vs. The Goddess
Robert Logan (not yet online)
Professor of Physics, University of Toronto; Author: The Alphabet Effect
Chair, Department of English, Louisiana State University; Research: The Textual Awakening of the English Middle Classes, 1380-1520
Naomi Baron (not yet online)
Linguist; Director, TESOL, American University; Author: From Alphabet to Email: How Written English Evolved and Where It’s Heading
Walter Isaacson (not yet online)
CEO, Aspen Institute; Author: Benjamin Franklin: An American Life
John Gable (not yet online)
Past Executive Director, Theodore Roosevelt Association
Steve Bett (not yet online)
Linguist; Editor, Simplified Spelling Journal
Peter Krass (not yet online)