My Thoughts on Teaching

(BLOG #3.5 – Bonus Blog 2 by Jeremy Eckert – UIC-Barcelona student)

This will be a meandering post in which I raise many questions about teaching, and answer almost none. Enjoy!

Education is non-linear

I’m constantly struck by the fact that teaching practice is fundamentally non-linear. A teacher can put a great deal of thought into a lesson and have it go horribly, with little engagement from students, yet on another day go into a lesson much less prepared and yet oversee a stimulating, fruitful discussion among enthusiastic and inquisitive learners.

The theory of teaching can feel equally tantalizing, because it is simultaneously straightforward and enigmatic. When written simply, explanations of learning can sound circular, and even obvious:

How do children learn problem-solving skills? By solving problems.

How do children learn to interact with others? By interacting with others.

How do children learn to understand natural speech? By listening to natural speech.

We know what the learners have to do in order to learn; the challenge is getting all of them to actually do it. So much of the work of teaching happens in students’ own minds and feels largely outside the direct reach of the teacher.

 

Abilities are latent, success isn’t always flashy

For my MA program, I have to analyze videos of teachers teaching lessons. When I watch videos of real classroom interactions and study them through the lens of conversation analysis, looking at how teachers use their interactional resources to support understanding, I’m struck by how ordinary it looks to practice good teaching.

Much of what teachers do is simply what normal people do when communicating with others: pointing at a board, rephrasing, using facial expressions, pausing, or emphasizing key words. These are not skills explicitly taught in teacher training courses, just commonplace interactional norms people acquire through a lifetime of communication.

There is nothing magical here, no single secret trick that makes learners learn. The lessons themselves often look very normal, with mixed success in student responses, yet the guiding analyses written by the university highlight teachers’ successful use of interactional resources to clarify meaning and keep interaction moving forward—even when the immediately visible responses from students are anything but flashy. In some ways, this is disappointing, because part of me was hoping my MA program would introduce me to the one easy trick I needed to guarantee learning. In other ways it’s reassuring. The examples of teachers successfully using classroom interaction to support learning resemble what I do, and what my colleagues do.

 

Thinking of the classroom as a box

I used to do private tutoring in my colleague’s apartment, in a bedroom that had been converted into a classroom. (That classroom was later moved and expanded into the school which my colleague and I opened and currently operate.) I had a realization: the classroom is a box. For safety and consistency, we put children in these boxes so they don’t wander into traffic during a lesson. This sounds a bit silly written out like that, but it’s true: the classroom solves the logistical issues and the safety concerns of the adults involved in the process. But the children? They would mostly rather not be in that box. Many would rather be at home, or playing in the park.

The classroom is a box. Children are placed in that box for an hour or so, and the teacher is expected to make that box both educational and engaging, turning it into a place of structured play and even simulated reality.

But to engage in simulated foreign language interaction, a learner must have some kind of need. In traditional classrooms, the need itself is also contrived by the teacher, implied but never spoken out loud. “You need to learn this because I say so.” Or if the learner’s mother signed the child up for class without the learner’s consent: “You need to learn this because your mother says so.”

Basing class work on learners’ needs outside of class

Considering the question of ‘needs’ is one reason I’ve found myself drawn to action-based methodology. I’ve long felt that motivating students requires some level of creating a fictional ‘need’ for them to interact with English, which can be difficult to force. (I may need to be careful here. It’s easy to conflate the idea of an immediate need [an incentive to participate in an activity], with a long-term need outside of the class, like needing to learn English in order to participate in a globalized job market.)

Action-based approaches seem, at least in theory, to drop the need for simulated needs by legitimizing classroom work through external actions and real outside needs. What happens in the classroom matters to the students because it leads somewhere beyond the classroom itself.

Instead of the traditional classroom way of saying,

“You need to participate in an imaginary dialogue activity about birthday parties because the teacher said so,”

an action-based approach might say,

“Your classmate who isn’t here today will have a birthday soon. Do you want to have a party for her? Great. What do we need to prepare to make this happen? Let’s work in pairs together to plan her birthday party!” (And then around the time of that classmate’s birthday – importantly – we all have that party just as the students planned it, so that the birthday dialogue exercise has a real impact on the future actions of the learners.)

This is my understanding of what the action-based methodology means. I suppose in the above example, the birthday party plan isn’t exactly a “need” which existed before the teacher offered it as an option to the kids, so it’s still somewhat contrived by the teacher, but at least the dialogue activity leads to real future action.

I can think of one long-term need which my young learners have: they need to get into a good school later, so that means they eventually need to pass IELTS/TOEFL exams. In this sense, would a test-prep center be following action-based methodology? After all, it’s based around fulfilling a future need outside of the test-prep classroom.

 

Flattening multi-dimensional ability into a 1-D timeline

I’ve found it difficult to conceptualize a one-dimensional, linear chronological track for something as multidimensional as language competence. In both lesson planning and syllabus design, there is sometimes leftover space that needs to be filled on the timeline of planned learning. I may have clear lesson or semester aims, a plan for how most of the time will be spent achieving those aims, and a sense of how and when review should happen—yet there is still time which is unaccounted for.

I struggle to articulate guiding principles for filling that space, and it can sometimes feel arbitrary what extra content ends up there. How to know what to teach these kids, and at what point?

Social justice as relief

Some pedagogies use social justice as their guiding North Star. This makes sense in multiracial democracies with protected freedom of expression, but in an extremely homogeneous Han city composed almost purely of middle-class families, and in which speech against the authorities leads to guaranteed loss of freedom, that philosophy can feel poorly suited to teachers who don’t want to be disappeared into some political prison (or deported).

Still, I’ve begun to wonder whether social justice might take a different form here—not as political voice or direct critique of authority, but as relief for chronically over-stressed young learners facing burnout in a merciless race to college. Perhaps there could be a way to work stress relief into the curriculum for these kids, as an indirect form of social justice. 

Flexible curriculum or flexible teachers?

I’ve also noticed that many student groups work extremely well when they come into class well rested for their Sunday morning timeslot. But those exact same groups also have a Wednesday evening class at 6:40 p.m., after a whole day of school, and they are understandably extremely antsy then.

The question this raises for me is: should our curriculum take this into account (bake into the syllabus one type of lesson for weekday evenings and another for weekend mornings), or should we leave it up to each individual teacher to modify for their particular situation? Can we trust all teachers to make such modifications on their own? They can be trained to do so, but my question still stands:

When should certain issues be addressed via curriculum design and when should certain issues be addressed on a case-by-case basis by individuals? 

 

 

Some Noteworthy Nuggets About Education

Several ideas and quotations changed my way of thinking about education.

In The Aims of Education, Alfred North Whitehead famously described education as moving through three stages:

“The first stage is Romance. The second stage is Precision. The third stage is Generalization.”

This rings true for me based on my own experience. My favorite classes—and my favorite teachers—first ignited a genuine love for the subject before moving on to practical aspects.

Another influential idea comes from Life in Schools, where Peter McLaren writes that

“The classroom is a cultural arena where… ideological, discursive, and social forms collide in an unrelenting struggle for dominance.”

This helped me see classrooms not as neutral spaces, but as sites where bigger social expectations and power relations from outside of the classroom continue to play out. Which country’s English gets taught? Why must these kids learn to speak like me, while I was never pressured to learn to speak like them when I was growing up? Whose wish was it for this child to be here in class with me – was it her own wish, or her parents’ wish? What kinds of voices are rewarded in this classroom, and what kinds are tolerated or quietly corrected?

Finally, I once encountered the claim somewhere (forgot the source) that,

“Education is not an engineering problem, but an ecological one.”

Thinking of the classroom as an ecosystem does justice to the sheer number of interacting variables and moving parts: learners’ histories, motivations, moods, relationships, institutional pressures, and the limits of teacher control. In this view, teaching is less about producing results and more about shaping conditions—conditions in which learning may happen unevenly and unpredictably, over time.

That’s all for now.

SOURCES:

Council of Europe. (2020). Common European framework of reference for languages: Learning, teaching, assessment – Companion volume. Council of Europe Publishing.

Whitehead, A. N. (1929). The aims of education and other essays. Macmillan.

McLaren, P. (1989). Life in schools: An introduction to critical pedagogy in the foundations of education. Longman.


Misconceptions about How Language is Acquired

PERCEPTIONS OF PARENTS, STAFF, & EDUCATIONAL ADVERTISING

(BLOG #3 by Jeremy Eckert – UIC-Barcelona student)

The following is my attempt at listing and reflecting on some of the more noteworthy ideas about learning that I’ve encountered as a teacher/school owner which I would label as misconceptions, even if well-meaning. This piece won’t claim to reach any final answers on the matter, but it may give me a deeper understanding of where I stand on the question of how language learning works.

HOW SOME PARENTS THINK LEARNING HAPPENS: “CAN YOU JUST GIVE MY SON A LESSON WHILE YOU’RE BOTH IN THE GO-KART QUEUE?”

In the early days of starting my school in East Asia, some students’ parents requested VIP one-on-one ‘classes’ (more like playdates), telling our school, “I don’t want you to teach my son in the classroom. Just take him to the carnival for an hour and speak English to him. Take him to a western restaurant and chat with him while you eat.” This kind of unstructured exposure could potentially have benefits if it happened every day, but none of our teachers would be willing to go on so many playdates. I think those parents’ rationale was as follows:

(1) “Students in this country are already taking too many extracurricular classes in this high-pressure study environment, so let’s try to minimize classroom time and increase play time but continue learning all the same.” [I think this is a very understandable and rightfully empathetic stance.]

(2) “The learner is a vessel into which the teacher pours knowledge simply by talking at him or her, so there’s no need for lesson structure or lesson planning. Just teach my kid at the indoor trampoline park, or how about improving his listening comprehension in the wave pool?” [This is only slightly exaggerated for comedic purposes.]

Needless to say, only a few of our teachers ever gave such unstructured playdate lessons with parent and child, and this was never systematized into the curriculum, nor made into a service we encourage.

HOW SOME PARENTS THINK LEARNING IS HINDERED: “TWO SYSTEMS MUSTN’T BE TAUGHT AT ONCE!”

Another misconception relates to being overly cautious about the timing of English learning relative to certain aspects of Mandarin learning. In first grade, children who speak Mandarin natively are taught a phonetic script based on the Latin alphabet called Pinyin. Some parents refuse to do the English phonics work I give their first-grade child because they think it will interfere with the child’s learning of Pinyin, when in actuality, I feel that learning Pinyin HELPS children’s phonics abilities, based on what I’ve seen of their phonemic awareness. It’s true that some of the sounds are different, but most of the letters of Pinyin share the same sounds as the English ABCs.

HOW SOME PARENTS THINK LEARNING IS SHOWN: “TELL ME EVERYTHING YOU’VE LEARNED THIS WEEK, RIGHT NOW.”

I’ve witnessed many instances of parents demanding that their small children perform for them on the spot with no prompt or scaffolding. Some parents overestimate children’s ability to spontaneously speak English without being given a topic, a specific question, or a motivating reason to speak (or any time to think or plan). They also overestimate young children’s ability to use metalanguage and summarize everything they have learned. Other parents expect very young kids to read blocks of text with no pictures or scaffolding, even though we explicitly tell those parents that children at such a level are only expected to listen to the audio and follow along with their finger, not read it out loud without help.

HOW MARKET FORCES INCENTIVIZE PRIVATE LANGUAGE CENTERS TO PORTRAY LEARNING: LINEAR AND GUARANTEED

Over the past 8 years of operating, our school of 300 students has seen its fair share of success stories – impressive learners with noteworthy language acquisition results. We’ve also witnessed some heartwarming turnarounds: students who initially didn’t like learning English, or were seriously struggling, later became high-achievers and fell in love with English after joining our school. But then again, our school has also seen its share of less eye-catching results.

It can feel like a fool’s errand to try to fully separate our school’s contribution to a learner’s performance from the other factors involved in their language development (factors like parental involvement in English practice at home, motivation, love of English), especially since students attend our school for at most 3 hours per week. By itself, 3 hours is simply not sufficient for deep language uptake, which is why we encourage the students to do the homework and read the books we provide for them, though not all students follow this advice.

I say all this to make the point that it is very humbling to be held responsible for the learning outcomes of so many young minds. But when your primary concern is keeping the school’s lights on, and you’re trying to compete in the market, there is no room for being humble in your advertising. One must portray learning as if it’s an assured, linear process. (Come to class a lot, learn a lot.)

And if you don’t guarantee results, good luck, because every other tutoring center will. When I watch advertisements for certain language tutoring schools in my region, I can’t help but cringe. They claim to show a lesson in action but it’s obviously scripted and rehearsed, and not representative of the average students attending that brand of school. But who wants to attend a school whose advertisements show the sometimes less than picture-perfect reality of teaching/learning?

There are times when our school’s dilemma is: in what situations should we adapt ourselves to meet the expectations of consumers, and in what situations should we work to adjust those expectations themselves? We naturally do a bit of both. Sometimes the parents need us to give them a bit of education on how learning happens, and how not to push their children too hard. Other times, we have to change some of our school practices to respond to feedback from customers.

HOW SOME NOVICE TEACHERS VIEW LESSON AIMS: “NOBODY DIED, SO IT WAS A GOOD LESSON.”

One of my roles at the center is to give orientation training, as well as on-the-job skill development. I routinely observe our teachers during class. The teacher and I always have a discussion before the observation, then I conduct the observation, and then we have a discussion afterward. Observing lessons has made me confront some of the common oversights committed by new teachers, as well as some things I overlook in my own teaching.

One time, I observed one of my teachers, and then we discussed what she did well in her lesson. Then I pointed out that there was one core aim of the lesson – which she had previously set with me during our pre-class conversation – that she hadn’t accomplished during the lesson but could have achieved. “Yeah, but it ended up fine,” she replied, as if the bar for the lesson *not* going fine was someone ending up with a pencil stuck in their eye socket. By a hospital’s standards, yes, everything went fine, because nobody was hurt or killed. But that’s not the bar we can or should use for effective teaching. We need to take note of the lesson aims which we successfully achieved, and those which we didn’t.

Running this English center has taught me just how important outside accountability is for teaching. While an emergency room surgeon feels intrinsic motivation to do a good job because they don’t want to face the trauma of another patient dying on their watch, a semi-qualified, less motivated teacher might feel that they have done a good job as long as no one is violently upset or bored enough to complain. Another important thing to look for in a teacher is identity. Does the teacher identify as an educator? Or does he/she identify as a world traveler, just doing this job for now in order to fund their plane rides?

HOW SOME NOVICE TEACHERS THINK LEARNING IS SHOWN: “THE STUDENTS ARE PRODUCING THE PHRASES, THEREFORE THEY UNDERSTAND THEM.”

Another point I have needed to make to new teachers is: “You practiced the forms a lot, and I heard the students producing them accurately, but you breezed over the meaning part. Are you sure they knew what they were saying?”

New teachers sometimes underestimate how easily form can drift away from meaning. Students can produce structures fluently while remaining unsure of their communicative function. Centering meaning is not automatic; it must be protected. I try as often as possible to re-orient my teachers’ goals toward meaning-based communication, rather than form-based drilling. I have also taken steps to structurally revamp the curriculum to reward communication rather than mere accuracy.

HOW SOME TEACHERS MISINTERPRET STUDENT CONDUCT: “STUDENTS ARE ACTING UP. IT MUST BE BECAUSE THEY ARE A NAUGHTY GROUP.”

Some teachers misinterpret student behavior by assuming that disruption is a fixed trait of the group rather than a response to the learning conditions. When students “act up,” it is often less a sign of naughtiness than of boredom, fatigue, or disengagement with repetitive materials. In many cases, the students are not resisting learning itself, but the narrow range of activities which the teacher is allowing in class. This places responsibility back on the teacher to plan varied, meaningful activities while also remaining flexible and able to adapt to the classroom vibe.

These anecdotes reveal a set of shared assumptions about how language learning is believed to work – assumptions held by many people who have neither studied SLA nor taught. Across the situations above, language learning is often treated as something automatic, immediately visible, and easily controllable, with an emphasis on surface behaviors rather than the less immediately observable long-term processes through which meaning and communicative competence develop. Common assumptions include:

  • exposure alone is sufficient for acquisition
  • teachers pour language into learners’ brains
  • learning systems interfere with one another rather than reinforce awareness
  • real learning must be instantly performable and unprompted
  • accurate production of forms guarantees understanding
  • student misbehavior reflects character rather than learning conditions
  • progress in language learning is linear, cumulative, and guaranteed

Over time, running this center has pushed me away from these simplified models and toward a more cautious view of language learning as complex, non-linear, and only partially visible. More to come in my next, bonus blog post.


Output or Input?

The Debate around the Role of Speaking in Language Learning

(BLOG #2 by Jeremy Eckert, UIC-Barcelona student)

One concept I was introduced to during my MA course is Merrill Swain’s Output Hypothesis. Output here refers to language output (speaking or writing – what comes out of your mouth or your pen), contrasted with language input (listening or reading – what goes in through your ears or eyes). The Output Hypothesis claims that language learners acquire language more successfully when producing the language by speaking or writing it. I’ll briefly explain the hypothesized mechanisms.


  • First, learners notice gaps in their language abilities while trying to express themselves (for example: “That is what I want to say… but this is all that I’m able to say…”), which leads them either to acquire that missing language when they receive feedback, or to seek out the missing language themselves, to use in the future.
  • Second, while producing output, learners simultaneously hypothesize about how the language works. (“Hmm… I’m guessing that I can add this word to the beginning of my sentence…. Oh, and I forgot an important word – maybe I can substitute it with this other word and everyone will still understand me…”) This creative use of language, plus self-aware thinking (hypothesizing, testing, evaluating), helps knowledge about language turn into language acquisition.
  • Third, output leads to acquisition when learners reflect on their output afterward in some way – through conversation with others discussing the output they just produced, or by remembering what they said or wrote when looking back on it at a later time.

This contrasts with Krashen’s Input Hypothesis, which proposes that comprehensible, interesting input is the key to language acquisition, and that output is secondary – merely the result of all the language the learner has already acquired by consuming comprehensible input through reading and listening.

Basically, just as researchers of human development have long argued about whether nature or nurture primarily shapes human development, researchers of second language acquisition have been debating whether it is mostly input or output that drives language acquisition. Both debates share the same underlying structure: two explanatory forces which are clearly related get treated as if only one of them can be the true explanation.

The cynic in me can’t help but wonder if scholars who argue strongly for one position over the other are less concerned with resolving the question empirically than with carving out a recognizable niche within the field. (Great for publishing!) After all, sharply defined positions are easier to defend and brand, and easier to cite than nuanced, mixed ones. On the other hand, the scholar in me understands that the human brain is incredibly complicated and difficult to study, and researchers are allowed to have different passionate opinions about how it works. Despite decades of research, there is still no instructional approach that guarantees acquisition.

Would it be a cop-out to just say “Why not both input and output?” the same way I resolve my own Nature/Nurture debate? Can we just say that acquisition comes half from input and half from output? That’s actually pretty close to what the Interaction Hypothesis proposes. The Interaction Hypothesis posits that, during interaction, learners process both input and output, while also negotiating meaning and receiving feedback, and these processes lead to acquisition.

So there you have it – a hypothesis which claims both input and output are required for language acquisition! Now the more difficult part: translating these ideas into effective teaching methods, replicable for as many classroom environments as possible.

When we’re staying within the realm of theory, it’s perfectly fine to just say “Obviously learners need both input and output. Can everyone just stop arguing now?” But when we get into the realm of practice, it’s quite important for us to figure out who is more correct, because the effectiveness of our teaching is at stake.

Let’s see what Steve Kaufmann has to say about this input/output debate. Steve Kaufmann is not an SLA researcher, but he is a highly experienced adult language learner whose views are based on decades of immersion and self-directed study rather than formal linguistic theory.

I found his video compelling. My main takeaway was the case he made in favor of input:

“So output is important, but still… Input over output, input before output, input more than output.”

Something else I found noteworthy was how dismissive he was of Swain’s 3 mechanisms of acquisition:

“We learn from output because it enables us to identify our gaps… 2nd of all, output has a hypothesis-testing function. […] We test whether we know the grammar. It’s kind of… the same as finding our gaps, I don’t see the difference. The third thing is a metalinguistic function. […] To me, it’s all the same.”

Kaufmann seems uninterested in actually engaging with Swain’s ideas. Instead he flattens three distinct cognitive mechanisms into one vague category of ‘finding gaps’, which makes Swain’s theory look repetitive when it isn’t. Saying Swain’s mechanisms are “all the same” is like saying perception, reasoning, and reflection are identical because they all involve ‘thinking’. It’s a form of equivocating, frankly. With all due respect to Kaufmann – he is a prolific learner of languages, and his video makes valid points about input, backed by experience – he seems to have settled into a distinct view of language learning which has worked well for his own studies, and everyone else’s theories get filtered through that lens. He and many followers of Krashen’s Input Hypothesis also talk a lot about the dangers of encouraging learners to speak, because speaking raises the learner’s Affective Filter (another way of saying it raises the learner’s anxiety).

This is where my view starts to differ. While valid in many cases, the claim that “speaking raises a learner’s anxiety” shouldn’t be treated as an axiom, or some kind of universal natural law.

Personally, I think I lean towards Swain’s Output Hypothesis because of my own background as a learner of Arabic. I didn’t like seeking out comprehensible texts and audio (except Arabic music, which I still love). The type of passive, sustained input that Kaufmann prefers to learn from was (and still is) exactly what raised MY learning anxiety. I found seeking out and consuming such input to be a daunting task; it requires a lot of patience and motivation to find appropriate texts at beginner/intermediate levels, especially back in 2010 for Arabic. And even when I was able to find some resources, sitting through long stretches of language I didn’t fully understand made me impatient and unmotivated. Passive exposure demanded an attention span I didn’t feel I had, and as soon as any confusion started to accumulate, my motivation dropped.

In fact, to this day, listening to texts or watching movies is my least favorite way to learn a language. How do I instead seek out comprehensible input? In college, I did so by finding Saudi language partners at my university for conversations! In other words, producing OUTPUT (speaking with native Arabic speakers) was my way of receiving INPUT. It can be very effective, because in a conversation the interlocutor gives you input in short bursts, demanding only a short attention span, and the interlocutor can adapt to your level by simplifying their language on the spot when you don’t understand, or even sometimes attempting to translate into their limited English.

That’s why I don’t always pay deference to the supposed axiom that speaking raises anxiety more than listening or reading. For many learners, sure. But clearly not for all. In my case, extended listening raised my affective filter far more than conversation ever did. Interaction gave me input that fit my level and attention span. From that perspective, Swain’s argument doesn’t compete with input-based theories—it connects how input actually gets processed for certain learners.

So when Kaufmann says “input before output” or “input more than output,” I don’t think he’s wrong. But I also don’t think that this hierarchy is universal. Sometimes output itself is the path to input. Sometimes speaking is not the stressful part—long stretches of listening, reading, or watching are.

Update: 

Upon re-reading this post I thought of one more point I wanted to add. In other videos Kaufmann talks about the pure “joy of listening”.

I find this illuminating, and it seems to help explain his preference for input. I on the other hand have always found a deep, gratifying pleasure in reproducing the foreign sounds of new languages. I find output much more pleasurable than input. To transfer my ideas to another person just through the correct use of teeth, tongue, lips, and voice, is so satisfying, and it can be a source of pride which motivates further interaction. 

SOURCES:

Swain, M. (1985). Communicative competence: Some roles of comprehensible input and comprehensible output in its development. In S. Gass & C. Madden (Eds.), Input in second language acquisition (pp. 235–253). Newbury House.

Luo, Z. (2024). A review of Krashen’s input theory. Journal of Education, Humanities and Social Sciences, 26, 130–135. https://www.researchgate.net/publication/378700427_A_Review_of_Krashen%27s_Input_Theory

Long, M. H. (1981). Input, interaction, and second-language acquisition. Annals of the New York Academy of Sciences, 379(1), 259–278. https://doi.org/10.1111/j.1749-6632.1981.tb42014.x


Reflecting on Language and Power

(BLOG #1.5 by Jeremy Eckert, UIC-Barcelona student – this is a bonus blog post inspired by BLOG #1)

When people talk about changing English norms, a common concern is whether it is fair for certain groups of speakers to set the standards for the language. Braj Kachru addressed this issue through his well-known model of Inner Circle, Outer Circle, and Expanding Circle Englishes, arguing that Inner Circle countries have traditionally acted as norm-setters. Some scholars think it’s unjust for the “Inner Circle” countries to still be setting English norms today. Now, of course, I am not denying the initial injustice of colonization, nor the violence and coercion through which English spread in many parts of the world. However, when the question shifts from historical responsibility to the practical matter of how language norms came to exist and why they persist, it is hard to see how anywhere except the ‘Inner Circle’ would set English norms. Areas with the largest historical populations of English speakers and writers naturally gained influence by producing dictionaries and literary works that were widely read and passed down over time.

Sure, the Outer and Expanding Circles have their own native English speakers now. But language norms are shaped not only by the number of people speaking English today, but also by generations of speakers who have passed away and whose language use left a mark. Their writing and speech (along with their songs, literature, and films) helped establish patterns that continue to influence English to this day. As G. K. Chesterton once put it, tradition can be seen as a kind of “democracy of the dead,” and I think the same applies to language traditions: past speakers still indirectly have a voice in how the language looks and sounds today. (To be clear, I’m offering this as an explanation for why Inner-Circle countries still set norms today, not a claim about how it will, or how it should, be in the future, as English becomes less and less tied to the Inner Circle.)

Similar to the backlash against the Inner Circle’s monopoly on norm-setting, there is some academic backlash against the term “native speaker” itself. I believe that even though the term “native speaker” is often criticized for being hard to define clearly, it remains a useful way to talk about language ability. Using the term does not mean claiming that native speakers are morally superior. It simply recognizes that some speakers develop a strong intuitive feel for a language after acquiring it in childhood and having lifelong exposure to its everyday use. Do I feel inferior by having to admit that I’m not a native speaker of French? No, it’s simply a fact.

I’ve also noticed that even people who criticize the concept of the “native speaker” as unscientific often continue to use the term in their own speech. They may rightly question its role in teaching, but they still rely on it as a reference point. The term is simply unavoidable, not because of ideology, but because there is no clear replacement that captures the same mix of experience and historical connection to a language.

Acknowledging this does not mean that native-speaker norms must always be the main goal in language teaching. Teachers can still question which models are most useful for learners in different contexts. But keeping the term helps us talk about language proficiency.

At the same time, reflecting on the concept of the native speaker has required me to confront my own position and the privileges attached to it. Having taught in the Middle East and East Asia, I have seen firsthand the effort students must make simply to access subject knowledge through English. For many learners, English is not just the medium of instruction but an additional cognitive barrier layered on top of already demanding academic materials. Watching students strain their brains just to follow explanations has made me acutely aware of how different this experience is from my own.

I have been fortunate to study all of my subject areas in my native language, allowing me to focus fully on content without the added difficulty of decoding a foreign language. It’s easy for native speakers to overlook this, especially those who have never been required to learn complex material through a second language. The way language fades into the background for native speakers while they study a subject is a powerful advantage.

This privilege extends beyond the classroom. I am also aware that my position as a native speaker has opened up professional opportunities that would have been far more difficult to get otherwise. For example, opening an English school in my current host country would almost certainly have been more challenging had I not been recognized as a native speaker. In fact, my privilege exists on two levels: native-speaker privilege on top of passport privilege. There are many fluent and highly competent native speakers of English whose language ability is denied because of their nationality, despite their having the same proficiency as other native speakers. Recognizing this has made clear that “native speaker” status is not only about language, but also about global power structures that decide whose English is legitimate.

Discussions about decolonizing English teaching also raise questions about the links between language and power. While it is unrealistic to separate language completely from culture, we may need to do so for English, because it occupies a special position compared to other languages taught around the world. The aim of TEFL is world communication, not cultural transfer. While reflecting during my MA course, I had to own up to this fact. As my professor talked about “decolonizing TEFL” and “native speakerism”, I couldn’t help but initially balk at the idea. “I wonder if teachers of other languages share this anxiety that teaching their language might be linked to cultural domination or colonization. Most likely not!…” I thought to myself. But the fact is, only teachers of English have to worry about this right now, because nothing else even comes close to having the world status of English. I personally know many teachers of Mandarin, including some working in organizations such as the Confucius Institute, which has spread to countries around the world, and all of them say they never worry that teaching their language is a form of cultural domination. Instead, sharing language and culture is seen as something positive, worth celebrating.

English, however, carries a heavier historical burden. In some cases, such as the forced removal of Aboriginal children in Australia and their compulsory education in English (to name just one of many grim stains on the name of English teaching in history), the language was used not simply for communication, but as a tool to suppress and replace local cultures. Because of this history, decolonizing English teaching is less about rejecting English literature or its past, and more about being aware of the responsibility that comes with the language’s global power.

If English now functions as a universal means of communication, it needs to loosen its ties to cultural superiority in ways that other languages do not. This does not mean stripping English of culture entirely, but it does mean teaching it with greater sensitivity to whose cultures are centered and whose are ignored.

Sources

Kachru, B. B. (1985). Standards, codification and sociolinguistic realism: The English language in the outer circle. In R. Quirk & H. G. Widdowson (Eds.), English in the world: Teaching and learning the language and literatures (pp. 11–30). Cambridge University Press.

Kachru’s circle model. (n.d.). StudyLib. https://studylib.net/doc/27634818/kachrus-circle-model

Australian Human Rights Commission. (n.d.). Historical context: The stolen generations. https://humanrights.gov.au/bringing-them-home/significance/historical-context-the-stolen-generations.html


How English is Spoken in the Wild, and How That Affects English Teaching

(BLOG #1 by Jeremy Eckert, UIC-Barcelona student)

Did you know that the majority of English conversations taking place on Earth right now are most likely between non-native speakers? That’s right, native speakers are no longer the majority of English users (and this has been the case for a few decades already). As you probably know, English occupies a special place in global communication, serving as a world lingua franca. But while scientific research, aviation, and many news media use Standard British or American English, the unscripted exchanges happening right at this moment between, for example, Korean and German CEOs in business meetings, or between Saudi and Brazilian online gamers messaging each other, occur in a functional English known as English as a Lingua Franca.

English as a Lingua Franca (ELF) is that practical English used by non-native speakers when communicating with other non-native speakers, often without relying on the grammar norms or pronunciation of native speakers. Since “non-native speakers of English” is a very broad term encompassing the diverse majority of English speakers on Earth, ELF is highly variable in its features. Since it doesn’t consist of a distinct set of grammatical features or phonemes, linguists cannot classify it as a distinct English variety. Instead, what makes it distinct is its speakers’ flexible use of communication strategies to reach the goal of mutual intelligibility. This matters for the field of teaching and learning English because it shows that many learners aren’t following some of the norms taught in their courses, which traditionally hold native-speaker English up as the ultimate standard.

Acknowledging that many ELF speakers don’t always have use for language norms set by native speakers of Standard English can make a person wonder: do ELF users simply lack knowledge of English grammar and pronunciation, or are they choosing to communicate this way for other reasons? According to research, it’s not necessarily because they don’t know the English rules. Krashen’s distinction between ‘learned knowledge’ (knowing ABOUT English) and ‘acquired language’ (being able to USE English) helps explain why learners who know the rules might prefer to use simplified or non-standard forms during real-time interaction. When ELF speakers are put on the spot, they use language which is functional and easy to access in their minds, rather than consciously learned rules. Rather than implying that ELF speakers are ‘incorrect’, this might show learners’ adaptation to real communicative contexts. If this is true, teachers may need to rethink which aspects of English should be focused on in the classroom, and which aspects might be less useful for our students and should be de-emphasized.

To understand why these kinds of language habits arise, it’s helpful to look at how second language acquisition (SLA) has traditionally been modeled, and compare that with more recent ideas challenging those models. One traditional model of SLA labels the awkward in-between stage which language learners occupy before mastering a language as “interlanguage”. Interlanguage models tend to treat ‘learner language’ as if it were a stable mental list of rules that slowly grows to the same size as a native speaker’s mental list of rules over time. More recent, usage-based approaches see language differently. They argue that linguistic knowledge is not fixed, but constantly changing and shaped by use. For example, Nick Ellis (2008) suggests that learners do not build a separate “interlanguage grammar.” Instead, they develop flexible, probabilistic connections between what they hear and what they say, based on frequency, patterns, and experience. In other words, don’t imagine learners writing a grammar book in their brains as they learn more language. Rather, imagine learners slowly strengthening patterns of use based on what they hear and say most often.

Imagine: Speaker A successfully makes himself understood when he says “she don’t like it” instead of “she doesn’t like it”. This frictionless interaction gives him no reason to correct himself and switch “don’t” to “doesn’t”, and as more smooth interactions reinforce the habit, he continues to speak this way. To reduce his own cognitive load and maintain a fluent speaking speed for his listeners, he uses “don’t” in all cases. People understand him, so he is accomplishing his goals. From this perspective, English as a Lingua Franca is not a reduced or broken form of English, but a natural result of how language is learned and used in multilingual settings.

For this reason, I believe ELF can be taught in the classroom. One aspect of moving toward teaching ELF would be focusing more on successful communication, and less on accuracy of grammatical forms. Even before I learned about ELF in my MA course, I already felt that some of the grammar points which my school’s English textbook expected 2nd graders to produce accurately were too nuanced to focus on, at least at this stage. I simply didn’t feel right penalizing my students on tests whenever they wrote “Have you got some milk?” (They were supposed to write “Have you got any milk?”) Using ‘some’ in this context is completely understandable, and it’s a sentence that native speakers themselves might even use in some situations.

While I saw nothing wrong with exposing 2nd graders to the words “a/an/some/any”, and the situations in which you use them, I disagreed with the amount of focus on producing accurate forms that the textbook exercises and tests were putting on this distinction. Class group after class group that usually performed well on my tests consistently struggled with this particular unit. In order to get them to pass the unit test, we had to spend an undesirable amount of time practicing this grammar distinction in class, rather than focusing on meaningful self-expression and fluent communication. Compared to other grammar points which my textbook tests them on, the cognitive load of juggling “a/an/some/any” is relatively high. Consider the following three lines (inspired by the unit test in question, but not directly taken from it):

“Have you got _____ meat?” “No, I haven’t got _____.”

[answers: any; any]

“Would you like _____ oranges?” “No, I don’t want _____.”

[answers: some; any]

“Here is _____ apple. Here is _____ apple juice. Here are _____ apples.”

[answers: an; some; some]

To fill in those blanks accurately, the learner’s mind is:

        -Juggling 4 possible answers (a/an/some/any);

        -Paying close attention to whether the food item in the sentence is…

                singular/plural (an apple/some apples)

                countable/uncountable (an apple/some apple juice)

                starts with a vowel/consonant (an apple/a lemon)

        -Checking whether the whole sentence is…

                positive/negative (I’ve got some… / I haven’t got any…)

                a statement/a question (There are some… / Are there any…?)

Finally, they also need to remember exceptions to the rules they learned: 

        -Use “any” when asking questions: “Have you got any meat?” but DON’T use “any” when asking questions to make an offer: “Would you like some meat/fruit/oranges?”

        -Use “some/any” with uncountable words like meat/cake: “I’d like some chicken/some cake.” but ALSO use “a/an” with those words because sometimes uncountable words can be countable: “I’d like to buy a chicken/a cake.”

That’s a lot to consider mentally, all at one time! I would argue that it’s an unreasonable cognitive load to expect pre-A1 second graders to bear while communicating. If even strong students consistently stumble over these distinctions, it suggests that the issue is less about the learner’s ability and more about a curriculum’s over-emphasis on accuracy at the expense of communicative success—an imbalance that ELF research invites us to change.
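As a side note for readers who think in code: the decision procedure above can be sketched as a toy function. This is my own simplification for illustration (the flags and their names are mine, and it is nowhere near a complete grammar), but it makes the branching visible. Every keyword argument is one more thing a second grader is being asked to track in real time:

```python
# A toy sketch (my own simplification, not a full set of grammar rules)
# of the branching a learner performs to pick a/an/some/any.
def choose_determiner(noun, *, plural=False, countable=True,
                      vowel_sound=False, negative=False,
                      question=False, offer=False):
    """Pick a/an/some/any for one noun in one sentence type."""
    if question and offer:       # "Would you like some oranges?"
        return "some"
    if negative or question:     # "I haven't got any..." / "Have you got any...?"
        return "any"
    if plural or not countable:  # "some apples" / "some apple juice"
        return "some"
    return "an" if vowel_sound else "a"  # "an apple" / "a lemon"

print(choose_determiner("meat", countable=False, question=True))             # any
print(choose_determiner("oranges", plural=True, question=True, offer=True))  # some
print(choose_determiner("apple", vowel_sound=True))                          # an
```

Six inputs, four possible outputs, and an exception (offers) that overrides one of the main rules: that is the hidden branching behind each blank on the test.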

One reason we should move toward teaching ELF is that it can expand access to English. Teaching English with a stronger focus on intelligibility rather than strict accuracy can allow more learners to participate meaningfully in English communication, instead of feeling that English is only for those who can master every small rule.

Another reason is fairness. No matter what field/industry our learners eventually work in, they probably won’t need to sound like native speakers, but they will very likely need to speak some English, and they will need to be understood. When we insist that all learners aim for native-like grammar from the very beginning, we raise the barrier to success unnecessarily. Learners who can already communicate clearly may still feel like they are “bad at English” simply because they fail accuracy-based tests. Teaching ELF means redefining success as being able to communicate ideas, solve problems, and interact with others, rather than producing perfect grammar under test conditions.

Teaching ELF does not mean abandoning structure or grammar altogether. Instead, it means being more selective about what we emphasize. One practical step is to carefully evaluate which grammar points are most meaningful for communication at a given level. When I was researching for this blog post, I found out about the linguistic concept of “salience”. Salience basically refers to how much a language feature “jumps out” to the learner. Structures that strongly affect meaning—such as word order, basic tense contrasts, or clear question forms—deserve attention. Other structures, like subtle article distinctions that rarely cause misunderstanding, can be introduced gently but tested less strictly, or postponed until learners are developmentally ready. (One example of a grammatical feature that is not very salient to my learners at ALL is the 3rd person “s” added to the end of verbs like “walk/walks”; “do/does”; “watch/watches”. From when 3rd person “s” is introduced to them in grade 1, allllllll the way to grade 6, my students never produce this structure on their own during unplanned conversations — NEVER. This indicates that this structure doesn’t stand out to them when they hear it, and the learners are able to get their message across perfectly clearly without using it.) I would also not pressure learners to produce idioms (which are mainly used by native speakers), though it will still be necessary for advanced students to understand the most common ones.

Another key part of teaching ELF is focusing more on communication strategies. These include skills like paraphrasing when you don’t know a word, asking for clarification, checking understanding, repeating or rephrasing key points, and using gestures or examples to support meaning. These strategies can be taught through simple classroom activities: role plays where students have to explain something using limited language, information-gap tasks that require clarification questions, or group tasks where success depends on being understood rather than being grammatically perfect. I can model these strategies myself and praise students when they use them successfully.

Assessment also needs to change if ELF is taken seriously. Instead of mainly testing whether learners choose the correct article or verb ending, teachers can assess whether students can complete communicative tasks: ordering food, explaining a problem, giving instructions, or sharing opinions. Errors that do not block understanding can be noted but not heavily penalized. This helps learners see English as a tool for communication, not a constant test they are failing.

REALITY CHECK: What about IELTS/TOEFL exams? If I teach ELF to my students, will they be prepared for these exams which gatekeep the world’s best universities? Maybe it won’t prepare them specifically for passing these tests. But will it prepare them for communicating in English? Absolutely. Can that serve them by giving them a foundation of confidence and communicative effectiveness which can later be polished for accuracy? I think so.

In a world where most English users are non-native speakers, teaching English only through native-speaker norms no longer makes sense. Teaching English as a Lingua Franca offers a way to increase access, lower cognitive load, and prepare learners for real communication with other non-native speakers. By focusing on intelligibility, meaningful structures, and communication strategies, teachers can help learners use English with confidence, even if their language is not always “perfect.”

SOURCES:

British Council. (2013, August 27). British Council report explains The English Effect. https://www.britishcouncil.org/contact/press/english-effect

Britain, D. (2018). Using English as a lingua franca in education in Europe. https://www.asau.ru/files/pdf/1882962.pdf

Krashen, S. (1988). Teaching grammar: Why bother? http://www.sdkrashen.com/content/articles/teaching_grammar_why_bother.pdf

Ellis, N. C. (2008). Usage-based and form-focused language acquisition: The associative learning of constructions, learned attention, and the limited L2 endstate. In P. Robinson & N. C. Ellis (Eds.), Handbook of cognitive linguistics and second language acquisition (pp. 372–405). Wiley-Blackwell. https://onlinelibrary.wiley.com/doi/epdf/10.1002/9780470756492.ch4

Wikipedia contributors. (n.d.). Salience (language). In Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/wiki/Salience_(language)

University of Vienna. (n.d.). English as a Lingua Franca (ELF): Research areas. https://anglistik.univie.ac.at/staff/teams-and-research-areas/elf/


I’m Back for Now

This is a short buffer post to separate my old personal posts from my next four posts which will be assignments for my master’s degree in Second Language Acquisition and TEFL.

A lot has happened between my last post in 2018 and now, and there’s really no use in trying to update you. (The two or three people reading this will already be up to speed on the happenings of my life since 2018.)

More posts coming soon!


Thoughts on Democracy

As I mentioned previously, I so take for granted the fact that Ch. leadership is authoritarian that it is easy for me to forget that they call themselves a republic. It’s easy to forget, considering that both in my own conversations and in the media, I have heard Ch. people point to the current US president as a smear on democracy’s name. (As someone who despises DJT I will be the first one to acknowledge that the particular democracy that promoted such a debased individual to its highest office has indeed been corrupted in numerous ways.) Such discussions make me ponder the limits or downfalls of US representative democracy.

I assumed that so much bashing of American democracy entailed that the Ch. government could shamelessly praise the benefits of benevolent dictatorship and turn democracy into a dirty word. To my surprise, however, democracy (民主 mín zhǔ) can be seen listed as the second most important of Ch.’s “Socialist Core Values” on nationalistic ads like the one below.

“Socialist Core Values: Prosperity, Democracy, Civilization, Harmony, Freedom, Equality, Justice, Rule of Law, Patriotism, Dedication, Integrity, Friendship”

I suppose I had underestimated the cross-cultural power of paying lip service to comforting buzzwords like democracy, even in Ch.

Placing the obviously Orwellian redefinition of the word “democracy” aside, it is necessary to point out to any patriotic American readers that life isn’t pain and suffering for middle class Ch. citizens. While they may not live in a democracy, economically their lives have significantly improved over the last few decades and there is at least a perception that the government played a positive role in that. And there is good reason for them to feel hopeful about aspects of future development considering Ch. is projected to replace the US as the world’s top economy before 2030, and has spent more on infrastructure than the US and Europe combined.

I am unwavering in my deep support for legally protected human expression and political freedom. (I would be an utter fool to argue against free speech on a personal blog.) But apologists for Ch. autocracy bring up a valid point when they argue that being able to freely call the president an asshole in the US is small consolation when your city’s infrastructure is crumbling, you can’t afford basic medical care, and your elected representatives only serve the interests of concentrated wealth anyway.

Critics of Western-style democracy love to point to Singapore’s late autocratic leader Lee Kuan Yew as a foil to the gridlocked and inefficient legislative chambers across Europe and the Americas. As Singapore’s first prime minister, he held a monopoly on power and jailed anyone who opposed him. Even after his death, a 16-year-old Singaporean boy was arrested for criticizing him online. On the other hand, he ruled during the decades in which the tiny country saw its most explosive economic growth and improvements to its standard of living. To quote an article from The Atlantic:

For Lee Kuan Yew, “the ultimate test of the value of a political system is whether it helps that society establish conditions that improve the standard of living for the majority of its people.” As one of his fellow Singaporeans, Calvin Cheng, wrote this past week in The Independent, “Freedom is being able to walk on the streets unmolested in the wee hours in the morning, to be able to leave one’s door open and not fear that one would be burgled. Freedom is the woman who can ride buses and trains alone; freedom is not having to avoid certain subway stations after night falls.”

It is enough to make you question whether a constitution like America’s, which aims to guarantee political rights, is sufficient for protecting citizens’ ability to fulfill their basic daily needs. Is it always the case that open debates, due process, self-rule, etc. lead to better housing, healthcare, roads, and education? I don’t think we necessarily have to choose between the two. South Korea, Japan, and Taiwan are all development success stories in East Asia that haven’t relied on anything near Singapore’s or Ch.’s level of political oppression to flourish as economies.

It would be a cop-out to simply say, “There are pros and cons to every system of government” and leave it at that, but in a meandering and lightly sourced blog post such as this, I won’t be able to definitively answer whether being democratic leads to the highest standard of living in all cases. In fact, it can be argued that the reverse happened for America: as the postwar standard of living improved and the economy saw growth rates never before seen by any nation, the government began to deliver (relatively) more on its democratic promise of representing all US citizens. After all, it is easier to compromise with the demands of your citizens when waterfalls of abundance are crashing over everyone’s head. Perhaps the USA is only a fair-weather democracy, falling back into reinforcing racial hierarchy when economic growth is less exponential. American historian Edmund Morgan argues in his book American Slavery, American Freedom that American democracy was able to exist because of, not in spite of, chattel slavery. Perhaps the economic surplus produced by $3.5 billion worth of unpaid slave labor (equal to roughly $75 billion in current US dollars) was what freed white America to come together in a false brotherhood of whiteness and play their democracy game in the first place.

The zero-sum dynamic of white Americans’ gain coming at the cost of black Americans’ loss carries through to the education system as well. When only white males were allowed to attend college, a university education was extremely cheap; in fact, many state colleges were tuition-free. Not so coincidentally, in the 1960s and 70s, when more women and minorities started entering university campuses and organizing liberation movements that some governors couldn’t stand, state governments cut funding for universities, one of the catalysts for the meteoric rise in college tuition which continues to this day. Considering this and many other historic disparities in resource allocation between races, one can’t blame writer Ta-Nehisi Coates for concluding that, with DJT’s election, white America has demonstrated that it would rather burn its institutions to the ground than let all citizens in America benefit from them. It is impossible to defend the American political system as the best on Earth when such a racial hierarchy has always been an essential part of its nature.

Still, I intuitively choose democracy over any supposed benevolent dictatorship because in a properly functioning democracy, the people have a system they can use to pursue their community’s best interests. People know when their own community’s roads and schools are neglected and can act to rectify those problems. In that sense, a properly functioning democracy can absolutely lead to a higher standard of living and economic growth.

However, the key phrase there is “properly functioning”. The problem with this way of thinking is that every other system of government sounds equally plausible when we only regard it under ideal conditions. One could just as easily say that in a properly functioning benevolent dictatorship, the omnipotent leader focuses all his energy on improving the lives of his subjects. In a properly functioning communist society, all the citizens have their material needs met and have no ambition to do better than their neighbor. In a properly functioning representative democracy, policies that the population widely supports, like the legalization of marijuana, would be enacted.

A few years ago, I discussed politics with a Saudi acquaintance and he believed that Saudi Arabia doesn’t implement the true Islamic system of government. “In fact,” he said, “No Muslim country in this world correctly practices Islamic government.” In other words, in a properly functioning Islamic caliphate, everything would be ideal. This reminded me of how no communist society has ever reached a state of pure communism. A vanguard party has seized control in many nations, but they never managed to get to stage three of the plan where the vanguard party (along with the entire government) is dissolved, money is no longer needed and everyone is equally prosperous. To many of us in the US, the failed experiments of the Soviet Union and Maoism teach us that their goals were not attainable–at least not through vanguardist means. I asked my Saudi acquaintance, after making the above point about communism, “If everyone is failing to make your ideal Islamic government a reality, at what point do you have to throw out the ideal?” He responded that the ideal doesn’t have to be thrown out, just updated.

I look at democracy with a similar attitude. If the ideal isn’t being realized in the US, then let’s update it. A quick look around the world reveals that “benevolent” dictatorships in no way hold a monopoly on economic growth and high standards of living. (And if their growth is higher than that of social democracies, it’s often because they are industrializing much later.) Canada, Switzerland, and, as always, the Scandinavian countries have the highest standards of living, and their democracies are healthier than that of the US, in part because their systems are younger and thus more recently updated versions of the democratic ideal. Scandinavia, with its relatively homogeneous population, has stronger social welfare policies and labor movements than the US. Additionally, it has democracy in the workplace, so economic rights are prioritized along with political rights. This leads to what political theorist Benjamin Barber calls “strong democracy.” Economist Gar Alperovitz elaborates on strong democracy in his book What Then Must We Do:

I’m talking about genuine democracy, not just voting. Real participation… The kind where people not only react to choices handed down from on high, yea or nay, but actively engage, innovate, create options–and also decide among them.

Ch. might overtake the US as top world power, but let’s not allow that possibility to make us cynical or fatalistic about the possibility of fully actualizing the democratic principles we were taught about in school.


I Won’t Feed the Newsfeed

Last fall I deactivated my Facebook account and I haven’t been back since. I’m still not exactly sure what changed in me, but I’m glad it changed. It has been a long time coming. Throughout the months leading up to my account’s final deactivation, I had temporarily deactivated and reactivated it several times. I’m officially done with all social media timelines for the foreseeable future. I’m only on Gmail and Messenger. Messenger is useful because it’s linked to my Facebook account and allows me to message all of my Facebook friends even though my Facebook account isn’t active. Since I live in Ch., I also need to have a WeChat account, and I do use that for texting friends, but I never use it to post “moments” (the WeChat equivalent of status updates) on the WeChat timeline, and I don’t even look at what other people post on it.

For years, especially the years after I finished college, I spent a considerable amount of my daily life on Facebook, reading and sharing news articles, commenting, and mindlessly scrolling the newsfeed. I would log in just to check what one friend was doing, then suddenly slip and scroll into a bottomless pit. These days I still keep up with the news; I’ve just lost the need to share it with hundreds of friends.

Where does that need to share come from? I constantly shared news articles because they were about issues that concerned me and I genuinely wanted more people to talk about them. In fact, I viewed my own role as a minor zeitgeist influencer. That’s not a grandiose delusion–we all contribute to our own social network’s marketplace of ideas, but not all of us take that role seriously. When you view your posts on Facebook with that kind of potential significance it can turn into a burden. Reading this, you may think it’s no wonder that I eventually got overwhelmed, considering the sheer number of issues to keep track of and bring to the public’s attention, but that wasn’t the only reason I left Facebook. When I think about it now, I boil it down to three main reasons.

1) Time wasting

This one is pretty self-explanatory so I don’t have much to elaborate on here. I will say that, as unnecessary as a lot of my activity on there was, I did read a LOT of informative articles, absorbing a great deal, and as futile as my debates with friends and strangers may have been, I feel like it developed a voice for me and sharpened my rhetorical tools.

2) Embarrassment

The public nature of every status, every comment, every like and angry reaction left me trying to examine all my activities through as many eyes as possible. How could I be misinterpreted here? I can’t like this because it’ll probably show up on this person’s newsfeed. I should edit that status. I should delete that comment before anyone replies. I should hide this status from that person. And so on. I’m already neurotic as it is. Having an extension of myself online that anyone could scrutinize and misunderstand at any hour of the day left me constantly feeling a subtle, creeping dread after almost every status I posted.

3) Narcissistic tendencies

The problem I feel most embarrassed about is the way Facebook infected my own way of thinking. More often than I care to admit I found myself having what I call Facebook fantasies. I would daydream about having some enviable experience (usually traveling somewhere exotic) and I imagined that experience through the lens of what it would look like to my friends on Facebook. Wouldn’t it be cool to travel to some place few American tourists dare to go? Maybe I’ll briefly befriend some armed group in a desert and they can point their guns at the camera and it’ll look all gritty and stylish, like a Vice Documentary. “Where ARE you, Jeremy?? Is that safe????” “How are you still alive??” “Great shot!” would just be the beginning of my deluge of comments and likes. Then, NOBODY could deny that I’m definitely an interesting person worthy of attention. Whenever I realized I was having another Facebook fantasy like this I would immediately cringe and curse my own brain for being so ridiculous. Then after that I would curse Facebook.

It’s been months since I noticed having my last Facebook fantasy. Five months clean. I’m not sure at this point if I will ever go back. I already talk to my Facebook friends through Messenger and get world news updates through email. The irony isn’t lost on me that most of my three or four readers here on this blog only see my updates because a family member shares them on Facebook, so I can’t fully hate Facebook, but sharing my blog posts isn’t enough reason to reactivate. The only reason I can imagine returning would be to post photos again. But even then, they’d have to be some extraordinary shots to compel me to walk back into the dark alleys of social media and contact my online like-dealers again. Maybe if I go to Somalia and spend a day or two taking selfies with some Kalashnikov-toting pirates I’ll have to share those photos. That would be such a cool aesthetic for my online persona.


Sweet and Sour Child of Mine

I’m now entering the fifth month of my Ch. sojourn, attempting to adopt a second foreign culture. Years ago, after reading several fictional and non-fictional accounts of the American invasion of Iraq and being fascinated by the beauty of Arabic calligraphy for years, I adopted Arab language and culture as my interest. I wanted to learn a language that was very distant from European languages like English and French, and my goal was to be able to speak to the most people possible, so it didn’t hurt that Arabic is the fourth or sixth most spoken language on Earth depending on whose list you consult and how you count speakers. (After telling my Nigerian Arabic language partner at IU that I learned languages based on how many people I would be able to speak to with it, he joked, “My friend, if everyone went by that rule then nobody would ever learn my native language [Hausa]!”)

No matter how much we claim to be citizens of the world, or that we find all languages interesting, language nerds always have a favorite. It also holds that some languages or sub-cultures just don’t click. During and after middle school, a pivotal period for shaping my taste and tolerance for exotic sounds, I was mesmerized by Bollywood music, Raï, and traditional Japanese Biwa music, among other genres from around the world that may be too foreign for most American ears. On the other hand, I never got into K-Pop, anime, or the actual movies attached to the Bollywood songs I loved. I missed the boat on those art forms and often found them corny and bewildering. I felt much more at home with the idea of being a modern-day orientalist, quill in hand, devouring Arabic texts by candlelight at the desk of my study.

Now I embark on a new lengthy and expensive adoption process. It’s daunting to start from zero all over again, even exhausting. I’ve made the decision that this is the last time a new language gets V.I.P. treatment from my brain (other V.I.P.s include French, Spanish, Arabic and Hindi), by which I mean that any other new language I decide to learn will be for fun, at most only dabbling in it without the pressure of setting “total fluency” as my goal. I can feel myself reaching the limits of my patience already. During a lesson in McDonald’s the other day in which my Ch. teacher taught me about using measure words in the context of ordering certain quantities of food (“I want two cups of coffee” “three boxes of cookies” etc.) I found myself thinking “I’m learning how to say this again?” It was only a momentary twitch of mental resistance, but one that I haven’t experienced before. I took it as a sign that I’m ready to stick with languages I’ve started learning so far, including Mandarin, and only improve on those.

A few times over here I’ve gotten the feeling that I’ve abandoned my Middle Eastern child. I get nostalgic for the soul that I find in Arabic shaabi music, a soul I have yet to find an equivalent of in Ch. music. Soul is a completely subjective term, and it wouldn’t be fair for me to say Ch. music doesn’t have it. But so far I’ve mostly been exposed to a lot of the Ch. pop that plays in taxis and stores, which frankly doesn’t feel like it has its own unique flavor. A lot of it sounds heavily influenced by J-Pop and K-Pop, which are in turn just an East Asian twist on American pop genres. Of course Ch. music is incredibly diverse, so it’s impossible to make any one statement that applies to every genre. There is a very meditative and cerebral appeal to traditional Ch. instrumental music, especially when you know that a piece was composed by a monk more than a thousand years ago. That is a different and very valuable expression of the human soul. But at the moment I miss a different kind of soul: fiery vocals, the fluttering, soaring voices of Egyptian singers like Hakim that reverberate with every emotion along the human spectrum. This nostalgia for Arab culture has led me to spend the last month listening to Warda, the legendary Algerian singer, and several Egyptian shaabi musicians.

I don’t know if I’ll ever truly master any of the languages I’m trying to adopt, or even if I managed to pull off the usage of my adoption metaphor in this post. (I could have just as easily used a dating metaphor for languages, for example, “I feel like I’m cheating on Arabic with Ch.”) All I know is that “adoption” is the word that felt right for what I’m doing. I’m not adopting these cultures in that I’m dressing like them or believing their dominant ideologies, I’m adopting them in that I’m deciding that I now identify as a student of their culture. I thought that I had picked my pet foreign culture for good and that culture was Arab Islamic. Now here I am in Ch. and I’m still adjusting to life after love of Arabic.


Good vs. Evil: Are we breaking even?

(spoiler alert: I don’t know and there’s no way to know—but what if there were?)

“The created universe carries the yin at its back and the yang in front; through the union of the pervading principles it reaches harmony.” –part 39 of the Tao Te Ching

Let’s assume, just for the sake of argument, that there really is such a thing as a quantifiable Bad and Good that exists outside of personal opinion. If that’s the case, is there balance between them in the universe? And if so, will there always be?

People often say they want balance in their lives, but I’ve always found that idea a little strange. Why would we aim for equal positives and negatives when, at least in theory, we could aim for having only positives instead? The question of balance brings to mind an idea from physics known as the Zero-Energy Universe hypothesis. This hypothesis suggests that the total positive energy of matter in the universe is exactly canceled out by the negative energy of gravity. If it’s correct, then the sum total of all energy in existence might be—and might always have been—zero.

If that’s true, it helps answer one of the biggest questions surrounding the Big Bang. How could so much energy come from “nothing” without violating the law of conservation of energy, which states that energy can neither be created nor destroyed? The Zero-Energy Universe hypothesis offers a neat solution: there was no net increase in energy at all. Matter and gravity have simply been canceling each other out since the very beginning. The universe may have exploded into existence, but its energy books were balanced from the start.
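Stated schematically, the bookkeeping behind the hypothesis looks like this (the labels are just illustrative shorthand, not a rigorous general-relativistic accounting):

```latex
% Zero-Energy Universe hypothesis, schematic form:
% the positive energy of matter and radiation is offset
% by the negative gravitational potential energy.
E_{\text{total}}
  \;=\; \underbrace{E_{\text{matter}}}_{>\,0}
  \;+\; \underbrace{E_{\text{gravity}}}_{<\,0}
  \;\approx\; 0
```

If the two terms really do sum to zero, then the Big Bang required no net creation of energy at all, which is exactly why the hypothesis sidesteps the conservation problem.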

“He thought the world’s heart beat at some terrible cost and that the world’s pain and its beauty moved in a relationship of diverging equity and that in this headlong deficit the blood of multitudes might ultimately be exacted for the vision of a single flower.” –Cormac McCarthy, Blood Meridian

If the physical universe really does balance itself in this way, it’s tempting to ask whether the same might be true of the moral universe. Could it be that the total amount of evil and benevolence also cancel each other out over time? The idea of cosmic symmetry has an intuitive appeal, and many belief systems throughout history have leaned into it. Various ancient religious traditions imagined the universe as a battleground between opposing forces of equal power, locked in an endless struggle between light and darkness. In those systems, evil is not a flaw or a mistake—it is a fundamental ingredient of reality.

This stands in contrast to another influential tradition, which argues that only good truly exists and that evil is merely the absence of goodness, like darkness is the absence of light. In this view, evil has no independent substance of its own. It is a lack, a void, something that could theoretically be eliminated entirely. Still other traditions take a different route altogether, viewing opposing forces not as enemies but as complementary halves of a larger whole. From that perspective, the universe cannot exist fully without both.

So who’s right? Is evil a rival force, a necessary counterpart, or just an absence waiting to be filled?

“Human history is not the battle of good struggling to overcome evil. It is a battle fought by a great evil struggling to crush a kernel of human kindness.” –Vasily Grossman, Life and Fate

A quick look at the world as it exists today can make it feel as though the Bad clearly outweighs the Good. But maybe that impression is misleading. We are constantly exposed to reports of violence, corruption, and disaster, while acts of kindness often pass unnoticed. No news outlet keeps a running tally of how many people help strangers, forgive each other, or quietly do the right thing on any given day. Even a perfectly even split between good and bad might feel imbalanced to creatures who are wired to notice threats more than stability.

Still, I don’t like the idea that balance is the best we can hope for. I don’t like the implication that everything good must come at an equal cost, or that nothing can ever be purely beneficial. And yet, in practice, almost every action we take seems to involve some kind of trade-off. The global financial crisis offered a clear example of this. Many people convinced themselves that enormous profits could be generated without risk, only to discover—too late—that the risk had simply been hidden, deferred, or misunderstood. Just because we can’t see what we’re paying for something doesn’t mean it’s free.

Does this mean the world is fundamentally a zero-sum game, where one person’s gain must always be another’s loss? Is it impossible for everyone to win at once?

Nothing is perfect. –English saying

Al-kamaal l-illah (“[Only] God is perfect.”) – Arabic saying

There’s a common saying in English that nothing is perfect. In other languages, a similar observation is made, but with an added turn toward the divine: perfection is acknowledged as real, just not attainable by humans. I find that contrast interesting. It suggests that while we recognize imperfection everywhere we look, we also can’t quite let go of the idea that perfection exists somewhere, even if only in theory. The very fact that we can imagine perfection at all seems to imply that we haven’t fully accepted “breaking even” as the ultimate ceiling for the moral universe.

“The arc of the moral universe is long but it bends toward justice.”
–Theodore Parker (later quoted by Dr. Martin Luther King, Jr.)

Despite our skepticism toward idealism, and despite our tendency to mock people who dream too big, there’s a deep and persistent human longing for moral progress. Religions often express this longing through visions of enlightenment, salvation, or paradise. Even in secular contexts, the same impulse appears. Some futurists imagine a coming age in which technology will solve humanity’s moral and material problems, transforming not only society but the universe itself into something closer to perfection. These visions differ in their details, but they all share the same underlying belief: that the future can be better than the present in some profound, almost transcendent way.

“Government thinks things done by accident can only be remedied by accident. In actuality, things done on purpose can only be remedied on purpose.” –Richard Rothstein, research associate at the Economic Policy Institute

The danger comes when we begin to believe that this progress is inevitable. If moral improvement is guaranteed, then individual effort starts to feel unnecessary. If balance is the default state of the universe, then someone else will always pick up the slack. In this way, optimism can quietly slide into passivity.

It’s easy to interpret famous claims about moral progress as meaning that justice will eventually prevail on its own, without human intervention. But history suggests otherwise. Many of the most enduring injustices in society were not accidents. They were the result of deliberate policies and intentional decisions. And if harm was done on purpose, it won’t undo itself by accident. Ignoring injustice doesn’t cause it to fade; it allows it to harden.

I tend to interpret hopeful statements about moral progress not as predictions, but as challenges. The moral universe doesn’t bend itself toward justice. It has to be bent—slowly, imperfectly, and with sustained effort. In practical terms, that means actively repairing damage rather than assuming time will take care of it. It means acknowledging that balance is not the same thing as fairness, and that stability is not the same thing as peace.

The idea that balance is the natural state of the moral universe can be comforting, but it can also be dangerous. It encourages the belief that inaction leads to harmony, that peace emerges automatically, and that the present moment represents the best we can do. I don’t know whether humanity is morally breaking even. No one does. But I do know that assuming balance will eventually assert itself without our deliberate moral effort is a way of quietly surrendering the future.
