
Multiple Modalities for Two Areas of Language Learning


Dorothy Chun

UC Santa Barbara

Keynote for ALLT, NTUST, Taipei, 2021 

In this presentation, in line with the conference theme of multiple modalities, multiple literacies, multiple perspectives, I would like to share my experiences from two different CALL projects, both of which use several modalities.

 

The first project uses Computer Assisted Pronunciation Training (CAPT) to teach L2 discourse intonation and to raise L2 learners' awareness of how melody and rhythm contribute to expressing meaning. The focus is on helping learners produce intelligible, comprehensible speech (rather than accent-free or native-like speech) and thus communicate more effectively. Visualizing the critical pitch contours and stressed syllables of their own speech lets learners focus on adapting their prosody to that of native speakers. In addition, knowing how prosody functions in the L2 can help them with listening comprehension.
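To make the visualization concrete, here is a minimal sketch of how such a pitch contour could be extracted and plotted, assuming the open-source librosa and matplotlib Python libraries; the file names and the side-by-side comparison with a native-speaker model are illustrative assumptions, not the project's actual CAPT software.

```python
# A minimal sketch of pitch-contour visualization for prosody training,
# assuming the open-source librosa and matplotlib libraries. The .wav file
# names below are hypothetical.
import librosa
import matplotlib.pyplot as plt

def plot_pitch_contour(path, label, ax):
    """Estimate and plot the fundamental frequency (F0) contour of a recording."""
    y, sr = librosa.load(path, sr=None)  # load audio at its native sample rate
    # pYIN estimates F0 frame by frame; unvoiced frames come back as NaN, so
    # the plotted line breaks naturally at pauses and voiceless consonants.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    times = librosa.times_like(f0, sr=sr)
    ax.plot(times, f0, label=label)

fig, ax = plt.subplots()
plot_pitch_contour("native_model.wav", "native-speaker model", ax)
plot_pitch_contour("learner.wav", "learner attempt", ax)
ax.set(xlabel="Time (s)", ylabel="F0 (Hz)", title="Discourse intonation contours")
ax.legend()
plt.show()
```

Overlaying the learner's contour on a model contour in this way is one simple means of making rises, falls, and stressed syllables visible for comparison.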

 

The second project is a Virtual Reality (VR) game, using the Oculus Quest headset, that colleagues and I are developing to enhance children's L1 literacy but that could also be applied to L2 learning. Considering how the Quest engages four modalities, visual (seeing), auditory (hearing), tactile (touching), and kinesthetic (moving), I propose connections between these modalities and learning. In addition, I will discuss other VR apps designed more specifically for L2 learning and how recent research and practice confirm the affordances of multiple modalities, not only for directly engaging learners and providing greater contextualization but also for developing their digital literacies, which are so important in the 21st century.
