When I first learned that we had to create a Digital Story project, the source of the stress building up at the back of my neck was the technology. I knew I was going to use iMovie, but at that point I still didn't know how to use it properly. I was bracing myself for those painful moments synonymous with my technical writing days, when I had to tweak HTML and Java code to ensure the online help I was creating displayed properly on the user's interface. Those were long and laborious days.
Funnily enough, when I embarked on the project, I discovered that it wasn't the technology I had to worry about. The pain came from the story. In last week's sharing session, I empathized with many of my course mates. Most of us had spent an extensive amount of time deciding on the topic, approach, concept, design and so on. Fortunately, some of us eventually settled on topics which had some sort of impact on us, or for which there would be some use once the projects were completed. Then came the next rather difficult part - writing the reflection. Having to critically reflect on the 'why' can be quite challenging, especially in cases where 'it's just like that'.
I wonder then if my students will be concerned about the same matters should they embark on a digital story project in class? Will they be carried away by the myriad of tools or will I be giving them an opportunity to critically reflect on the meaning they'll be trying to create and communicate? It is indeed very challenging. Nonetheless, my digital story project experience has surely allowed me to learn about the joys and pains of creating a digital story. And I hope to be more effective when I work with my students on this.
Tuesday, March 31, 2009
Thursday, March 26, 2009
2009 Macmillan Interactive Webinars
Macmillan invites us to join them as they discuss the one issue we all have in common as educators - the teaching of language, and how we can do it better.
The Macmillan Webinars are a series of live video talks from some of the biggest names in ELT. You can watch these directly in your web browser, and all webinars are free to view. Have a question? You'll be able to put questions directly to the presenter during the session.
There's a webinar on "New technology - new pedagogies" on the 8th of April.
Synopsis
Advances in new technology have changed forever both the teaching and learning of languages. Nevertheless, the use of technology in ELT remains a hugely controversial area and disagreement exists at every level. This webinar will critically analyse some of the key new technologies, such as interactive whiteboards, podcasts and wikis. It will then provide an overview of some of the emerging ‘new pedagogies’ and describe how they affect a number of key areas: grammar, lexis, the four language skills, phonology and learner training.
Wednesday, March 18, 2009
A Glimpse into the Future
We may be using this technology to find information and learn things in the near future...
If you're new to TED, it's a great place to learn about new technologies, engineering, design, and social strategies that address the challenges in the world today. For example, a couple of years ago, Jeff Han wowed the world with the first multi-touch interactive screen. Now that technology has found its way into products like Microsoft Surface, the iPhone and the iPod Touch. Basically, we can get a glimpse of the future from TED talks.
Enjoy!
Multimodal Learning
I enjoyed Jacqueline's presentation last evening. I learned new things from her, especially those examples! I didn't know about the website where book fans could write a chapter or add a character to their favourite series or author's book. Nor was I aware of the children's interactive story book on BBC's learning website. I think it's a great way to learn how to read, that is, if the child learns to map the words to the sounds. While watching Jac demonstrate the website, I was reminded of the movie The Reader, where the protagonist learned to read by 'mapping' the audio-recorded version of the story to the words on the page. To some extent, I can't help but acknowledge how technology has evolved from cassette tapes to online interaction and soon, perhaps, tactile interaction.
As for my digital story project, I read my script in chunks and timed myself. Oh my! How much of what I had written had to be chopped off! I learned that a 36-syllable, 23-word line took me 12.5 seconds to record. To me, that's not a lot of information I could give to my audience. The reality is, I had to decide on the pertinent points and forgo the rest. In a way, I was also forced to re-think how I should capitalize on the other modes to deliver my script without the narration.
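Out of curiosity, here's a rough back-of-the-envelope sketch of that pacing, assuming my one timed line (23 words in 12.5 seconds) is typical of my reading speed - the function names are just my own labels:

```python
# Rough narration-pacing arithmetic from one timed line:
# 23 words, 36 syllables, 12.5 seconds (my own test read).

def words_per_minute(words, seconds):
    """Speaking rate in words per minute."""
    return words / seconds * 60

def words_that_fit(duration_seconds, wpm):
    """Roughly how many script words fit in a narration of this length."""
    return int(duration_seconds / 60 * wpm)

rate = words_per_minute(23, 12.5)
print(round(rate, 1))            # → 110.4 words per minute
print(words_that_fit(120, rate)) # → 220 words in a two-minute narration
```

So at my pace, even a two-minute story only leaves room for a couple of hundred words of script - which explains why so much had to go.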
Afterwards, I tried using Audacity to record my script. It was easy to use and I appreciated the editing tools in the software. I could remove silence or unwanted parts easily. There were no issues opening the sound recording files in iMovie and dragging the tracks to the respective clips. Then, it was a matter of extending or reducing each clip to accommodate the sound track.
I'm really surprised that an I-dio-T like me can create a digital story. :) It's very encouraging. I'm sure my students can create a lot more fun and 'wow' projects and it'll be interesting to learn about their process, the decisions they make and rationale for the selection of certain modes and the meaning afforded to them.
Sunday, March 15, 2009
My First Digital Story Creation Adventure
The initial challenge I faced in my digital story project was my trying to come up with the story itself. I do agree with Ohler’s “focus on (the) story first, technology second, and everything will fall into place”. I was stuck at the storyline phase for a couple of days as I stared at the various story maps in Ohler’s book. In the end, I decided to approach it from the Purpose-Audience-Context perspective. Then, the magic started working.
With my story in mind, I started looking up videos and photos in my library. Thankfully, my Mac does a fantastic job of organizing them. Then, based on the focus of my story, I started dragging and dropping the relevant video segments and still images into the iMovie timeline just to gauge the duration of my story. It was a rigorous decision-making exercise, which required lots of cutting, chopping and trimming. However, I’m very grateful that iMovie was idiot-proof. I only had to access iPhoto and iTunes for my resources and I couldn’t believe that I could whip something up rather easily. Of course, the editing part required some patience with the mouse. I also used two photos from Google Images and wondered if I had the rights to use them. Maybe I should cite my sources under credits, just like in a film?
Satisfied with the clips I was going to use to represent the various aspects of my story, I went on to music. It was much easier at this stage. I chose the song which meant most to me in various ways and I felt that it fitted the mood I was trying to create.
Afterwards, I played around with transitions and effects, which are limited to whatever comes with the software. Maybe there are ways to create or import more sophisticated ones from elsewhere, but I don’t know how to do that. Well, for now, I shall take baby steps, focus on the modes I’m using (no matter how ordinary they may seem) and try to get the best out of them.
Next challenge… recording the script to ensure that my narration fits… I’m already feeling the jitters… I sound horrible on recordings and I have to be very careful with my pronunciation, tone, tempo… yikes!
Saturday, March 7, 2009
More on Multimodality & Multiliteracies
We had a live experience of the constraints of working with a mode - drawing a mind map - in last week's seminar.
To begin with, my mind just got stuck when I heard 'mind map', as I often find this mode somewhat restrictive for sharing and presenting ideas or concepts. For one, mind-mapping is very linear in that one item moves on to another in a multi-level way, e.g. A to B to C, then to D. Like others, I don't always think in this manner; sometimes I may think of A, then jump to D and come back to B and C later on.
Some of us felt that certain key terms and concepts could appear or be presented in different parts of the map. As Laura shared, some of us had trouble with 'epistemological commitment', that is, trouble deciding where to put certain terms on the map. Perhaps, instead of using the glue given, we could have written the terms out and been less restricted in illustrating connections.
This experience reminds me of the sort of frustration one might have in writing about or describing something on a computer screen. A page on the Web is usually not static, so explaining it to someone in person is viable, as a demonstration helps him visualize and understand better. However, explaining it on paper is another matter. Sometimes it is not possible, or easy, to capture a screen and explain it at the same time.
On a related but different train of thought... I do agree that it is important to teach semiotic awareness and let learners explore the affordances of different modes and designs. Still, I'm starting to have some concerns, especially as we approach our DST project. Isn't design, like art, subjective? How do we evaluate it then? How do we learn to appreciate it if we're not familiar with its culture?
More discoveries on the way...
Friday, February 20, 2009
Multimodality and Multiliteracies
According to Jewitt (2008), the basic assumption of multimodality is that, apart from language, meanings are made through organized sets of semiotic (representational and communicational) resources called modes. She adds that 'no one mode stands alone in the process of meaning; rather, each plays a discrete role in the whole' (p. 247).
The Real Slum of 'Slumdog'
This video comprises a number of modes that make it a multimodal text: speech, still images, video, colours, text/captions, music and sound.
Each of the modes adopted has a clear purpose and together they have a powerful impact on the audience. The images and video accompanying the narration help to engage the audience and to motivate them to follow the story. The captions make it easy for the audience to follow the interviewee's speech and, to an extent, the images duplicate the interviewee's speech and help reinforce the story.
The orchestration of modes in this multimodal text is powerful: the images, sounds, speech and text, capitalizing on the senses, influence the audience to think about what will happen to Dharavi when developers move in to change the landscape. As such, the intended meaning (to inform) and the received meaning (to anticipate changes) of the text are likely to align, through the affordances and aptness of fit of the modes used.
UN World Food Programme

In the advertisement above, text, image and colour are put together to project a powerful meaning - that of starvation. The earth and puddle, fork and soup spoon are used metaphorically to represent food as seen through the eyes of those who are starving. The earth and puddle have been transformed and transduced into a soup plate and soup respectively. This is not a usual or normal context for the audience, and yet the ad cleverly captures the audience's attention and communicates its message.
Beyond the ability to read and write print-based materials, the increasing cultural and linguistic diversity in the world and the pervasive presence of new media and images require our learners to know how to decide which digital resources to use, and to judge the aptness of fit of particular modes of representation for fulfilling communicative functions.