This week I made my first YouTube video. As an exercise in my Computers and Writing course, I created a script and, working in iMovie, spliced in images and (minimally) edited my materials, all in the service of exploring YouTube’s closed captioning features. While my video is very rudimentary, it serves the purpose of this exercise: to see how far off-script the automatic closed captioning actually strays from the recorded audio.

My classmates discovered found poetry in their closed captions: the text produced by YouTube was full of odd, sometimes poetic, often humorous errors in transcription. My video? The captions are oddly close to the script. In fact, there were only nine somewhat significant changes (anything more than a tense shift) in the entire video, and while these variations do not make much sense, I would suggest that none of them drastically affects the understanding of the video.

My script was a response to an article entitled “Recovering Delivery for Digital Rhetoric” by James E. Porter. Take a look below, and be sure to click the CC button once you hit play to see the captions.

The closed captioning struggled each time I used the author’s name, responding differently at each occurrence. Here is a breakdown of all the deviations between the original script and the automatic closed captioning:

Original Script       YouTube CC
Porter’s              car Porter’s
As Porter tracks      it’s pretty tracks
Of                    and
A new genre           you john request
Modes                 modems
As Porter             escorted
Porter                clear
Snide comment         side comment
New policy            team policy

I made no attempt to speak clearly or produce an easy-to-caption video, but I also didn’t try to confuse the captioning feature. My close-to-accurate project, especially in conjunction with my classmates’ projects, definitely calls into question the reliability of accessibility software such as closed captioning. Why did my project serve as a fairly accurate representation, whereas other projects deviated significantly? What is the variable that produces either authentic or inauthentic text?
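This wasn’t part of the assignment, but if you wanted to quantify that drift instead of eyeballing it, here is a minimal sketch in Python using the standard-library difflib module. The sample strings are just illustrative fragments stitched together from the table above, not my actual script or caption track.

```python
# Minimal sketch: compare a script against its auto-generated captions
# word by word. The strings below are illustrative fragments pieced
# together from the table above, not the full script or caption text.
from difflib import SequenceMatcher

script = "Porter's snide comment about the new policy"
captions = "car Porter's side comment about the team policy"

script_words = script.lower().split()
caption_words = captions.lower().split()

matcher = SequenceMatcher(None, script_words, caption_words)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        original = " ".join(script_words[i1:i2])
        captioned = " ".join(caption_words[j1:j2])
        print(f"{tag}: {original!r} -> {captioned!r}")

# ratio() gives a rough similarity score (1.0 would be a perfect match).
print(f"Similarity: {matcher.ratio():.0%}")
```

Run on a full script and caption track, something like this would give each video a single similarity score, which is one way to start pinning down the variable behind the authentic versus inauthentic results.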

While slightly disappointed by the lack of humor in my closed-captioned video, I managed to stumble upon Rhett and Link, who have produced several humorous caption-fail videos (among other entertaining videos). I have included one here for your viewing pleasure… enjoy!
