Archive for the ‘multimedia’ Tag
Just posting a note that I heard from a CNET representative regarding my inability to uncover closed captioned videos at their site. Apparently, a problem had temporarily prevented captions from displaying. Sure enough, when I returned to the CNET site today, it didn’t take long to find a video with CC. For fun, here’s an overview of the T-Mobile G1 with Google – the “world’s first Android-powered mobile phone.” (Choose the Product Videos tab. To view captions, choose the CC button on the player.)
Have I ever learned a lot about captioning video for the Web. In a previous post, I provided a list of captioning services for hire (AKA “outsourcing” your captioning needs). Since then, I have acquired and accumulated information that resides in my browser bookmarks, my Delicious account, CDs, TextEdit docs, notebooks, Stickies on my desktop, even scraps of paper in my briefcase. I’ve finally had a chance to filter through it all and regurgitate it here. Suffice it to say that the process of choosing how to get your media captioned is complicated, and trying to summarize it feels like I’m chasing my own tail. My purpose in this post is to report some options for self-captioning (prefaced with words of caution), followed by a “well, if I were you…” lecture in my next post.
OH! Before moving on, I must digress to additional interesting points about the benefits of captioning. If you have followed this blog (or listened to the Accessibility in the MLTI series on the Maine Department of Education iTunes U site…or are just plain smart about accessibility), you know that captions are much more than a service for people who are deaf or hard of hearing. Here are two more bullets on the list of benefits: Searchability and Navigability.
“Searchability” because search engines (e.g., Google) can’t index the spoken content of video/audio files on the Web. If a file is captioned or transcribed, however, then the engine can uncover it and list it in the search results. Kevin Erler of Automatic Sync Technologies (AST) recently reported that after CNET hired AST to caption its media, its hits on Google increased by 30%. (I spent some time at CNET TV and could not uncover one video that is closed-captioned. Their player has a “CC” button, but I was continuously greeted with a message stating “Sorry, closed captions are not available for this video.”)
The appeal of “navigability” is not unlike the benefit of searchability: Searching text (captions) within a video allows you to navigate from one point to another. Captions are kind of like “tagging” the video content. This, of course, is most relevant to lengthy video clips.
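To make the searchability and navigability ideas concrete, here’s a minimal sketch (the caption lines and search term are made up for illustration; a real search engine or player would use its own caption format) of how timestamped caption text lets software “find” moments inside a video:

```python
# A minimal sketch of caption search: each caption is a
# (start_seconds, text) pair, roughly as a player might store them.
captions = [
    (0.0, "Welcome to our overview of the T-Mobile G1."),
    (12.5, "The G1 is the world's first Android-powered phone."),
    (31.0, "Let's look at the touchscreen keyboard."),
]

def find_caption_times(captions, term):
    """Return the start times of captions containing the search term."""
    term = term.lower()
    return [start for start, text in captions if term in text.lower()]

# A search engine can now surface the video for this query, and a
# player could jump straight to the matching moment:
print(find_caption_times(captions, "android"))  # [12.5]
```

This is exactly why a captioned video shows up in Google results while an uncaptioned one is invisible: the text gives software something to match against.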
What I’ve learned about self-captioning (AKA “insourcing”) is that, well, it’s tough to make an argument for it. Insourcing means that you use your own process for captioning the video that you and/or your students create. This means time and labor, and if you plan to employ students, money. The time and labor are primarily a product of the generation of the video transcript, which can be painstaking and mind-numbing. Shortcuts are strongly discouraged as the quality of the transcript forms the foundation for the quality of the captioning. And speech recognition software (e.g., MacSpeech Dictate), although highly accurate for sitting at your computer and talking purposefully, is not so effective when it comes to capturing spoken language during more informal and impromptu situations common to making movies.
Another pitfall to self-captioning is having to know the technology. I recently learned that you caption for the player, not the media file. I’m no techie, but that was a real eye-opener for me. Furthermore, many players don’t support captioning.
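The “caption for the player, not the media file” point is easier to see with an example. Captions typically live in a separate “sidecar” text file that the player loads alongside the video; the video file itself is untouched. SRT is one common such format (support varies by player), and a minimal sketch of reading one looks like this:

```python
# A minimal SRT-style caption file: numbered cues, a time range,
# then the caption text. Note the video file is never modified --
# the *player* pairs this text with the video at playback time.
srt = """1
00:00:00,000 --> 00:00:04,000
Welcome to the demo.

2
00:00:04,000 --> 00:00:09,500
Captions live in this separate file.
"""

def parse_srt(text):
    """Parse SRT-style text into (start, end, caption) tuples."""
    cues = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        start, end = lines[1].split(" --> ")
        cues.append((start, end, " ".join(lines[2:])))
    return cues

for start, end, caption in parse_srt(srt):
    print(start, "-", caption)
```

So whether your captions show up at all depends on whether the player knows how to load and render a file like this, which is why player support matters so much.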
Having said what I needed to say about insourcing your closed captioning needs for the video created in your classrooms, if you choose to explore your options, there are many. I have learned of several that I feel comfortable enough to share (i.e., I know enough to be dangerous rather than reckless):
YouTube As of August 28, YouTube supports captioning. While recognizing the need to support its viewers who are deaf and hard of hearing, YouTube is also strategically marketing for multiple languages by referring to “subtitles” (120 languages are available).
Flash I must admit that I never considered Flash to be of the accessible type. I don’t think it always has been, but Adobe Flash CS3 has a built-in captioning component. Adobe provides a list of tools and services for adding captions to Flash video.
MAGpie (also enables audio description)
MacCaption for Final Cut Pro or any Non-Linear Editing (NLE) system
So that’s what I’ve got. Do I understand all of this? No. And frankly, I don’t care to. And I doubt many teachers will care to go down multiple roads only to turn around and start over at the original intersection to try yet another. In my next post I’ll propose a possible workflow for schools to get their media captioned and up on the Web in a timely, efficient, and cost-effective manner.
A few months back I wrote a post regarding the (lack of) accessibility of Web 2.0 media. That post generated a couple of helpful comments, and I’ve also had the opportunity to collect additional information. Here’s an update.
Recall that “accessible” media means that the content can be interpreted by all users. For example, a video that is captioned is accessible to viewers who are deaf or hard-of-hearing, as well as English Language Learners. Indeed, ongoing research has shown that the use of captions can improve literacy skills, including comprehension, for many kinds of learners. Another example of making video accessible is known as “description.” A described video is one that is narrated for individuals who are blind or have low vision. During scenes that don’t include dialog or other audio cues, a narrator describes what is happening onscreen. Because video description (aka “audio description”) can supplement and embellish what viewers see, it is yet another literacy tool. Here’s an exemplar of a fully accessible segment of The Lion King. You’ve never experienced The Lion King like this!
You can search for more accessible DVDs on the Web. Nearly all (if not all) NOVA videos are available with captions and video descriptions. You may be surprised that some of the DVDs you are currently using are fully accessible. Start by searching the WGBH Media Access Group Accessible DVD collection. Another collection is offered by the Described and Captioned Media Program, which has a free-loan library for qualifying students.
While video description is not mandated, laws related to television closed captioning have been in existence since 1990. Those laws have not yet made the transition to the Web, but disability advocates have tried to keep pace. In fact, the Internet Captioning Forum (ICF) is a collaboration among the most popular Web media industries and the National Center for Accessible Media (NCAM). Additionally, Section 508 of the Rehabilitation Act, a 1998 amendment, is a motivating factor for organizations that want to compete for Federal government contracts.
Currently, two ways exist for you to get your classroom video captioned: send it out to a professional or do it yourself. In today’s post, I’ll describe some organizations for hire. My next post will provide a list of options for folks who are technically inclined to venture out on their own captioning experiments.
Automatic Sync Technologies (AST) offers automated captioning services for everything from videotapes to streaming media video searches (if videos are captioned, you can do text searches of them!). These folks will take your media (video, podcast, webcast…), caption it, and return it to you. Education rates are available and very, very reasonable (if you have a transcript it’s even cheaper). Turnaround times are also highly impressive. An invitation was recently distributed by MaineCITE (Maine’s AT Act project) for a free workshop delivered by an AST member to be held in Portland on September 11. The invitation is attached here: mecite_invitation
The National Captioning Institute (NCI) also offers its services for your Web video.
Computer Prompting & Captioning Co (CPC) is yet another.
Heck, you can even hire an organization to transcribe a video of an event in realtime, i.e., live streaming, which is formally known as Communication Access Realtime Translation (CART). Caption First is an example of this service.
Next up: Options for creating captions yourself.
My goodness. A plethora of free audio resources have become available on the Web and I’ve finally collected a selection in one place. I can’t claim that I’ve tried them all, but I can tell you a little about each (not that anyone can’t copy and paste from a Web site and do a little editing…).
First, here’s a list of sources that offer free audio books and other materials. It’s important to note that text transcripts of the audio files available at these sites are uncommon.
LearnOutLoud.com has 500+ free audio and video titles that have been collected from the Web. Their directory features free audio books, lectures, speeches, sermons, interviews, and many other great free audio and video resources. Most audio titles can be downloaded in digital formats such as MP3 and most video titles are available to stream online. The link above will take you directly to LearnOutLoud’s directory of free audio and video titles. Note that you can also purchase titles of materials that are in copyright and therefore not freely usable.
AudioBooksforFree.com has MP3, iPod, and DVD audio books (adventures, detectives, horrors, classics, children, non-fiction, philosophy, etc.) for download. According to its Web site, every audio book is produced and recorded by professional actors/narrators and experienced directors. The site is organized by category (fiction, children’s, non-fiction), as well as a listing of daily additions. If you’re looking to grow your own audio library in an instant, you can opt to purchase the whole site collection or bundles.
LoudLit.org offers public domain literature paired with audio performances. They set themselves apart from other sources that offer audio books by stating that “putting the text and audio together, readers can learn spelling, punctuation and paragraph structure by listening and reading masterpieces of the written word.” Their collection of children’s stories, poetry, short stories, novels, and history is small but growing. They actively seek donations on their homepage. For example, currently they’re raising money to complete the narration of The Scarlet Letter.
LibriVox has volunteers who record chapters of books in the public domain and release the audio files back onto the Web. According to their Web site, their goal is to make all public domain books available as free audio books. LibriVox is a totally volunteer, open source, free content, public domain project.
If you have your own digital text that you want to have read aloud, here are a couple of nifty and free online tools:
ReadTheWords.com is a free online text-to-audio conversion tool. Simply tell it what you want to have read aloud (upload a file) or copy and paste your plain text (up to 80,000 characters), choose a voice and reading rate, and…voilà…audio file. Multiple options for listening to the file are provided: listen online, download MP3, post it on your blog or Web site, or turn it into a podcast. Here’s an example of the transcript of the Sound of Learning video, the topic of my last post, read aloud by ReadTheWords.
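If your document is longer than that 80,000-character paste limit, you’ll need to split it up first. Here’s a small sketch of one way to do that (the limit comes from the site; the splitting approach and helper name are my own illustration, breaking at whitespace so words aren’t cut in half):

```python
def chunk_text(text, limit=80_000):
    """Split text into pieces no longer than `limit` characters,
    breaking at the last space before the limit where possible."""
    chunks = []
    while len(text) > limit:
        cut = text.rfind(" ", 0, limit)  # last space before the limit
        if cut == -1:  # no space found; hard-cut at the limit
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks

# A 250,000-character document becomes a handful of pasteable pieces:
parts = chunk_text("word " * 50_000)
print(len(parts), max(len(p) for p in parts) <= 80_000)
```

Then each piece can be pasted and converted separately, and the resulting audio files played back in order.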
An alternative to ReadTheWords is SpokenText.
Check them out, compare and contrast, implement.
For a long time I’ve been searching for a teacher-friendly tool for adding captions to video. Understandably, we tend to assume that video and media captions are provided solely for people who are deaf or hard of hearing. But “…research is examining the potential for captions as a learning tool for acquiring English-language and reading skills. These studies are looking at how captions can reinforce vocabulary, improve literacy, and help people learn the expressions and speech patterns of spoken English” (National Center for Accessible Media). In other words, captions may well be a literacy tool, even if that use isn’t obvious. The image on this post is a still shot of a captioned video that was produced by my colleagues at ALLTech for an online assistive technology course.
You know you’re obsessed with universal design when you approach a hip Web 2.0 technology through a lens of accessibility. And that’s exactly what happened to me earlier this morning. While catching up on some blogs that I’ve fallen behind on, I found Omnisio. According to its Web site, a user can annotate an uploaded video by adding “in-video comments.” If you go and browse* the videos, you might think I’m crazy to think that such a feature might be relevant as a substitute for captioning…but that’s never stopped me from making radical leaps to universal access. The “add-in comments” are meant as a participatory tool – a means by which Omnisio viewers can either cheer or heckle one’s uploaded video…on the video. This is a bit different from YouTube, on which viewers can comment positively or negatively on videos, but on a designated comments page. Omnisio also hosts a comment area for each video, which supplements the annotations that viewers can insert over the video itself.
What most intrigues me about the “add-in” feature is that it appears that a user can insert a comment anywhere they wish. This opens up the possibility that captions could be inserted in sync with a video.
If you go there and view a featured video or two (“Steve Ballmer-goes-nuts” is, well, “wow”), you’re really going to think I’m unhinged. But that didn’t stop me from sending Omnisio a question on their feedback link…the possibility of extending a feature designed for one purpose to a broader application. A unique inquiry, I’m sure!
*Omnisio videos require Adobe Flash Player 9, which is not installed on the MLTI laptops.
We’ve known for some time now that multimodal instruction (e.g., integrating multimedia) can be more effective than conveying content using single modes of instruction (e.g., lecture only). Followers of research are aware of the dangers of overloading our instruction with multimedia – like all good things that must be consumed in moderation.
But what does effective use of multimedia in instruction look like? Under what parameters are we being responsible conveyors of content via multimedia? A new report summarizes the findings of a review of multiple research studies. It’s fascinating in that it goes well beyond what we know about technology in education, extending to research on what goes on in individual learners’ minds as concepts and information are being processed under various conditions and modes of delivery.
The report, commissioned by Cisco Systems and prepared by the Metiri Group, is titled Multimodal Learning through Media: What the Research Says.
Here’s an excerpt that I especially like in the context of universal design:
“One of the bottlenecks to efficient learning is our own physiology – the way our brains are wired severely limits our capacity to learn. It is precisely this limitation that educators must overcome through informed design of learning environments, curricula, instruction, assessments, and resources. As they design lessons, create learning environments, and interact with students, they are seeking augmentations that accommodate for these human limitations. This is analogous to the design of machines (such as cars, tractors, elevators, robotic factories, can openers, stairs, etc.) used to accommodate for our severe physical strength and endurance limitations – only now we are augmenting intellectual capacity rather than physical capacity.” (p. 7).
And when it comes to those parameters that we’ve been seeking – i.e., How do I know when I’ve got the right balance of combining modes of presentation without under- or over-doing it? – here are some principles that the report cites from multiple studies (see pp. 12 & 13):
1. Multimedia Principle: Retention is improved through words and pictures rather than through words alone.
2. Spatial Contiguity Principle: Students learn better when corresponding words and pictures are presented near each other rather than far from each other on the page or screen.
3. Temporal Contiguity Principle: Students learn better when corresponding words and pictures are presented simultaneously rather than successively.
4. Coherence Principle: Students learn better when extraneous words, pictures, and sounds are excluded rather than included.
5. Modality Principle: Students learn better from animation and narration than from animation and on-screen text.
6. Redundancy Principle: Students learn better when information is not represented in more than one modality – redundancy interferes with learning.
7a. Individual Differences Principle: Design effects are higher for low-knowledge learners than for high-knowledge learners.
7b. Individual Differences Principle: Design effects are higher for high-spatial learners than for low-spatial learners.
8. Direct Manipulation Principle: As the complexity of the materials increases, the impact of direct manipulation of the learning materials (animation, pacing) on transfer also increases.
A finding of the report that can inform our work as instructors is that “Students engaged in learning that incorporates multimodal designs, on average, outperform students who learn using traditional approaches with single modes” (p. 13). A good reason to keep on keepin’ on!