G3ict is the Global Initiative for Inclusive ICTs

Robert Pearson

Accessible Media



10/01/2015

Visual Emotional Learning and Inclusion


Can advances in television content accessibility lead the way towards inclusive emotional learning? Robert Pearson believes they can.

Image: To now focus solely on the emotional accessibility of your media through the use of description is limiting, as the definition of media and its prolific availability is ever expanding.

My three-month-old son Maxwell has begun to smile. He knows Mom and Dad and even his sister, and has a tiny, cute grin that starts as a side smile and grows into a happy giggle that includes his big blue eyes. As a parent, I spend endless hours observing my son and thinking about his expressions and interactions with his family. For instance, does he look for a cue to begin smiling, or is it an emotional impulse, expressed when he feels happy? Or perhaps it's simply a baby thing.

I mimed a smiling face at him a few times, as well as smiling with my eyes, and he responded happily. At other times, when I only showed a grin or tickled him under the chin, he seemed unsure. Visually, he appeared to be making a correlation between smiling and delight, using many senses to experience something in a manner similar to those around him.

I am reminded of a blog post I wrote in 2013, "How Emotionally Accessible is Your Media?" At the time, I focused on the inclusion of audio description to facilitate a similar emotional response for those who could not otherwise hear or see content being displayed on television: for example, watching a gold-medal hockey game, or something more sombre, such as a state funeral.

Technologies evolve and things have come a long way in the last two years. To now focus solely on the emotional accessibility of your media through the use of description is limiting, as the definition of media and its prolific availability is ever expanding.

For example, consider the Internet of Things, or as I prefer, the Internet of Everything. We connect everything we use, from the clothing we wear, to the cars we drive, to the appliances where we store our food, into the virtual network that envelops us. As a result, media now encompasses everything from your current GPS position and the temperature outside to multimedia content and networking with people.

There was a recent advertising campaign for a new smartphone app developed by Listerine that allows a person to "feel" a smile. It's an example of the interpretation of emotion using technology as a medium, what I consider visual emotional learning leading to inclusion. How would we accomplish this within a standard broadcasting environment? Historically, description and captioning have been the only means by which accessibility accommodations were possible on TV programs. Consider, though, a recent major product announcement declaring that the "future of TV is apps". If TV viewing is navigated through apps and coordinated through a virtual assistant, are we now also discovering a means by which we can dramatically increase the emotional accessibility of standard broadcast media?

The first step in such an evolution would be voiceover compliance. It also raises the question of whether any virtual assistant will become our first form of artificial intelligence. Such an assistant may again become a means by which technology can substitute for sensory perception or ability, or even offer guidance on typical emotional responses based upon those substitutions, when consuming media.


Related Resources

Blog: Where Does Accessibility Begin for the Internet of Things Ecosystem | Read Robert Pearson's Article.

Publication: Internet of Things: New Promises for Persons with Disabilities | Download Free PDF.

Event: DEEP 2015: Designing Enabling Economies and Policies | October 14-15, Toronto | View Event Details.



Related Items:

• Providing Education by Bringing Learning Environments to Students (PEBBLES)

•  AAPD LAUDS UNITED NATIONS G3ICT INITIATIVE FOR INCLUDING DISABILITY PERSPECTIVES AT DIGITAL CITIES WIRELESS ROUNDTABLE

• G3ict Expresses Heartfelt Condolences on the Demise of AMI Accessibility Officer Robert Pearson

• Nominations Open for U.S. FCC Chairman’s Award for Advancement in Accessibility (AAA)

• ITU Workshop on Making Media Accessible to All: Options and Economics, Geneva, Switzerland


Comments

CCAC Captioning
Bravo. We've referred to the emotional side of everyday life as "relationship" and argued that you cannot achieve relationship without meaningful communication. Technologies are communications more than information alone. And this is a stretch, yet we now wonder if, somehow, "live events" (or "life events") might also be included in this broader use of the term "media". If yes, this assigns live captioning (speech-to-text) to the media of everyday life, needed by so many millions globally. Current systems are usually automated with a combination of human and machine intelligence and action. This also might garner more of the understanding and respect needed to ensure that we all have the resources (technologies) every day for access and inclusion. LS, CCACaptioning dot org
12:31 PM, 10/06/2015
