
Canadian Network for Inclusive Cultural Exchange



Acknowledgements
1 General Guidelines for Inclusive New Media Cultural Content
	1.1 INTRODUCTION
		1.1.1 Scope
		1.1.2 Terms Defined
		1.1.3 Discussion of Disability Culture
	1.2 EXISTING PRINCIPLES OF ONLINE ACCESSIBILITY
		1.2.1 Accessibility Guidelines
			1.2.1.1 General Web Content Accessibility Guidelines
			1.2.1.2 Authoring Tools and User Agents Guidelines
			1.2.1.3 Language or Format Specific Accessibility Guidelines
			1.2.1.4 General Software Accessibility Guidelines
			1.2.1.5 Operating System Specific Accessibility Guidelines
			1.2.1.6 Education Focussed Accessibility Guidelines
		1.2.2 XML and Interoperable Information
		1.2.3 Accessibility Focussed Metadata and Information Architecture
		1.2.4 Inclusive Usability Evaluation Methods
	1.3 KEY BENEFITS OF ACCESSIBLE DESIGN FOR ONLINE CULTURAL CONTENT
		1.3.1 Accessible Equivalents for Deliberately Challenging Interfaces
		1.3.2 The Role of Aesthetics
		1.3.3 Entertainment and Engagement Values
		1.3.4 Perspective: A Cultural and Technical Consideration
		1.3.5 Interpretation
	1.4 MOVING ARTWORK ACROSS MODALITIES
		1.4.1 Translating Emotion Across Modalities
		1.4.2 Provide Multiple Perspectives
		1.4.3 Consider the Presentation Context
		1.4.4 Consider Cultural Differences
		1.4.5 Integrate with Workflow
	1.5 INTRODUCTION TO ONLINE CONTENT MODALITIES
		1.5.1 Visual Modality
		1.5.2 Audio Modality
		1.5.3 Haptic Modality
			1.5.3.1 Haptic Devices
		1.5.4 Language Modality
	1.6 MODALITY TRANSLATIONS
		1.6.1 Twelve Possible Modality Translations
		1.6.2 Alternative Modality Equivalents of Visual Content
			1.6.2.1 Visuals to Language
				General Techniques for using Language to Describe Visuals
			1.6.2.2 Visuals to Audio
				Video Descriptions
				Music
				Sound Effects
				Automated Auditory Displays
			1.6.2.3 Visuals to Haptics
		1.6.3 Alternative Modality Equivalents of Audio Content
			1.6.3.1 Audio to Language
				Transcription of Dialogue (Captions)
				Description of Music
				Description of Sound Effects
			1.6.3.2 Audio to Visuals
				ASL Translation of Speech
				Graphical Component of Enhanced Captions
				Visual Displays of Music or Sound Effects
			1.6.3.3 Audio to Haptics
		1.6.4 Accessible Collaborative Tools
			1.6.4.1 A-Chat: Accessible Online Chat Tool
				Key Features
				Screenshots
				Suggested Use of A-Chat
			1.6.4.2 A-Comm: Client-Side Chat/Whiteboard Tool
				Key Features
				Screenshot
				Suggested Description Strategies
				Using Pre-Authored Drawing Objects
		1.6.5 Ensuring Accessible Control
			1.6.5.1 Keyboard Accessibility
			1.6.5.2 Mouse Adaptation
			1.6.5.3 Voice Recognition
			1.6.5.4 Gestural Interfaces
	1.7 CONCLUSION
2 Online Enhanced Captioning
	2.1 THE CURRENT STATE OF CAPTIONING AND DESCRIPTIVE VIDEO ON THE WEB
	2.2 THERE'S NO ONE STANDARD FOR CAPTIONING AND DESCRIPTION ON THE WEB
	2.3 STYLE GUIDES FOR CAPTIONING AND DESCRIBING ON THE WEB
	2.4 TOOLS FOR CAPTIONING AND DESCRIBING ON THE WEB
	2.5 POSSIBILITIES FOR CAPTIONING AND DESCRIBING ON THE WEB
		2.5.1 Captioning and the Web
		2.5.2 Description and the Web
	2.6 TAKING TIME TO ADD ACCESS TO THE WEB
	2.7 THE CAPSCRIBE TOOL
	2.8 ONLINE CAPTIONING OF FLASH
		2.8.1 The Flash Captioning Tool
		2.8.2 Using the marblemedia Flash Captioning Tool
		2.8.3 Techniques, Usability Issues and Further Considerations
		2.8.4 Summary
3 Online Video Description
	3.1 INTRODUCTION TO VIDEO DESCRIPTION
		3.1.1 Real-Time Online Video Description
		3.1.2 System Description
		3.1.3 Procedure for Describing a Live Stream
		3.1.4 Preferences
		3.1.5 Technical Difficulties
	3.2 DISCUSSION AND RECOMMENDATIONS
4 Remote Real-Time ASL Interpretation
	4.1 INTRODUCTION
	4.2 TECHNOLOGY OVERVIEW
		4.2.1 Connecting IP technologies with ISDN
		4.2.2 Bandwidth recommendations
		4.2.3 Applications
		4.2.4 Physical space/Room technologies
		4.2.5 Environmental Considerations
		4.2.6 Future Considerations
	4.3 TECHNOLOGY ISSUES RELATED TO ACCESSIBILITY
		4.3.1 Video Conferencing and Its Use with Sign Language Interpreters for People Who Are Deaf
			4.3.1.1 Sign Language Interpretation
			4.3.1.2 Video Remote Interpreting
			4.3.1.3 Challenges of Video Remote Interpreting
			4.3.1.4 Considerations for Video Remote Interpreting
				Eye contact/gaze
				Seating
				Environmental and technical issues
				Detailed Examples of Visual Noise
				Other physical factors to consider
				Turn-taking
				Confidentiality
				Special Considerations for managing multipoint or multi-application conferencing
				Other factors
			4.3.1.5 Considerations for Remote Sign Language Interpreters
				Preparation
				During the video conference
				Processing in public
				Lag Time
				Interpreters, deaf people and hearing people at several sites
				Interrupting the speaker (either deaf or hearing)
				Reduced Signing Space
				Auditory Referencing
				Team Interpreting
				Deaf Interpreter
			4.3.1.6 Skills Needed for Remote Interpreters
				Other factors to consider
	4.4 ILLUSTRATIVE CASES AND PERSONAL ACCOUNTS HIGHLIGHTING ISSUES
		4.4.1 Personal Account: My First Impressions of Videoconferencing
		4.4.2 Personal Account: My First Impressions of Videoconferencing
		4.4.3 Personal Account: Web Applications Specialist
	4.5 USER ENVIRONMENT SCENARIOS
		4.5.1 Health Care Environments
			4.5.1.1 Tasks
			4.5.1.2 Physical scenarios
			4.5.1.3 Issues unique to health care settings
				Physical Positioning
				Sign language production
				Socio/Political
			4.5.1.4 Guidelines and Protocols specific to Health Scenarios
		4.5.2 Education Environments
			4.5.2.1 Typical tasks
			4.5.2.2 Physical scenarios
			4.5.2.3 Issues unique to education settings
				Turn-taking
				Shared applications
				Other important factors
			4.5.2.4 Guidelines and Recommendations specific to education
		4.5.3 Business Environments
			4.5.3.1 Tasks
			4.5.3.2 Scenarios
				One to One Meetings
				Small Group Meeting
				Large meetings
			4.5.3.3 Issues unique to meeting settings
				Turn-taking
				Visual materials
				Socio/political
				Technology alternatives
			4.5.3.4 Guidelines and Recommendations specific to meetings
	4.6 LIVE CAPTIONING: AN OVERVIEW
	4.7 RECOMMENDATIONS AND GUIDELINES - SUMMARY
	4.8 GLOSSARY OF TERMS
5 Representations of Visual Geo-Spatial Information
	5.1 INTRODUCTION
	5.2 2D CONTENT
		5.2.1 Translations: Visual to Sound
			5.2.1.1 Spatialized Audio
			5.2.1.2 The Aural Legend
			5.2.1.3 Information Scalability
			5.2.1.4 Translation Algorithms
		5.2.2 Translations: Visual to Touch
			5.2.2.1 Full Force and Tactile Feedback Distinctions
			5.2.2.2 Haptic Effects Provided by Immersion Corporation
			5.2.2.3 Tactile Effects Applied to Geospatial Content
		5.2.3 Format: Scalable Vector Graphics (SVG)
			5.2.3.1 Resources
			5.2.3.2 Methodology
			5.2.3.3 Code Sample 1: An SVG that draws a filled red circle.
			5.2.3.4 Code Sample 2: Using the Sample 1 code, group objects together, add sound and assign JavaScript functions as needed (shown in bold).
			5.2.3.5 Code Sample 3: The minimum PHP code required to implement haptics with SVG.
			5.2.3.6 GIS Applications and SVG for Government On-Line (GOL) Content
	5.3 3D CONTENT
		5.3.1 Transformations: Visual to Sound
			5.3.1.1 Visual to Touch
			5.3.1.2 Visual to Multimodal
		5.3.2 Format: Web3D - Virtual Reality Modeling Language (VRML)/X3D
			5.3.2.1 Methodologies
			5.3.2.2 Code Sample 1: Initial PHP with JavaScript to enable Immersion effects
			5.3.2.3 Code Sample 2: VRML generating a red sphere that will detect collisions
			5.3.2.4 Code Sample 3: Java applet used with the VRML from Step 2 to detect collisions
			5.3.2.5 Code Sample 4: Embedding the VRML and applet into the PHP from Step 1
			5.3.2.6 VRML/X3D and GIS
				GeoVRML
				Resources
		5.3.3 Navigation and Way-finding Guidelines
				Learning about an Environment
				Landmark Types and Functions
				Landmark Composition
				Landmarks in Natural Environments
				Combining Paths and Landmarks: Landmark Placement
				Using a Grid
	5.4 MODALITY COMPLEMENTARITY
	5.5 APPENDIX: CODE SAMPLE FOR ADDING BOTH THE IMMERSION WEB PLUG-IN AND USER ENVIRONMENT DETECTION TO PHP
References


Perspectives:
BOX 1: ACCESSIBILITY FROM AN ARTIST'S PERSPECTIVE BY SARA DIAMOND
BOX 2: AN ASL CONTENT DEVELOPER'S PERSPECTIVE BY ELLE GADSBY


Figures:

FIGURE 1: POTENTIAL MODALITY TRANSLATIONS.
FIGURE 2: LANGUAGE BRIDGES ALL THREE COMPUTER MODALITIES AND IS A FLEXIBLE MEDIUM FOR MODALITY TRANSLATION.
FIGURE 3: THIS SCREENSHOT SHOWS ONE OF THE A-CHAT USER PREFERENCE SCREENS.
FIGURE 4: THIS SCREENSHOT SHOWS THE MAIN A-CHAT SCREEN. THE MESSAGE AREA IS ON THE TOP-LEFT, THE COMPOSE MESSAGE AREA UNDER IT, AND THE OPTIONS, USER LIST AND HISTORY AREAS ON THE RIGHT.
FIGURE 5: AN EXAMPLE OF A TEACHER-LED LESSON USING DRAWINGS TO DEMONSTRATE A CONCEPT; THREE STUDENTS ("AMANDA", "DAVID" AND "CATHY") SUBMIT PEER DESCRIPTIONS. THE MOST RECENTLY DRAWN SHAPE, AN ELLIPSE, HAS NOT YET BEEN DESCRIBED.
FIGURE 6: EXAMPLE OF CAPTIONED STREAMED VIDEO.
FIGURE 7: SAMPLE OF CAPTIONED PBS VIDEO.
FIGURE 8: EXAMPLE OF ANNOTATIONS WITH CAPTIONS.
FIGURE 9: EXAMPLE OF TRADITIONAL APPEARANCE, MIXED CASE CAPTIONING.
FIGURE 10: EXAMPLE OF CAPTIONS WITH ENHANCED FONT STYLES.
FIGURE 11: EXAMPLE OF ENHANCED CAPTION FONT STYLE USED TO CONVEY TONE AND ENERGY.
FIGURE 12: EXAMPLE OF MULTILINGUAL CAPTIONS.
FIGURE 13: EXAMPLE OF GRAPHICS TO CONVEY LANGUAGE AND SOUNDS.
FIGURE 14: EXAMPLE OF GRAPHICS TO CONVEY SILENCES AND PAUSES.
FIGURE 15: EXAMPLE OF ANIMATION TO CONVEY SOUND ELEMENTS OF VIDEO.
FIGURE 16: CAPSCRIBE TOOL.
FIGURE 17: SILENCE INDICATORS THAT MOVE HORIZONTALLY AS THE VIDEO PROGRESSES.
FIGURE 18: DESCRIBER CONTROL AND STATUS INDICATORS.
FIGURE 19: DESCRIBER INTERFACE.
FIGURE 20: PREFERENCES SCREEN.
FIGURE 21: DIFFERENT FURNITURE AND PLACEMENT CONFIGURATIONS.


Tables:

TABLE 1: USABILITY EVALUATION METHODS WEB RESOURCES.
TABLE 2: SUGGESTED SEATING ARRANGEMENTS FOR ALL PARTICIPANTS.
TABLE 3: SUMMARY OF TECHNOLOGY CONSIDERATIONS FOR VIDEO CONFERENCES INVOLVING HEARING AND DEAF PARTICIPANTS.
TABLE 4: BEHAVIOURAL, COMMUNICATION AND ETIQUETTE ISSUES.
TABLE 5: REAL WORLD OR AURAL METAPHOR SOUNDS FOR AN INTERSECTION MAP.
TABLE 6: ABSTRACT SOUNDS FOR AN INTERSECTION MAP.
TABLE 7: GEOPOLITICAL MAP TRANSLATION MATRIX.
TABLE 8: TRANSLATION ALGORITHM FOR REPRESENTING TYPICAL MAP ELEMENTS WITH HAPTIC EFFECTS.
TABLE 9: FIVE LANDMARK TYPES.
TABLE 10: SAMPLE LANDMARK ITEMS.



We acknowledge the financial support of the Department of Canadian Heritage through the Canadian Culture Online Program.

