Friday 30 November 2007

SLED Accessibility Threads

Following are excerpts from the SLED mailing list that deal with accessibility and visual impairment.


SLeek for the vision impaired?

Sean FitzGerald (seanf at tig.com.au) writes,

But the really neat thing about SLeek that leads me to think it may be useful for the vision impaired (if screen readers work with it) is that it has a "Follow" function that lets you nominate to hitch your av to another av in range... a virtual version of a vision impaired person holding the arm of a guide. It works quite well. Then the guide just has to describe the environment.



Blind people in SL - Idle speculation

Mike Reddy (mike.reddy at newport.ac.uk) writes,

This would be easier with a client SDK that could trap text, use text to speech and allow keyboard macros, but given the existing client could we not have a HUD or head mounted scripted object that 'spoke' information. Location, people's names as they came and went, object IDs. Within the current system, these would probably have to be pre-recorded and linked to specific text, say in a notecard. Alternatively, objects in an 'accessible' area could be able to self report, say if someone approached them within a certain distance for a certain time. This area could be made the home location for participants. We could even run a competition to design accessible vending machines that used sound instead/as well as text.

To aid people with visual impairments - most people who are blind aren't actually 'blind' - it would be great to have control over field of view in the client, which could effectively allow mouse view of a small angle to be the equivalent of a magnified image, much as PC viewing software allows the whole screen to be enlarged. Sadly, this would not easily include text. However, if we had a HUD object repeating any 'heard' text in the mouselook view, then even this might be possible. This would require chat in the mouselook view...

Ah well, maybe when I have a PhD student to throw at it...


Jeff Hiles (jeffrey.hiles at wright.edu) writes,

As Danielle said, right now you would have to pair a blind student with another student or with an assistant who could navigate, read, and describe what's on the screen. That's not unique to people with visual disabilities, though.

The visually impaired could participate more directly, though, if the SL client was accessible to screen readers. I know blind people who have embraced instant messaging with clients that work with JAWS. So, in theory, it would be possible for people who can't see to carry on their own text conversations in Second Life. That degree of independence, I think, would make the experience more immediate and immersive.

However, the Second Life client doesn't currently give screen reader access to chat or IM text. In fact, you can't even read the menus with JAWS. If the client did have that most basic accessibility--chat, IM and menus--blind users would still need some assistance getting around.


Accessibility

Lisa Dawley (lisadawley at boisestate.edu) writes,
I was doing a presentation in our amphitheater one day. A gentleman in a wheelchair asked me if I could make the stadium accessible, because there wasn't a seat large enough for him to "land" his wheelchair and he had to float.


Second life for the visually impaired

Roome, Thomas C (thomas.roome at student.utdallas.edu) writes,

In the near future the Internet will make a shift from web sites to 3D environments. The same information that is on a web site can be made available to people in a 3D environment, but the question is how a 3D environment can be accessible to people with disabilities. The UTD Accessibility Island will be trying to find answers to this question. One of the island's goals is to provide information on video game accessibility and general information on different disabilities. Another goal is to create a conference center for people to discuss different topics around accessibility. The last major goal of the island is to provide some land for research and development, and I want to form an in-world research team of scripters, programmers, educators and people with disabilities. If you would like to become a research team member, then please contact Tom06 Castro or e-mail thomas.roome at student.utdallas.edu.



Further thoughts on people with visual disabilities in Second Life

Jeff Hiles (jeffrey.hiles at wright.edu) writes,

When I work with JAWS users in real life, they sometimes ask me to give them my arm and guide them where they need to go. What if you could "give an arm" in Second Life and lead someone around? Better yet, what if you could do that invisibly so no one else in Second Life knew you were there? The key would be for you to be able to guide someone remotely, without having to be in the same room as the person you were guiding.

For example, as a guide, you would have the ability move your friend's avatar through Second Life, and to see what that avatar would see. But your friend would have control of chat and IM. From your computer, you would move the avatar through Second Life wherever your friend asked you to take it. The two of you would communicate by voice, say through Skype, and you would describe everything you saw.


Danielle Mirliss (dmirliss at yahoo.com, Danielle Damone in SL) also comments,

I also work closely with several students on my campus that are blind and they would be willing to give us feedback on the experience.

Disability in SL

The BBC disability website Ouch! discusses some of the appeal of SL in Staying in is the New Going Out:
A new nightclub called Wheelies officially opens its doors this Friday, the 1st September, at 9pm UK time.

Owned by Simon Stevens, who has cerebral palsy, Wheelies aims to make guests feel comfortable about disability as well as dancing, drinking and just plain having a good time.

And in the comments, Kopilo Hallard quite rightly says,
The point is that he couldn't go out and socialise and SL gives him a platform so that he can meet his needs (ie socialisation) even in his current physical state.

This gives him an escape from reality, a breath from being physically unable to do things.

Besides that point, SL is a great way to network with people from all over the world, and to gain perspectives which may not be able to be gained in one's geographical region due to cultural, social or other constraints.

Also, SL gives developing artists (music, graphics, programming, etc.) a way of gaining more exposure than they could in their day-to-day lives, in a similar way to MySpace, except that the music can be played live.

Additionally, on the SLED mailing list, Jeff Hiles (Farley Scarborough in SL) recommends,
In addition to the many articles on the Web about Simon Stevens and his
Wheelies night club, you may want to look at Fez Rutherford's blog,
"2nDisability." He has created avatar animations that simulate disabilities.

Also, Cubey Terra has made three very nice wheelchairs that are available free at the GNUbie Store at Indigo. They are down the ramp and to the left.

http://slurl.com/secondlife/Indigo/195/62/40

Thursday 29 November 2007

Mailing Lists & Fora

Generally, accessibility issues seem to be dealt with at Linden Lab by John Lester, AKA Pathfinder Linden.

"Pathfinder Linden: well, I'm very interested in things we might be able to do from LL's perspective to make SL more accessible

Pathfinder Linden: so please hit me with recommendations :)"

The following resources might be useful,


SLED also has a forum with a thread called Supporting Visually Impaired Users, though it looks like it hasn't had any posts since January 2007.

From that forum Jonathon Richter has the following to say,
I concur that we ought to frame the problem in terms of the various affordances that Second Life as a medium allows its users - indeed, the benefits of SL over other types of learning media are precisely the selling points as to why we want accessibility to these incredible learning environments, yes? So, first - documenting the various affordances and the skills/inputs required to successfully navigate the media and receive said benefits is crucial.
And Jeff Farley has this to point out:
"There's a slogan in the disability-rights movement that goes 'Nothing about us without us.'"

Following are some extracts from the SLED mailing list that deal with visual impairment:

Different users of Second Life

Jeff Hiles writes,
But I think the most promising technological aid lies in ways to make SL chat accessible, since the SL client isn't accessible to screen readers.
If I had to accommodate someone today, I'd approach it like a stage production and provide a skilled audio describer. The describer might join the blind person or communicate through VOIP. The person could then tell the describer where to move the avatar and what to do with it, while the describer summed up the scenes along the way. If it was done right, perhaps no one in SL would know the person was blind.

Ideally, the person's screen reader would have access to his avatar's chat so that communication would be direct, not through the describer. I'm not sure if that level of accessibility is possible yet.


SimTeach hosts the Education Wiki and also recently published a transcript of a meeting called "Supporting Students With Disabilities", which was about
"discussing accessibility issues within Second Life, with a particular focus on how to best accommodate students with disabilities when SL is used for educational purposes"


This is the summary,

* The user interface and software of SL does not currently allow much freedom in how it is manipulated (e.g., mouse versus keyboard). The UI is also not currently JAWS-compliant. The use of XML-based user interfaces in future versions could provide great flexibility for tuning the software to a user's needs.

* The vast amount of visual information in SL is currently inaccessible to residents with visual disabilities. The addition of metadata (like the ALT and LONGDESC tags used for images in HTML) was suggested. While enforcing the inclusion of helpful metadata is tricky, it was agreed that educational builds at least should adhere to a standard.

* Regarding accommodations for a student with disabilities in SL, it was suggested that equivalent RL practices could be applied. A blind student might have a companion to assist him or her. This led to an interesting question regarding whether the companion or the student or both would have avatars in SL.



The following parts of the conversation have been extracted from their context and reassembled, with intervening and off-topic posts removed:

Kate Spatula: have anyone of you had an instance where a person with a disability, say visual issues, was involved in a class using SL?
buridan Simon: /not that i know of

Ali Andrews: not yet

Gus Plisskin: Kate: Not visual issues, but I've build footpedals for those with carpal tunnel who can't use mouse buttons.

Kate Spatula: so that's one concern... the interface is very mouse-heavy on here, isn't it?

buridan Simon: /mouse heavy as compared to?

Ali Andrews: especially when building

Janor Slichter: more keyboard commands to drive menus and actions?

Gus Plisskin: yes, but SL needs mouse heavy. An alternative would be very tough

Janor Slichter: the way gestures work in chat?

buridan Simon: i dunno, i find that i use the arrow keys a fair amount

buridan Simon: /and the tab

otakup0pe Neumann: Hello everyone. I know that lots of builders do just that

otakup0pe Neumann: Rez a cube, and use tab/arrow keys / numpad for the specifics

buridan Simon: /what would be nice is better proximity detection for friends and colleagues with audio cues.... so a friend approaches and a sound could get louder....

Rubaiyat Shatner: I think a big issue with accessibility is to somehow expose the data so that it can be read if it is text and translated if not

Corwin Carillon: if the cleint was JAWS compliant you would get some of the with HUDs buridan

Janor Slichter: Kate, are you referring to being able to add special functions, like with add-ins, to the client, to accomodate certain needs?

Kate Spatula: that's one approach that could be taken, or providing hooks for external software to use (like JAWS requires), or these could all be optiosn built in to SL

otakup0pe Neumann: I sense this is a direction that LL wants to move in... but i really have no idea

Farley Scarborough: JAWS access and keyboard access are both very standard on Windows apps

Kate Spatula: so here's the difficult question, if you had a class where SL was a key facet, and one student was blind, what would you do?

Ellie Brewster: seems to me that you'd have to get them a companion

Ellie Brewster: just as you do in a rl class

buridan Simon: /All of the students that i've had that were visually impaird had companion assigned anyway

Kate Spatula: so would they have an avatar on here or just the companion or both

Farley Scarborough: There are profesional describers we use in RL

Krisjohn Twin: @Kate: I just walked into this room, sat down at a pre-defined spot and started typing. How hard could that be to script for someone who is blind? Most of the 3D interface in SL is wasted.

buridan Simon: /it is true the 3d doesn't matter as much as proximity

Krisjohn Twin: Heck, an IRC bridge to this room would probably be more than enough to participate in this discussion.

Ellie Brewster: you can use sound files as cues. Tie them to the scenery

Farley Scarborough: Ah, but the visual's... they aren't wasted on the blind.

otakup0pe Neumann: And scripting movement will get more interesting with libsl.

Farley Scarborough: Listen to an audio described movie

Kate Spatula: so let's consider this room. could we augment it to make it more accessible beyond just visual

Farley Scarborough: the visual description is very important

Gus Plisskin: For the person who's visually-impaired, rather than blind

Ellie Brewster: what about using a different channel for viz impaired?

Gus Plisskin: with description? that'd work

otakup0pe Neumann: Do you mean chat channel Ellie ?

Ellie Brewster: yes

buridan Simon: i think someone has an irc bridge

otakup0pe Neumann: there are several

otakup0pe Neumann: we have developed one (we being my company)

otakup0pe Neumann: and i knwo tehre is one with libsl

buridan Simon: /Actually i know irc, and im bridge

otakup0pe Neumann: and the #secondlife irc channel runs one

Kate Spatula: i'm loolking right now at pictures of some famous philosophers hanging on the walls. the environment could provide a list of tagged objects to the user

Kate Spatula: which would be useful to scripters as well

otakup0pe Neumann: my company is in the process of developing a "hidden" metadata system for SL object

otakup0pe Neumann: uhh. hidden is a bad word.

otakup0pe Neumann: ubiquitous ? heh.

buridan Simon: /hah good luck with that... tagging perhaps, but object standard metadata...

Kate Spatula: the challenge, as it is in web accessibility, is making sure the data is provided

buridan Simon: /cidoc is a bugger

otakup0pe Neumann: maybe metadata is also a poor word ;)

buridan Simon: /metadata is the word... it means data about data

otakup0pe Neumann: I know. There are many kinds of metadata.

otakup0pe Neumann: And as we just saw, only so much room in a script.

Kate Spatula: i'm sure i could force rubaiyat to tag Trotsky's, but what about *insert random place* here

otakup0pe Neumann: and kate, good point again. tagging the whole grid is a daunting task =O

otakup0pe Neumann: let alone both grids !

otakup0pe Neumann: and having them all work together.

Ali Andrews: but isn't it tagged already, in the edit window?

buridan Simon: /tagging is also an area where you will have a good number of people who vary and some who actively resistantly participate by tagging wrongly

otakup0pe Neumann: That's a idfferent kind of tagging Ali.

Ali Andrews: how is it different? It can list the name, discription... it just needs to be done consistantly as we do when we build web pages

buridan Simon: 'everything is a cube'

otakup0pe Neumann: This is true Ali

otakup0pe Neumann: Consistency is the key.

Kate Spatula: there is a difference, ali.

Ali Andrews: so at least for our educational builds we can start a standard

otakup0pe Neumann: it's metadata ,but not strictly descriptive

otakup0pe Neumann: i wonder how many "objects" are around here.

Kate Spatula: web pages have a structure that supports the use of those descriptions. however, accessing just names and descriptions on here is fairly unstructured

otakup0pe Neumann: lack of consistency....

buridan Simon points out that there are standards, and it is better to attempt to conform to a standard than to create one anew

Bryan Mnemonic: does linden tag any objects with metadata at all?

Pathfinder Linden: not really, not in the sense you're thinking about

otakup0pe Neumann: object name, description, groups, that is all metadata

otakup0pe Neumann: but yeah. not too "descriptive"

Kate Spatula: kind of like all the image alt tags that say "image"

Kate Spatula: here's a related issue... avatars and disability. aside from pathfinder with his lack of a nose and rubaiyat's inability to store fat, none of our avatars really display signs of disability

Bryan Mnemonic: I wonder if that can be added to the "edit" build window so folks ban begin adding specific tags, or a limited number of them based on a drop down menu

Kate Spatula: going back to our blind student, would his avatar have a white cane an dsunglasses?

otakup0pe Neumann: kate : it's up to them

buridan Simon: /it would seem to me that identity is up to them

Bryan Mnemonic: for instance, we riright click on this couch, and when we click "more"perhaps there could be an additional tab

Bryan Mnemonic: with metadata options

buridan Simon: /metadata that can be filled with 'arse' and related words

Farley Scarborough: Ah, but of course the blind student couldn't see where to click, so wouldn't have a mouse

Kate Spatula: i'm not saying to enforce avatar appearances, but consider the importance of avatar image to reflecting disabilitiy

otakup0pe Neumann: do you think other avatars would requrie that visual cue ?

Kate Spatula: there are two sides ot the issue. one is whether other avtars need the cues, but then there's also the importance of digital identity and one's avatar image

Pathfinder Linden: there was a recent paper about social cues and avatars in SL recently...

Farley Scarborough: To expand Kate's question... Many people with disabilities you can see in RL can go through SL without anyone knowing. Could someone who cant' see?

otakup0pe Neumann: i think it should be the option of the avatar in question

Kate Spatula: that's the point... are the avatar options enough to reflect some of these needs

otakup0pe Neumann: both sides really. does the blind student want to advertise that they are blind ?

otakup0pe Neumann: and do other avatars want those cues ?

Pathfinder Linden: aha, here it is: http://www.stanford.edu/group/vhil/papers/second_life_nonverbal.pdf

Kate Spatula: actually, they might want to advertise that they're blind, or in a wheelchair, etc. it's part of their identity and it's an issue of whether that person wants to express that and if they can express that

otakup0pe Neumann: but it's up to them.

Kate Spatula: RL example: i once roleplayed with a person who insisted that his elf cleric had a wheelchair built by gnomes

Farley Scarborough: It's only up to them if they can get about without anyone knowing.

Kate Spatula: from a disability critical studies viewoint, SL has an extreme bias towards physical perfection and able-bodiness

buridan Simon: /metadata is always a social and organizational issue first

Kate Spatula: for me, i think one of the larger open questions is how do people with disabilities view SL... both in terms of its usability and its potential

Farley Scarborough: There's a slogan among disability rights advocates: "Nothing about us without us."

Krisjohn Twin: For me, a larger question is how do people who barely know how to turn on a PC cope with SL at all? I've got plenty of able-bodied staff that have no chance coping with this environment.

Friday 23 November 2007

The Acoustic Ecology of the First Person Shooter

I posted about Mark Grimshaw's PhD thesis on another blog earlier this year, but as it's relevant to this project too I thought it might be useful to repeat the link, especially as he gave a cut-down presentation at DiGRA '07 in Tokyo.

Audio RTS

It sounds like an unusual prospect, but Jean-Luc Pontico has created a convincing Sound RTS. It's cross platform (Windows, Linux, Mac) and localised (French, English, German, Italian, Spanish) and free!

I'm particularly impressed with the menu system and the speech samples used. Very clear and easy to use. Real time 3D audio works reasonably well to give the impression of units moving around, and continual reports of building progress are useful and analogous to visually observing the state of a build or progress bar.

Apparently it has multiplayer features and allows players to use their own maps as well. There's an active forum and blog.

This is an impressive game with high quality production values and a complete RTS implementation.

Tuesday 20 November 2007

Crackdown Audio

At the weekend I was speaking to Roland Peddie, one of my old colleagues from the games industry. He mentioned that his last game, Crackdown, received a BAFTA for its use of audio, for which he was the audio programmer.

Team Xbox have an interview with Raymond Usher, the audio director, about The Audio of Crackdown in which he refers to the code Roland wrote:

"a revolutionary audio shader that uses the surrounding geometry to dynamically color and time delay the direct and indirect audio reflections."
"When we heard the results of our complex Reverb/Reflections/Convolution or “Audio-Shader” system in Crackdown, we knew that we could make our gunfights sound like that, only in real-time! Because we are simulating true reflections on every 3D voice in the game, with the right content we could immerse the player in a way never before heard."


Real-time "early-reflection" processing might be useful for our current project as a way of situating the user in a complex and dynamic environment.
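
As a rough illustration of the principle (not Crackdown's actual system), a first-order reflection can be characterised by two numbers derived from the path lengths alone: it arrives later and quieter than the direct sound. A minimal sketch in C++, with invented names:

// Hypothetical helper: given the direct and reflected path lengths (metres)
// and a surface absorption factor (0..1), estimate the extra delay and the
// gain of one early reflection relative to the direct sound.
struct EarlyReflection { float delaySeconds; float gain; };

EarlyReflection computeEarlyReflection(float directPathM, float reflectedPathM, float absorption)
{
    const float kSpeedOfSound = 343.0f; // m/s in air
    EarlyReflection r;
    r.delaySeconds = (reflectedPathM - directPathM) / kSpeedOfSound;
    // Inverse-distance attenuation relative to the direct path, reduced by surface loss.
    r.gain = (directPathM / reflectedPathM) * (1.0f - absorption);
    return r;
}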

Funkhouser, Thomas A.; Tsingos, Nicolas; Carlbom, Ingrid; Elko, Gary; Sondhi, Mohan; West, Jim. Interactive Acoustic Modeling for Virtual Environments <http://www.cs.princeton.edu/~funk/acoustics.html> (Last accessed 22nd November 2007)



Crackdown. Real Time Worlds (Microsoft Game Studios: 20th February 2007). Xbox 360.

See also Metacritic and VGChartz

Monday 19 November 2007

Gesture Self-Voicing

It's not my intention to continue adding self-voicing capabilities to this viewer, but I did just throw together some code which announces gestures. This currently only works on your own avatar, but it would be straightforward to similarly announce the (visual-only) gestures and animations of other nearby users.

I've committed my code changes to the repository. Please let me know if you're able to use them, as I haven't tested checking out the code and building from scratch, and I'm also only committing those changes that I think are needed, but I could very well be wrong.
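
For reference, the hook itself is conceptually tiny. Here is a minimal sketch of the kind of call involved, assuming a hypothetical SelfVoicing::Speak() helper and that the gesture object exposes its trigger string; the names in the actual commit may differ.

#include <string>

// Illustrative stand-ins; the real types live in llgesturemgr.h, and
// SelfVoicing is the wrapper class added to the repository (sketched later).
namespace SelfVoicing { void Speak(const std::string& text); }
struct LLMultiGesture { std::string mTrigger; };

// Sketch of the hook placed inside LLGestureManager::playGesture():
void announceGesture(const LLMultiGesture* gesture)
{
    if (gesture)
    {
        // Spoken cue for an otherwise purely visual event.
        SelfVoicing::Speak("Gesture: " + gesture->mTrigger);
    }
}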

Wednesday 14 November 2007

Self Voicing : Proof of concept

Without wanting to jump the gun and announce this project as a success before it's even got started, I just had a nice affirmation that self voicing is useful.

I wanted to take some screenshots to illustrate accessibility problems for another post to this blog, but I was dealing with some other applications and still had my headphones on while my self-voicing SL viewer was running in the background, with my avatar at the start location on Orientation Island. This is clearly a busy thoroughfare, so it's perhaps not surprising that after a while someone would try to chat to me. What did surprise me was when my viewer spoke the chat text and interrupted me from my other work.

I'd only tested it with bots and alts previously, which is a bit contrived. This occasion was less of a technical affirmation and more of a social one: the viewer alerted me to something that needed my response, similar to the way instant message clients use audio to alert the user. In that sense it worked: someone in SL wrote me a message and wanted me to respond. Although I was AFK as far as that application was concerned, I was still able to keep my presence 'alive' without needing to see the chat message. The same thing happened a little later with a friendship notification.

It's also interesting to reflect on the adage "One person's accessibility issue is another's usability issue." I am ostensibly working on an accessibility project, but it turns out to have positive usability results too.

Friday 9 November 2007

Audio Game Walkthrough

I find it difficult to play audio games. I imagine this is because I do not have the quality of auditory attention that blind players do, and despite having many many years of experience playing and developing conventional games, this difference in auditory skill clearly affects my ability to design a game for a blind player.

In order to try to immerse myself in the space of an audio game I've been listening to Ivan Fegundez's walkthrough of GMA Games' Lone Wolf Mission 2.

My initial reactions to this recording and to my own playthrough of Terraformers were similar, in that I felt confused and alienated by the audioscape. I wonder, though, if this is simply a question of interface, context and meaning, and a state that arises with any new game, whether you are playing it yourself or watching someone else play.



Aside from the mildly comic interruptions from the speaker's mother and the ringing phone, I found it very interesting to listen to this game, as it gave me an opportunity to try to get inside the head of an accomplished audio gamer. One of the most interesting aspects was the way I tried to adapt to the audio-only stimulus: by shutting my eyes I found that I could increase my concentration on the sounds of the game, and despite the extremely fast speech announcements, after some time I found that I was filtering for only the relevant information based on pattern recognition. After hearing the spoken announcements from the game I became used to the structure of the sentences, and was able to focus my attention only on those key phrases which contained the variable data, for example "Island 100 off port twenty three hundred yards". With this data I was able to construct a mental model in real time. Projecting myself into this mental space, I felt my relation to the other game entities in terms of direction and distance, such that when the submarine's engine was running I could imagine myself moving forwards through the space, using the announcements to maintain triangulation between myself and the other objects in the water.

Accessibility SDKs

The Mozilla developers have an extensive article on MSAA, with lots of general advice for developers.
  • Use MSAA whenever you have created custom controls where you're handling the drawing, mouse and keyboard accessibility in your own code. MSAA is the only way to let all AT's know what your code is doing.

  • Don't use MSAA if you're not running on a Windows platform ... hope that one was obvious.

  • Don't use MSAA for exposing documents or other specialized data, if it's important that the user get access to formatting information.

The RNIB as usual has some good advice on Effective Keyboard Access
"All input functionality needs to be keyboard operable but not necessarily all user interface elements. If there are multiple ways to perform a task, only one of them needs to be available from the keyboard, though it is best if all possible forms are keyboard operable."
ISO/FDIS 9241-171:2007, 9.3.2 Enable full use via keyboard, Note 5.

In particular they highlight the following issues pertinent to SL,
We often come across screens that contain a huge number of controls. There are sometimes good reasons for this but less cluttered screens are often more usable.

Tab order is a critical aspect of accessibility for a keyboard user.

It should be possible to move the keyboard focus to non-editable interface elements

This is followed by a section on The Programmatic Interface with the following key points
Access technologies need to be able to identify accurately all the different types of controls and their labels

Visible focus - This is the 'I-beam', highlight or outline that indicates which screen element has the input focus, ie where an action from the keyboard will take place. This is essential information for a keyboard or voice input user who doesn't have the luxury of just moving the mouse and clicking.

Compatibility with access technologies - This is mainly achieved by using standard accessibility services provided by the operating system and software toolkits.

In terms of Second Life, there are clients/viewers for three different operating systems, which would imply using (at least) three different accessibility SDKs:

OS X
Windows
KDE, Gnome (Unix)

This current pilot project will only attempt a Windows prototype client. In order to be fully cross platform, something like an abstracted accessibility API would need to be implemented in the application (similar to Mozilla's technique), wrapping the OS-specific API.
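
A minimal sketch of what such an abstraction layer might look like, with invented names (the real Mozilla layer is considerably richer); each backend would translate these calls into MSAA on Windows, NSAccessibility on OS X or AT-SPI on GNOME:

#include <string>

// Platform-neutral interface that viewer UI code would call; one backend per
// operating system implements it on top of the native accessibility API.
class AccessibilityBackend
{
public:
    virtual ~AccessibilityBackend() {}
    // Report that keyboard focus moved to a named control of a given role.
    virtual void focusChanged(const std::string& name, const std::string& role) = 0;
    // Push a string (chat line, notification) out to assistive technology.
    virtual void announceText(const std::string& text) = 0;
};

// Illustrative factory, chosen once at startup according to the platform.
AccessibilityBackend* createPlatformAccessibilityBackend();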

This approach would seem to be appropriate for the user navigating around the window-like elements of SL, but something more is needed to describe the main content of the screen. Whether this is sonification similar to that used in Terraformers, or a Guide-Bot as imagined by Josh Markwodt, or the descriptive and radar techniques prototyped by IBM, is as yet unclear. User testing on a variety of prototypes would need to be conducted to have a better idea which way to proceed.

Local Services (Brighton, UK)

These resources might be useful for contacting visually impaired people in the local area, for interviews and application testing:

National Association of Local Societies for Visually Impaired People, region 2 (South East) has a number of local societies, including The Brighton Society for the Blind.

The RNIB has a residential home in Brighton, Wavertree House

Brighton and Hove City Council Sensory Services team includes Rehabilitation Officers for the Visually Impaired (ROVI).

Thursday 8 November 2007

Accessibility Analysis Literature Review

I've been considering the work others have already conducted on analysing the inaccessibility of SL:

Abrahams Accessibility
The client does not run in a browser, it runs in its own window, it does not use HTML to any great extent and therefore the Web Accessibility standards (WAI) are not sufficient and in some cases not relevant.
Anyone that has a vision impairment and uses a screen reader to access a computer and the web can not access SL, because even the textual information displayed in the client is not accessible by the screen reader.
  1. Include an accessibility section in the help.
  2. Make the help screens accessible without a mouse.
  3. Make the text in help sizable.
  4. Make any text on the client configurable for size and color, including the menus, the avatar names, messages.
  5. Enable the numbering of objects on the screen so that instead of having to click on an object you can choose the object by number (rather like the 'say what you see feature' in Vista).
  6. A text-to-voice feature for chat, in stereo so that the avatar's location can be estimated (see the sketch after this list), and the ability to configure the voice to fit the avatar.
  7. Provide a text list of avatars in the vicinity and voice announcements of entries and exits.
  8. Simulation of an electronic white stick.
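
Item 6 (stereo text-to-voice so the speaking avatar's location can be estimated) is largely a matter of geometry. A minimal sketch, not from the article, of turning a speaker's position relative to the listener into a left/right pan value:

#include <cmath>

// Returns a pan value in [-1, +1]: -1 fully left, 0 centre, +1 fully right.
// listenerYaw is the direction the listener faces, in radians; positions are
// 2D world coordinates. Names are illustrative, not the viewer's own types,
// and the sign convention depends on the coordinate system in use.
float panForSpeaker(float listenerX, float listenerY, float listenerYaw,
                    float speakerX, float speakerY)
{
    float dx = speakerX - listenerX;
    float dy = speakerY - listenerY;
    // Bearing of the speaker relative to where the listener is facing.
    float relative = std::atan2(dy, dx) - listenerYaw;
    // How far off to the side the speaker is maps directly to pan.
    return std::sin(relative);
}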


Second Life Class Action Suit
The first, one-time barrier is that the registration process uses a captcha that a blind person cannot use; for a solution to this problem see 'Bloor helps ITA do it better than Google'.
But the real problem comes with the user interface, which gives a visual representation of the SL terrain, any avatars in your vicinity, any object you can interact with, and any instruction displayed on SL notice displays. None of this information is available via a screen reader and none of it can be pointed at without a mouse. Further, the controls such as chat, search, help can only be activated by a mouse click.


No Second Life For Visually Impaired
If you access the Second Life Client viewer with a screen reader like Hal, JAWS or Window-Eyes, nothing will be spoken aloud and nothing will appear on your braille line.
Presently, not only is SL not compatible with screen readers, the SL website itself is largely inaccessible to people with visual impairments. Feedback from an online questionnaire I designed demonstrates that 8 out of 10 visually impaired users were unable to register for an account on the SL website. This is due to the fact that the site does not conform with W3C accessibility guidelines. Linked images have no alt attributes and form fields do not link correctly.

After attempting to register for an account one questionnaire participant responded by saying:

“I found no easy step by step guide that would say what to expect, or even give me any reason to overcome the obstacles for joining”… their reasons for wanting to join SL - “..an online community to join. But only if it represented a cross-section of real life. I’m not interested in anything that so flagrantly excludes disabled people”.


Accessibility and Second Life (Revised)
A student relying solely on a screen reader will be shut out from Second Life.

What to do if you have a visually impaired student in a course using Second Life? Think about what learning objectives made you choose Second Life. Is it communication? Maybe alternate chatrooms or Skype could be enabled.

Is it a visual experience? Then you can treat Second Life as you would other graphics or animation - that is, provide lots of descriptive text.


Accessibility and democracy in Second Life
It would require a tremendous amount of Alt tagging and/or audio describing to make the rich and evolving virtual world of "Second Life" intelligible,
useful and enjoyable to blind and low-vision users.


[SLED] Blind people in SL - Idle speculation
This would be easier with a client SDK that could trap text, use text to speech and allow keyboard macros, but given the existing client could we not have a HUD or head mounted scripted object that 'spoke' information. Location, people's names as they came and went, object IDs. Within the current system, these would probably have to be pre-recorded and linked to specific text, say in a notecard. Alternatively, objects in an 'accessible' area could be able to self report, say if someone approached them within a certain distance for a certain time. This area could be made the home location for participants. We could even run a competition to design accessible vending machines that used sound instead/as well as text.

To aid people with visual impairments - most people who are blind aren't actually 'blind' - it would be great to have control over field of view in the client, which could effectively allow mouse view of a small angle to be the equivalent of a magnified image, much as PC viewing software allows the whole screen to be enlarged. Sadly, this would not easily include text. However, if we had a HUD object repeating any 'heard' text in the mouselook view, then even this might be possible. This would require chat in the mouselook view...

Ah well, maybe when I have a PhD student to throw at it...


[SLED] Re: Blind people in SL - Idle speculation
However, the Second Life client doesn't currently give screen reader access to chat or IM text. In fact, you can't even read the menus with JAWS. If the client did have that most basic accessibility--chat, IM and menus--blind users would still need some assistance getting around.


[IxDA Discuss] Target.com Loses Accessibility Law Suit
I was part of a discussion of accessibility of virtual worlds like Second Life, for people who "browse with their ears". It turned out that the first problem wasn't even in Second Life itself. It was that the login page was designed inaccessibly. People using a screen reader couldn't even get into the worlds to find out if they could use them or not. Nothing special, new or difficult. Just a login screen. But just as much a barrier as any locked door.


Three Dimensional Web Interfaces
Perhaps we should not focus exclusively on screen readers and haptics to provide access for blind people in 3D virtual reality. If the aim of virtual reality is to become more and more lifelike, let's think about the actual real-life experience of individuals moving about in the real world and how they interact with other people.

Blind and low-vision people are generally mobile outside familiar surroundings with the aid of a cane, a guide dog or a sighted companion. When more assistance is needed, there is usually a store staff person or a passerby whom one can ask for directions or other information. This latter is not something that just blind people do. It is natural human behaviour.

Why not have a service avatar to provide a similar service? Imagine a humanoid robot like C-3PO, the protocol droid in Star Wars, who could guide the avatar of a player, give verbal directions, describe scenes and activities, etc. This is rather like a personal tour guide. Add some more services, like language translation for players in other countries, ASL for players who are deaf, and information retrieval to answer questions knowledgeably, and you broaden the appeal and usefulness of such an avatar. It would serve more than just the sight-impaired players.

I think there is a lot of technology already out there that could be brought to bear on this. In Japan, for example, some stores have robots that can greet customers and even take them to a particular department. Voice and natural-language recognition, text-to-speech and text-to-ASL engines, and language translation software are already very advanced and improving. The underlying architecture of the virtual space must have some basic navigation functions that might respond to verbal commands in lieu of a joystick or whatever it is that players use to travel about in Second Life.

A service companion avatar should probably become a standard feature in 3D virtual reality in the same way that online help is a ubiquitous feature in Windows.

Tuesday 6 November 2007

Code Available

I've set up a project page on Google Code where you can download the source to my viewer. You should follow the instructions on how to download and build the default viewer first; then, once you've successfully got that built locally, you can try using my indra directory instead.

Good luck!
Please post on the project page or here if there are any problems.

Thursday 1 November 2007

Self Voicing

I've just added self voicing to the Windows viewer.
Here's an example.

I launch SL from Visual Studio and walk up to an object called "Healthy", which chats to me. Everything it and I write in chat is spoken.
I also demo clicking the object, to which it responds with a chat and also issues me a notification, which is also spoken.



I got my inspiration for this test from the following films,





You can purchase this product from SL Exchange

Friday 26 October 2007

RNIB: Web Access Centre

At the end of July, Henny Swan, a web accessibility consultant for the Web Access Centre, asked for the community's evaluation of Second Life in terms of accessibility.

This is encouraging because it means that others are considering SL in the same way as us, but particularly interesting were the comments from one user who observed that blind people can't even create an account, let alone use the client!

Also reported on this site is that Judy Brewer, Director of the Web Accessibility Initiative (WAI) at the World Wide Web Consortium (W3C), gave a presentation at a Second Life conference sponsored by the U.S. Department of State Bureau of International Information Programs (IIP) and the University of Southern California Annenberg School for Communication. She is reported as saying the following,

"If a person has a visual disability, they need an alternative to the visual environment on Second Life. Maybe a space could be magnified to make it easier to see. A speech reader could speak the text typed into the chat."
"There need to be easy and reliable ways to be able to add text descriptions to all content created in SL."


Additionally, I read the BBC's article on IBM's "Accessibility In Virtual Worlds" project by Extreme Blue and the Human Ability and Accessibility Centre, which uses Active Worlds rather than SL as it can be run within a web browser (although when I tried it, it simply launched an external browser).

"When the user comes into the world, the items are described as well as their positions," explained Colm O'Brien, one of the team of four researchers who worked on the project.

"There is also sound attached - for example, if there's a tree nearby you will hear a rustling of leaves," said Mr O'Brien.

The work also developed tools which use text-to-speech software to read out any chat from fellow avatars in the virtual world that appears in a text box.

Characters in the virtual world can have a "sonar" attached to them so that the user gets audible cues to alert them to when they are approaching, from which direction and how near they are.


The BBC provide an audio example of the interface, and there's a section on Radio 4's In Touch programme with interviews with interns Esmond Walsh and Antony Clinton, though unfortunately I can't source any more information about this fascinating-sounding project.

Text to Speech Code

Today I've mostly been investigating text-to-speech and accessibility in Windows applications.

Microsoft offers an API called Active Accessibility, which defines a standard way for clients (for example, screen readers such as JAWS) to communicate with regular applications (called servers) which might not otherwise provide their own accessibility features.

This is one possible way to address the accessibility of Second Life: take the existing viewer code and make it conform to the Active Accessibility API. The alternative, which I was toying with earlier today, is to modify the SLeek viewer* to add self-voicing using SAPI 5.1 through SpeechLib in .NET 2.0.

* It seems inappropriate to call it a viewer when there's nothing much to view, especially as we're intending to create a piece of software that allows the user to hear the game.

Apparently, in order to use SpeechLib in .NET 2.0, the SAPI DLL might need to be converted with TlbImp, thus:

C:\Program Files\Microsoft Visual Studio .NET 2003\SDK\v1.1\Bin\TlbImp.exe "C:\Program Files\Common Files\Microsoft Shared\Speech\sapi.dll" /out:Interop.SpeechLib.dll

This is some example C# code demonstrating voicing:

using SpeechLib;          // Interop assembly generated from sapi.dll (see the TlbImp step above)
using System.Threading;   // For Timeout.Infinite

SpVoice objSpeech = new SpVoice();
objSpeech.Speak(textBox1.Text, SpeechVoiceSpeakFlags.SVSFlagsAsync); // textBox1: a WinForms TextBox
objSpeech.WaitUntilDone(Timeout.Infinite);



Downloads:

SpeechSDK51.exe from SAPI 5.1
.Net Framework V3.5 Beta 2 and Redistributable
JAWS 8 demo

Thursday 25 October 2007

Viewer Call Stack Notes

Given that I have no SL budget for land or audio uploads to the main grid, and my OpenSim grid doesn't support scripting, I've spent the day looking into the structure of Linden's viewer ("newview").

Here are my notes from the call stack.


Startup

crt0.cpp
WinMainCRTStartup()

viewer.cpp
WinMain()
lldxhardware.cpp
LLDXHardware::getInfo()
CoInitialize(NULL);
CoUninitialize();

gViewerWindow = new LLViewerWindow(...);

llviewerwindow.cpp
LLViewerWindow::LLViewerWindow()
mWindow = LLWindowManager::createWindow(...);

pipeline.cpp
LLPipeline::init()
LLPipeline::getPool(LLDrawPool::POOL_ALPHA); // + others such as Glow, etc
LLDrawPool::createPool()
LLPipeline::addPool()
mPools.insert(new_poolp);

mRootView = new LLRootView(...);

LLViewerWindow::initBase()
gFloaterView = new LLFloaterView(...);

gConsole = new LLConsole(...);
mRootView->addChild(gConsole);

mRootView->addChild(gFloaterView, -1);

main_loop()
idle()
llviewerdisplay.cpp
display()
llstartup.cpp
idle_startup()
messaging.cpp
start_messaging_system()
LLMessageSystem::setHandlerFuncFast(...) // eg, _PREHASH_StartPingCheck with process_start_ping_check()
llmessagetemplate.h
setHandlerFunc()

gWorldp = new LLWorld()
llworld.cpp
LLWorld::addRegion()
llviewerregion.cpp
LLViewerRegion::LLViewerRegion()
llsurface.cpp
LLSurface::create()
LLSurface::initTextures()
pipeline.cpp
LLPipeline::addObject()
LLDrawable::createDrawable(this); // eg, LLVOWater
LLPipeline::getPool()
LLPipeline::addPool()
mPools.insert(new_poolp); // eg, DrawPoolWater,Terrain,SkyStars,Ground


register_viewer_callbacks()
msg->setHandlerFuncFast(_PREHASH_ChatFromSimulator, process_chat_from_simulator);

llviewerobject.cpp
LLViewerObjectList::update()
LLViewerObject.idleUpdate();

Menu Bar

llwindowwin32.cpp
LLWindowWin32::mainWindowProc()
llkeyboardwin32.cpp
LLKeyboardWin32::handleKeyDown()
llkeyboard.cpp
LLKeyboard::handleTranslatedKeyDown()
llviewerwindow.cpp
LLViewerWindow::handleTranslatedKeyDown()
llviewerkeyboard.cpp
LLViewerKeyboard::handleKey()
llviewerwindow.cpp
LLViewerWindow::handleKey()
llmenugl.cpp
LLMenuBarGL::handleAcceleratorKey()
llmenugl.cpp
LLMenuGL::handleAcceleratorKey()
LLMenuItemBranchDownGL::handleAcceleratorKey()
LLMenuGL::handleAcceleratorKey()
LLMenuItemCallGL::handleAcceleratorKey()
LLMenuItemGL::handleAcceleratorKey()
LLMenuItemCallGL::doIt()
LLPointer fired_event = new LLEvent(this);
fireEvent(fired_event, "on_click");
llevent.cpp
LLObservable::fireEvent()
mDispatcher->fireEvent()
LLEventDispatcher::fireEvent()
impl->fireEvent()
LLSimpleDispatcher::fireEvent()
llviewermenu.cpp
LLWorldAlwaysRun::handleEvent() // Sends SetAlwaysRun message
LLMenuItemGL::doIt();


Login
todo

Launch external website from Login panel

crt0.cpp
WinMainCRTStartup()
viewer.cpp
WinMain()
main_loop()
idle()
llmortician.cpp
LLMortician::updateClass()
llalertdialog.cpp
LLAlertDialog::~LLAlertDialog()
llpanellogin.cpp

LLPanelLogin::newAccountAlertCallback() // Passes CREATE_ACCOUNT_URL from llsecondlifeurls.cpp
llweb.h
LLWeb::loadURL()
llweb.cpp
LLWeb::loadURL()
LLWeb::loadURLExternal()
llwindowwin32.cpp
spawn_web_browser()
ShellExecute() // Win32 API

Messaging


main_loop()
idle()
idle_network()
message.cpp
LLMessageSystem::checkAllMessages()
LLMessageSystem::checkMessages()
lltemplatemessagereader.cpp
LLTemplateMessageReader::readMessage()
LLTemplateMessageReader::decodeData()
llmessagetemplate.h
LLMessageTemplate::callHandlerFunc()
llviewermessage.cpp

process_object_update()
llviewerobjectlist.cpp
LLViewerObjectList::processObjectUpdate()
LLViewerObjectList::createObject()
LLViewerObject::createObject(...);
LLViewerObjectList::updateActive()
mActiveObjects.insert(...); // LLVOAvatar, LLVOClouds, etc.

LLViewerObjectList::processUpdateCore()
pipeline.cpp
LLPipeline::addObject()
LLDrawable->createDrawable(this); // LLVOAvatar,Tree

llvoavatar.cpp
LLVOAvatar::createDrawable()
pipeline.cpp
LLPipeline::getPool()
LLPipeline::addPool(); // LLDrawPoolAvatar,Tree


Render

viewer.cpp
WinMain()
main_loop()
idle()
llviewerdisplay.cpp
display()
pipeline.cpp
LLPipeline::updateGeom()
LLPipeline::updateDrawableGeom()
lldrawable.cpp
LLDrawable::updateGeometry()
mVObjp->updateGeometry(this); // where mVObjp is LLVOWater, LLVOSurfacePatch

LLPipeline::renderGeom()
LLDrawPool->prerender(); // LLDrawPoolSky, LLDrawPoolStars, LLDrawPoolGround, LLDrawPoolTerrain, LLDrawPoolSimple, LLDrawPoolBump, LLDrawPoolAvatar, LLDrawPoolTree, LlDrawPoolGlow, LLDrawPoolWater, LLDrawPoolAlphaPostWater
LLDrawPool->render(i); // Same types as above

Notification
(e.g., click on an object, it gives you a card)

message.cpp
LLMessageSystem::checkMessages()

lltemplatemessagereader.cpp
LLTemplateMessageReader::decodeData()

llviewermessage.cpp
process_improved_im()
msg->getU8Fast( _PREHASH_MessageBlock, _PREHASH_Dialog, d); // IM_TASK_INVENTORY_OFFERED, IM_MESSAGEBOX, IM_GROUP_INVITATION, IM_INVENTORY_ACCEPTED, IM_GROUP_VOTE ... IM_GROUP_NOTICE ( LLGroupNotifyBox::show(), LLFloaterGroupInfo::showNotice() ),
inventory_offer_handler()
llnotify.cpp
LLNotifyBox::showXml()
notify = new LLNotifyBox(...);
gNotifyBoxView->addChildAtEnd(notify);
LLNotifyBox::moveToBack()
LLNotifyBoxView::showOnly()
LLNotifyBox::setVisible()
llview.cpp
LLPanel::setVisible() // Actually resolves to LLView
LLView::setVisible()

viewer.cpp
WinMain()
gViewerWindow = new LLViewerWindow()

llviewerdisplay.cpp
display_startup()
gViewerWindow->setup2DRender()

viewer.cpp
main_loop()
llviewerdisplay.cpp
display()
render_ui_and_swap()
render_ui_2d()
llviewerwindow.cpp
LLViewerWindow::draw()
llview.cpp
LLView::draw()
for (child_list_reverse_iter_t child_iter = mChildList.rbegin(); child_iter != mChildList.rend(); ++child_iter)
llnotify.cpp
LLNotifyBox::draw()

// The following are also rendered in this stack,

llconsole.cpp
LLConsole::draw()

llview.cpp
LL::draw()

llnotify.cpp
LLNotifyBox::draw()

llhudview.cpp
LLHUDView::draw()

llfloater.cpp
LLFloaterView::draw()

llfloatermap.cpp
LLFloaterMap::draw()

lldraghandle.cpp
LLDragHandleTop::draw()

llnetmap.cpp
LLNetMap::draw()

lltextbox.cpp
LLTextBox::draw()

llresizehandle.cpp
LLResizeHandle::draw()

llbutton.cpp
LLButton::draw()

llviewerwindow.cpp
LLBottomPanel::draw()

llpanel.cpp
LLPanel::draw()

lloverlaybar.cpp
LLOverlayBar::draw()

llvoiceremotectrl.cpp
LLVoiceRemoteCtrl::draw()

lliconctrl.cpp
LLIconCtrl::draw()

llmediaremotectrl.cpp
LLMediaRemoteCtrl::draw()

llslider.cpp
LLSlider::draw()

llhoverview.cpp
LLHoverView::draw() // Tooltips - could use this to speak currently selected interface element

llstatgraph.cpp
LLStatGraph::draw()

llmenugl.cpp
LLMenuHolderGL::draw()
LLMenuBarGL::draw()
LLMenuItemBranchDownGL::draw()

llprogressview.cpp
LLProgressView::draw() // Loading bar?

Chat

llstartup.cpp
register_viewer_callbacks()
msg->setHandlerFuncFast(_PREHASH_ChatFromSimulator, process_chat_from_simulator);

llviewermessage.cpp

process_chat_from_simulator()
llfloaterchat.cpp
LLFloaterChat::addChat()

llconsole.cpp
LLConsole::addLine()

addChatHistory()
llpanelactivespeakers.cpp
LLPanelActiveSpeakers::setSpeaker()
llfloateractivespeakers.cpp
LLSpeakerMgr::setSpeaker()

SelfVoicing::Speak() // Added by me
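
SelfVoicing::Speak() is my addition; the code in the repository is the reference, but as a rough idea of what the Windows side of such a wrapper involves, here is a minimal SAPI 5.1 sketch (error handling omitted):

#include <windows.h>
#include <sapi.h>   // SAPI 5.1; link against sapi.lib and ole32.lib
#include <string>

namespace SelfVoicing
{
    // Speak a line of text asynchronously so the render loop is not blocked.
    void Speak(const std::string& text)
    {
        static ISpVoice* sVoice = NULL;
        if (!sVoice)
        {
            CoInitialize(NULL); // harmless if COM is already initialised on this thread
            CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL,
                             IID_ISpVoice, (void**)&sVoice);
        }
        if (sVoice)
        {
            // Naive ASCII widening; real code would convert from UTF-8 properly.
            std::wstring wide(text.begin(), text.end());
            sVoice->Speak(wide.c_str(), SPF_ASYNC, NULL);
        }
    }
}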

Windows
crt0.cpp
WinMainCRTStartup()
viewer.cpp
WinMain()
gViewerWindow = new LLViewerWindow();
mRootView = new LLRootView();

gViewerWindow->initBase();
llviewerwindow.cpp
LLViewerWindow::initBase()
gFloaterView = new LLFloaterView();
mRootView->addChild(gFloaterView, -1);

gSnapshotFloaterView = new LLSnapshotFloaterView();
mRootView->addChild(gSnapshotFloaterView);

gConsole = new LLConsole();
mRootView->addChild(gConsole);

gDebugView = new LLDebugView();
mRootView->addChild(gDebugView);

gHUDView = new LLHUDView();
mRootView->addChild(gHUDView);

gNotifyBoxView = new LLNotifyBoxView();
mRootView->addChild(gNotifyBoxView, -2);

mProgressView = new LLProgressView();
mRootView->addChild(mProgressView);
llviewerwindow.cpp
LLViewerWindow::initWorldUI()
gChatBar = new LLChatBar("chat", chat_bar_rect);
gToolBar = new LLToolBar("toolbar", bar_rect);
gOverlayBar = new LLOverlayBar("overlay", bar_rect);

gBottomPanel = new LLBottomPanel()
gBottomPanel->addChild(gChatBar);
gBottomPanel->addChild(gToolBar);
gBottomPanel->addChild(gOverlayBar);

mRootView->addChild(gBottomPanel);

gHoverView = new LLHoverView("gHoverView", full_window);
gFloaterMap = new LLFloaterMap("Map");
gFloaterWorldMap = new LLFloaterWorldMap();
gFloaterTools = new LLFloaterTools();
gStatusBar = new LLStatusBar("status", status_rect);
gViewerWindow->getRootView()->addChild(gStatusBar);

crt0.cpp
WinMainCRTStartup()
viewer.cpp
WinMain()
main_loop()
idle()
llstartup.cpp
idle_startup()
init_stat_view()
llstatview.cpp
LLStatView::LLStatView()

gDebugView->mStatViewp->addChildAtEnd();


MSAA

This is where the Windows message queue is dealt with (the window procedure callback).

llwindowwin32.cpp

LLWindowWin32::mainWindowProc()

LLView is responsible for handling input and so is perhaps one place to insert MSAA code.
In particular, during the startup procedure documented above, mRootView is created as the top-level view.
Note the following members:

LLView::tab_order_t;
LLView::focusNextRoot();
LLView::focusPrevRoot();
LLView::focusNextItem();
LLView::focusPrevItem();
LLView::focusFirstItem();
LLView::focusLastItem();

Additionally important is llfocusmgr.h with its class LLFocusMgr

Upon a state change (e.g., focus moved to a different UI element), issue:
NotifyWinEvent(EVENT_OBJECT_STATECHANGE, hWnd, (LONG)&lpData->btnSelf, CHILDID_SELF)
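
A minimal sketch of where that call could live, assuming it is raised from LLFocusMgr::setKeyboardFocus() once focus has actually moved. The child-ID mapping is invented, and a full MSAA server would also have to answer WM_GETOBJECT in LLWindowWin32::mainWindowProc() with an IAccessible implementation.

#include <windows.h>

class LLUICtrl; // Linden UI control base class (lluictrl.h)

// Hypothetical mapping from a UI control to a stable MSAA child id; a real
// implementation would keep this table in sync with the view hierarchy.
LONG childIdForControl(const LLUICtrl* ctrl);

// Called from LLFocusMgr::setKeyboardFocus() after the new focus is set.
void announceFocusChangeToMSAA(HWND mainWindow, const LLUICtrl* newFocus)
{
    if (newFocus)
    {
        NotifyWinEvent(EVENT_OBJECT_FOCUS, mainWindow,
                       OBJID_CLIENT, childIdForControl(newFocus));
    }
}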

Focus
crt0.cpp
WinMainCRTStartup()
viewer.cpp
WinMain()
main_loop()
idle()
llstartup.cpp
idle_startup()
login_show()
llpanellogin.cpp
LLPanelLogin::show()
LLPanelLogin::setFocus()
LLPanelLogin::giveFocus()
lllineeditor.cpp
LLLineEditor::setFocus()
lluictrl.cpp
LLUICtrl::setFocus()
gFocusMgr.setKeyboardFocus()
llfocusmgr.cpp
LLFocusMgr::setKeyboardFocus()


Click on Username box in login screen:

llwindowwin32.cpp
LLWindowWin32::mainWindowProc()
case WM_LBUTTONDOWN
llviewerwindow.cpp
LLViewerWindow::handleMouseDown()
llview.cpp LLView::handleMouseDown()
LLView::childrenHandleMouseDown()
LLView::handleMouseDown()
LLView::childrenHandleMouseDown()
lllineeditor.cpp
LLLineEditor::handleMouseDown()
LLLineEditor::setFocus()
LLUICtrl::setFocus()
llfocusmgr.cpp
LLFocusMgr::setKeyboardFocus()
llwebbrowserctrl.cpp
LLWebBrowserCtrl::onFocusLost()
llviewerwindow.h
LLViewerWindow::focusClient()
llwindowwin32.cpp
LLWindowWin32::focusClient()
Platform SDK - SetFocus(HWND)

Tab from Username to Password

llwindowwin32.cpp
LLWindowWin32::mainWindowProc()
case WM_KEYDOWN
llkeyboardwin32.cpp
LLKeyboardWin32::handleKeyDown()
llkeyboard.cpp
LLKeyboard::handleTranslatedKeyDown()
llviewerwindow.cpp
LLViewerWindow::handleTranslatedKeyDown()
llviewerkeyboard.cpp
LLViewerKeyboard::handleKey()
llviewerwindow.cpp
LLViewerWindow::handleKey()
llpanel.cpp
LLPanel::handleKey()
LLUICtrl* keyboard_focus = gFocusMgr.getKeyboardFocus();

llfocusmgr.cpp
LLFocusMgr::childHasKeyboardFocus()

llview.cpp
LLView::isFocusRoot()

LLView::focusNextItem()
LLView::getTabOrderQuery()
query.addPreFilter( LLVisibleFilter::getInstance() );
query.addPreFilter( LLEnabledFilter::getInstance() );
query.addPreFilter( LLTabStopFilter::getInstance() );
query.addPostFilter( LLUICtrl::LLTabStopPostFilter::getInstance() );

LLView::focusNext()
// For example
lllineeditor.cpp
LLLineEditor::setFocus()
lluictrl.cpp
LLUICtrl::setFocus()
llfocusmgr.cpp
gFocusMgr.setKeyboardFocus()

llpanel.cpp
LLPanel::handleKey()
llview.cpp
LLView::handleKey()

In game, focus currently on Inventory window.
Click into main 3D display.

llwindowwin32.cpp
LLWindowWin32::mainWindowProc()
LLWindowWin32 *window_imp = (LLWindowWin32 *)GetWindowLong(h_wnd, GWL_USERDATA);
case WM_LBUTTONDOWN:
window_imp->mCallbacks->handleMouseDown()

llviewerwindow.cpp
LLViewerWindow::handleMouseDown()
gToolMgr->getCurrentTool()->handleMouseDown()

lltoolpie.cpp
LLToolPie::handleMouseDown()
gViewerWindow->hitObjectOrLandGlobalAsync()
llviewerwindow.cpp
LLViewerWindow::hitObjectOrLandGlobalAsync()
llfocusmgr.cpp
gFocusMgr.setKeyboardFocus()
lllineeditor.cpp
LLLineEditor::onFocusLost()

Gestures
Typing "/yes" into the chat window to activate a gesture

llwindowwin32.cpp

LLWindowWin32::mainWindowProc()
llwindow.cpp
LLWindow::handleUnicodeUTF16()
llviewerwindow.cpp
LLViewerWindow::handleUnicodeChar()
llviewerkeyboard.cpp
LLViewerKeyboard::handleKey()
llviewerwindow.cpp
LLViewerWindow::handleKey()
llview.cpp
LLView::handleKey()
llpanel.cpp
LLPanel::handleKey()
llview.cpp

LLView::handleKey()
llchatbar.cpp
LLChatBar::handleKeyHere()
LLChatBar::sendChat()
llgesturemgr.cpp
LLGestureManager::triggerAndReviseString()
LLGestureManager::playGesture()
SelfVoicing::Speak() // Added by me


viewer.cpp
main_loop()
idle()
llgesturemgr.cpp
LLGestureManager::update()
LLGestureManager::stepGesture()
LLGestureManager::runStep()
llagent.cpp
LLAgent::sendAnimationRequest() // Sends AgentAnimation message


Audio

LLAudioSource* findAudioSource( const LLUUID& source_id );
void addAudioSource( LLAudioSource* asp );
LLAudioChannel* getFreeChannel( const F32 priority );
BOOL hasLocalFile( const LLUUID& uuid );
BOOL preloadSound( const LLUUID& uuid );
void setListener( LLVector3 pos, LLVector3 vel, LLVector3 up, LLVector3 at );
void triggerSound( const LLUUID& sound_id, const LLUUID& owner_id, const F32 gain, const LLVector3d& pos_global = LLVector3d::zero );

audioengine.cpp
LLAudioEngine* gAudiop = NULL;

llstartup.cpp
BOOL idle_startup()
gAudiop = (LLAudioEngine *) new LLAudioEngine_FMOD();
BOOL init = gAudiop->init(kAUDIO_NUM_SOURCES, window_handle);

viewer.cpp
void init_audio()
gAudiop->preloadSound(LLUUID(gSavedSettings.getString("UISndAlert"))); // Lots of other preloaded sounds too

lscript_library.cpp
LLScriptLibrary::init()
addFunction(new LLScriptLibraryFunction(10.f, 0.f, dummy_func, "llPlaySound", NULL, "sf", "llPlaySound(string sound, float volume)\nplays attached sound once at volume (0.0 - 1.0)"));

llpreviewsound.cpp
LLPreviewSound::playSound( void *userdata )
llviewermessage.cpp
send_sound_trigger(const LLUUID& sound_id, F32 gain)
msg->newMessageFast(_PREHASH_SoundTrigger);

llpreviewsound.cpp
LLPreviewSound::auditionSound( void *userdata )
gAudiop->triggerSound( ... )

llvoavatar.cpp
LLVOAvatar::updateCharacter(LLAgent &agent)
gAudiop->triggerSound(step_sound_id, getID(), gain, foot_pos_global);

audioengine.cpp

LLAudioEngine::triggerSound(const LLUUID &audio_uuid, const LLUUID& owner_id, const F32 gain, const LLVector3d &pos_global)
LLAudioSource *asp = new LLAudioSource(source_id, owner_id, gain);
gAudiop->addAudioSource(asp);
asp->play(audio_uuid);

BOOL LLAudioSource::play(const LLUUID &audio_uuid)
LLAudioData *adp = gAudiop->getAudioData(audio_uuid);
addAudioData(adp);
getChannel()->play();

audioengine_fmod.cpp
LLAudioChannelFMOD::play()
getSource()->setPlayedOnce(TRUE);
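
Given the triggerSound() signature listed above, playing a one-off positional cue from client code looks fairly simple. Here is a hedged sketch of what such a call might look like; the sound UUID is a placeholder, the helper name is my own, and where exactly a call like this should live is an open question.

// Illustrative only: fire a one-shot positional cue using the
// LLAudioEngine::triggerSound() signature listed above. BEACON_SOUND_ID
// is a placeholder, not a real asset; the owner is our own agent, as in
// the footstep example from llvoavatar.cpp.
void playLocatorCue(const LLVector3d& target_pos_global)
{
    const LLUUID BEACON_SOUND_ID("00000000-0000-0000-0000-000000000000");
    const F32 gain = 1.0f;

    if (gAudiop)   // gAudiop is only created during idle_startup()
    {
        gAudiop->triggerSound(BEACON_SOUND_ID, gAgent.getID(), gain,
                              target_pos_global);
    }
}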

Object Detection

llvolume.h
const LLPCode LL_PCODE_CUBE = 1;
const LLPCode LL_PCODE_LEGACY_AVATAR = 0x20 | LL_PCODE_LEGACY; // PLAYER

llviewerobject.cpp
LLViewerObject *LLViewerObject::createObject(const LLUUID &id, const LLPCode pcode, LLViewerRegion *regionp)
case LL_PCODE_VOLUME:
res = new LLVOVolume(id, pcode, regionp); break;
case LL_PCODE_LEGACY_AVATAR:
res = new LLVOAvatar(id, pcode, regionp); break;

llviewerobjectlist.cpp
void LLViewerObjectList::processObjectUpdate( ... )
objectp = createObject(pcode, regionp, fullid, local_id, gMessageSystem->getSender());

LLViewerObject *LLViewerObjectList::createObjectViewer(const LLPCode pcode, LLViewerRegion *regionp)
LLViewerObject *objectp = LLViewerObject::createObject(fullid, pcode, regionp);
mUUIDObjectMap[fullid] = objectp;
mObjects.put(objectp);

llviewerobjectlist.h
LLDynamicArrayPtr<LLPointer<LLViewerObject>, 256> LLViewerObjectList::mObjects;


llviewerobjectlist.cpp
LLViewerObjectList::processObjectUpdate( ... )
objectp = createObject(pcode, regionp, fullid, local_id, gMessageSystem->getSender());
LLViewerObject *LLViewerObjectList::createObject( ... )
LLViewerObject *objectp = LLViewerObject::createObject(fullid, pcode, regionp);
llviewerobject.cpp
LLViewerObject *LLViewerObject::createObject( ... )
case LL_PCODE_LEGACY_AVATAR:
res = new LLVOAvatar(id, pcode, regionp); break;


llviewermessage.cpp
process_object_update()

llviewerobjectlist.cpp
LLViewerObjectList::processObjectUpdate( ... )
LLViewerObjectList::processUpdateCore( ... )

llvovolume.cpp
LLVOVolume::processUpdateMessage()
if (update_type == OUT_FULL)
BOOL LLVOVolume::setVolume( ... )
LLPrimitive::setVolume( ... )


pipeline.cpp
LLPipeline::updateGeom()
LLPipeline::updateDrawableGeom()

lldrawable.cpp
LLDrawable::updateGeometry()
mVObjp->updateGeometry(this);

llvovolume.cpp
LLVOVolume::updateGeometry(LLDrawable *drawable)


llviewermessage.cpp
process_object_update()
llviewerobjectlist.cpp
LLViewerObjectList::processObjectUpdate()
LLViewerObjectList::createObject()
LLViewerObject::createObject(...);
LLViewerObjectList::updateActive()
mActiveObjects.insert(...); // LLVOAvatar, LLVOClouds, etc.
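
The useful thing here is that the client keeps a complete list of everything it knows about (mObjects / mActiveObjects), so a self-voicing layer could periodically scan that list and announce, or play a positional cue for, any avatar within a given range. Below is a standalone sketch of that scan with made-up stand-ins for LLViewerObject; it is illustrative only.

// Standalone sketch: scan a list of tracked objects and report avatars
// within a given radius of the listener. The Object type stands in for
// LLViewerObject; nothing here is Linden code.
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

struct Vec3 { double x, y, z; };

struct Object
{
    std::string name;
    bool isAvatar;
    Vec3 pos;
};

double distance(const Vec3& a, const Vec3& b)
{
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Announce every avatar within 'radius' metres of the listener.
void announceNearbyAvatars(const std::vector<Object>& objects,
                           const Vec3& listener, double radius)
{
    for (const Object& obj : objects)
    {
        if (!obj.isAvatar) continue;
        const double d = distance(obj.pos, listener);
        if (d <= radius)
        {
            std::cout << obj.name << " is " << d << " metres away\n";
        }
    }
}

int main()
{
    std::vector<Object> objects = {
        {"Resident A",   true,  {10.0, 0.0, 2.0}},
        {"Wooden bench", false, { 3.0, 0.0, 1.0}},
        {"Resident B",   true,  {40.0, 0.0, 0.0}},
    };
    announceNearbyAvatars(objects, {0.0, 0.0, 0.0}, 20.0);
}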

SL Groups

Some groups I found in SL that might be relevant:


Disability Support Workers Int.
18 visible members.

For people working in the field of disability, looking for a place to chat, relax and unwind.

Join us to talk about anything from strategies to songwriters, legislation to landscaping :)

This group is only a few days old, I'll be trying to get the word out IRL ASAP !! :)

-Filter Miles


Disabled SL-Peoples Association
15 visible members

A group for physically disabled people.

Founded by the Dane Arcadian Meili for handicapped and physically disabled people.

You have to be 18+ to join, and the invite will come from the group owner, Arcadian Meili. Send an Instant Message and tell him why you want to join.

People without disabilities can become members but need a VERY good reason.

Purpose of the group is:
1: as a community for disabled
2: communication between disabled and care givers or other people in the healthcare area
+ more.

Monday 22 October 2007

Server Built

Here's my OpenSim running,



I'm now going to try to populate it with accessible content. This could be an easy way to prototype a framework, a set of standards, or a test environment in which we could let blind people interact with one another.
Unfortunately, script support in OpenSim is extremely limited at present, so audio playback is not possible.

The only current way to create such a test environment is to rent use of Linden's commercial servers.

SLeek

SLeek, a lightweight client, might be a useful starting point for developing an audio-only viewer. It builds very quickly and looks like a small, straightforward C# codebase with no world rendering included.

Haptic Wearables

Engadget reports from the E for All Expo about a force feedback vest, initially designed to provide a tactile effect when your game avatar is shot in Call of Duty.

This got me thinking about wearables and haptic feedback generally, as they could provide a useful interface for this project. And, as they say, your accessibility issue is my usability issue: tactile feedback is a powerful feature for gaming in general.

It also struck a chord because I was discussing military flight simulators with a guy who has recently been accepted into the RAF. He told me about the kind of physical feedback those machines are equipped with: the pilot is strapped in as they would be in a real jet, but the straps are used to simulate the sensation of increased g-force when flying the aircraft through tight turns. They pull the pilot into the seat with a force comparable to that which would be experienced in an actual aircraft.

Friday 19 October 2007

National Science Foundation

So the NSF has funded a similar project for Eelke Folmer (of HelpYouPlay.com), whose scope is much greater than our own. With 12 months and a budget of $90,448 it's clearly one to watch, though I have a couple of thoughts about the project statement:

In this exploratory project, he will develop a prototype client for Second Life that offers a basic level of accessibility, and which will allow him to assess the feasibility of and technical requirements for a client that is fully accessible to blind players. The prototype client will initially allow blind players to navigate the environment using voice commands alone; it will then be enhanced and extended, as time and resources allow, so as to enable these players to interact in meaningful ways with other players.

That's interesting. Most audio games use keyboard navigation. I don't understand why voice commands are preferred, and why they're developed during the initial stages of the prototype when it would seem to me that the first thing you need is feedback from the world (i.e., spatial audio cues) before you start to move around in it.

Achieving these objectives is not straightforward, because the client and server of Second Life have only recently been made open source and no one has yet attempted to create an accessible client for the environment.

I didn't think the server had been open sourced yet, though that is apparently planned for some as-yet-unspecified point in the future. I have heard that some people have reverse engineered the network traffic (or merely extracted it from the client source) and extrapolated their own servers based on how the protocol appears to work. The official line from Linden is,

What source code won't you be releasing?
We don't (yet) plan to release the code that runs our simulators or other server code ("the Grid"). We're keeping an open mind about the possibility of opening more of the Second Life Grid; the level of success we have with open sourcing our viewer will direct the speed and extent of further moves in this arena.


There's an interview with Eelke for further reading, too.

Second Life Client Built

Following instructions on the Wiki, I've just built my first Second Life client.
Here I am debugging it in Visual Studio,



I had a few problems building it, but after following the instructions properly it worked out.

There is a small bug in the newview project though:

Properties->Configuration Properties->Custom Build Step->General->Command Line

It should read

copy "$(TargetDir)\$(TargetFileName)" "$(ProjectDir)"

Instead of

copy $(TargetDir)\$(TargetFileName) $(ProjectDir)

That makes sure that the executable gets copied when you have spaces in your path.

I also had a problem building the source,

llcompilequeue.obj : error LNK2019: unresolved external symbol "int __cdecl lscript_compile(char const *,char const *,char const *,int)" (?lscript_compile@@YAHPBD00H@Z) referenced in function "protected: void __thiscall LLFloaterCompileQueue::compile(char const *,class LLUUID const &)" (?compile@LLFloaterCompileQueue@@IAEXPBDABVLLUUID@@@Z)

llpreviewscript.obj : error LNK2001: unresolved external symbol "int __cdecl lscript_compile(char const *,char const *,char const *,int)" (?lscript_compile@@YAHPBD00H@Z)

This is mentioned in the Wiki, but only for Visual Studio 2005, whereas I was using the (recommended) .NET 2003. Upon further investigation it turned out to be a problem in the compilation of the lscript_compile or lscript_compile_fb projects: Flex was crashing for some reason. I realised that I had earlier cancelled an update of Cygwin, which was probably the cause of the failure, so I started the update again, and once it was complete the projects compiled fine without Flex barfing.

Anyway, I finally built and ran the executable.

The significance of this is that I could (potentially) now develop a non-visual client, using only audio feedback. That's got to be the ultimate goal of an accessible client but is unfortunately beyond the scope of this current project. All I'll be able to do within this remit is evaluate the feasibility of that development and make suggestions for the future.

Gameplay conventions

I've been thinking about game audio recently, and was having a conversation with a friend about Valve's new multiplayer FPS, TF2. I watched a video of some gameplay footage to get an idea of what the game was like, and was surprised to recognise some of the audio effects from another of Valve's seminal titles, Half-Life (effects which were also used in HL2).



Specifically I recognised the 'heal' sound that the stations make when they recover your health, shields or ammo, and the weapon select confirmation noise (possibly also one of the pistols and shotgun?). While it's natural to use the same audio in a sequel (HL to HL2), I was surprised that they used the same effects in a title from a totally independent game world (TF2). It works extremely well, though. I instantly understood the significance of the audio cues and hence what was happening in gameplay terms.

This in turn made me think about gameplay mores, about the tropes and aesthetics that have become de facto standards, and how they help familiarise us with new games. But what, then, of audio games? I wonder if they suffer from underdevelopment such that no standards have emerged yet.

This reminds me a little bit of gaming during the 1980s. This period was characterised by the diversity of games that didn't seem to fit into genres yet. By the 90s I feel that the commercial market had evolved and certain conventions had emerged, for example using the WASD keys for navigating first person games.

This is a particularly interesting point for me, as my MA dissertation dealt with embodiment in games and built on the extension thesis of Marshall McLuhan and the phenomenology of Maurice Merleau-Ponty, amongst others. The basic premise is that our sense of self is predicated on our sensory experience, which depends on our situated body and its relation to the rest of the world. In a game environment mediated by a keyboard, WASD becomes a naturalised and pre-reflective expression of our intentions. The reuse of this form allows us to build up what Merleau-Ponty refers to as the "habitual body image".

The absence of consistent interface semiotics in audio games, much as with the games of the early 80s, means there is little continuity to carry over from one title to the next.

On the one hand the 80s was a very creative time, which I think a lot of people yearn for in their renewed interest in retro gaming; on the other hand the lack of a shared language of gameplay acts as a kind of barrier, increasing the learning curve of each and every game. This in turn was an obstacle that had to be overcome on the way to mass commercial viability for the industry.

One possibility for the project I'm currently engaged in might be to investigate and define standards for audio interaction rather than to create a client. Another aspect of Second Life that is interesting in this regard is the possibility of owning land and creating environments that can be controlled to be more accessible. For example, I could imagine an island designed for blind users, where all objects emitted audio cues. This might be an easier way to prototype the requirements of a client.

This idea came from thinking about AudioQuake as a mod for an existing game. Second Life is more complicated because the environment is so much more diverse and volatile, and is not under the developer's control in the way it is in Quake or other games.

Also, there's a problem with my current plan for developing a prototype client using just Linden Scripting Language: the only feasible technique for creating spatial audio is to create an invisible object that follows the target object and emits sound, thus indicating the target's location to a blind user. However, this audio will be heard by everyone, especially the target, and even though they have the ability to mute the emitter, it is very anti-social behaviour! The optimal solution is to develop a dedicated client so that 3D audio can be triggered on the local rather than the server side, which is the approach being followed by the National Science Foundation project, and to a certain extent also evaluated in ours.
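
To make the client-side idea concrete, here is a small standalone sketch of the kind of local spatialisation a dedicated client could perform each frame: derive a stereo pan and gain for a cue from the listener-relative position of the target, so no server-side emitter object (and no audible spam for other residents) is needed. The maths is deliberately crude and all the names are my own.

// Standalone sketch: derive a simple stereo pan and gain for a cue that
// tracks a target locally. A real client would feed 'pan' and 'gain' into
// its audio engine (e.g. FMOD) every frame; here we just print them.
#include <algorithm>
#include <cmath>
#include <iostream>

struct Vec3 { double x, y, z; };

// Pan in [-1, 1] (left..right) and gain in [0, 1] for a listener at the
// origin facing +Y, with the cue fading out beyond maxRange metres.
void spatialise(const Vec3& target, double maxRange, double& pan, double& gain)
{
    const double dist = std::sqrt(target.x * target.x +
                                  target.y * target.y +
                                  target.z * target.z);
    const double azimuth = std::atan2(target.x, target.y); // 0 = straight ahead
    pan  = std::sin(azimuth);
    gain = std::max(0.0, 1.0 - dist / maxRange);
}

int main()
{
    double pan, gain;
    spatialise({5.0, 5.0, 0.0}, 20.0, pan, gain);   // target ahead and to the right
    std::cout << "pan=" << pan << " gain=" << gain << "\n";
}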

Perhaps the quickest and most effective solution in the time frame is to simply buy land on which to develop an accessible environment. However, this would require a modest investment of real world money as land in Second Life is sold commercially (at least for now, until the server is open sourced).

A preferable and free solution would be to simply run our own server, but the current Open Source version is quite limited in what scripts it can run.

Half-Life (Windows). Valve, Electronic Arts. (19th November, 1998).
Half-Life 2 (Windows). Valve. (16th November, 2004).
Team Fortress (Windows). Caughley, Ian; Cook, John; Walker, Robin. (Australia: 1996).
Team Fortress 2 (Windows). Valve, Electronic Arts. (2007)

McLuhan, Marshall. Understanding Media: The Extensions of Man. (New York: McGraw Hill, 1964)
Merleau-Ponty, Maurice. The Phenomenology of Perception. trans. by Colin Smith (New York: Humanities Press, 1962). Originally published as Phénoménologie de la perception (Paris: Gallimard, 1945).
White, Gareth. Embodied Evil - The Aesthetics of Embodiment in Resident Evil 4: Wii Edition. (The University of the West of England, 2007)