At the end of July, Henny Swan, a web accessibility consultant for the Web Access Centre, asked for the community's evaluation of Second Life in terms of accessibility.
This is encouraging because it means that others are considering SL in the same way we are, but particularly interesting were the comments from one user who observed that blind people can't even create an account, let alone use the client!
Also reported on this site is that Judy Brewer, Director of the Web Accessibility Initiative (WAI), World Wide Web Consortium (W3C), gave a presentation at a Second Life conference sponsored by the U.S. Department of State Bureau of International Information Programs (IIP) and the University of Southern California Annenberg School for Communication. She is reported as saying the following,
"If a person has a visual disability, they need an alternative to the visual environment on Second Life. Maybe a space could be magnified to make it easier to see. A speech reader could speak the text typed into the chat."
"There need to be easy and reliable ways to be able to add text descriptions to all content created in SL."
"When the user comes into the world, the items are described as well as their positions," explained Colm O'Brien, one of the team of four researchers who worked on the project.
"There is also sound attached - for example, if there's a tree nearby you will hear a rustling of leaves," said Mr O'Brien.
The work also developed tools which use text-to-speech software to read out any chat from fellow avatars in the virtual world that appears in a text box.
Characters in the virtual world can have a "sonar" attached to them so that the user gets audible cues to alert them to when they are approaching, from which direction and how near they are.
Today I've mostly been investigating text-to-speech and accessibility in Windows applications.
Microsoft offers an API called Active Accessibility (MSAA) which defines a standard way for clients (for example, screen readers such as JAWS) to communicate with regular applications (called servers) that might not provide their own accessibility features.
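To give a flavour of the kind of information MSAA carries, here's a small, purely illustrative Windows Forms sketch (nothing to do with the SL viewer; the names are made up) where a control exposes an accessible name, description and role that a client such as JAWS can query:

using System;
using System.Windows.Forms;

// Purely illustrative: a standard Windows Forms control exposing
// name/description/role information through its MSAA representation.
class MsaaDemo
{
    [STAThread]
    static void Main()
    {
        Form form = new Form();
        form.Text = "MSAA demo";

        Button speakButton = new Button();
        speakButton.Text = "Speak";
        // These properties feed straight into the control's accessible object,
        // which screen readers pick up via Active Accessibility.
        speakButton.AccessibleName = "Speak selected text";
        speakButton.AccessibleDescription = "Reads the selected text aloud";
        speakButton.AccessibleRole = AccessibleRole.PushButton;

        form.Controls.Add(speakButton);
        Application.Run(form);
    }
}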
This is one possible way to address the accessibility of Second Life, by taking the existing viewer code and making it conform to the Active Accessibility API. The alternative, which I was toying with earlier today, is to modify the SLeek viewer* to add self-voicing using SAPI 5.1 through SpeechLib in .Net 2.0.
* It seems inappropriate to call it a viewer when there's nothing much to view, and especially as we're intending to create a piece of software that allows the user to hear the game.
Apparently, in order to use SpeechLib in .Net 2.0, the SAPI DLL might need to be converted with TlbImp, thus:
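Something along these lines should generate the interop assembly (the path to sapi.dll is an assumption and will vary with the Windows installation):

tlbimp "C:\Program Files\Common Files\Microsoft Shared\Speech\sapi.dll" /out:SpeechLib.dll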
This is some example C# code demonstrating voicing:
using System.Threading; // for Timeout.Infinite
using SpeechLib;

SpVoice objSpeech = new SpVoice();
// Start speaking the text box contents asynchronously...
objSpeech.Speak(textBox1.Text, SpeechVoiceSpeakFlags.SVSFlagsAsync);
// ...then block until the speech has finished.
objSpeech.WaitUntilDone(Timeout.Infinite);
Given that I have no SL budget for land or audio uploads to the main grid, and that my OpenSim grid doesn't support scripting, I've spent the day looking into the structure of Linden's viewer ("newview").
Windows:

crt0.cpp: WinMainCRTStartup()
viewer.cpp: WinMain()
  gViewerWindow = new LLViewerWindow();
    mRootView = new LLRootView();
  gViewerWindow->initBase();

llviewerwindow.cpp: LLViewerWindow::initBase()
  gFloaterView = new LLFloaterView(); mRootView->addChild(gFloaterView, -1);
  gSnapshotFloaterView = new LLSnapshotFloaterView(); mRootView->addChild(gSnapshotFloaterView);
  gConsole = new LLConsole(); mRootView->addChild(gConsole);
  gDebugView = new LLDebugView(); mRootView->addChild(gDebugView);
  gHUDView = new LLHUDView(); mRootView->addChild(gHUDView);
  gNotifyBoxView = new LLNotifyBoxView(); mRootView->addChild(gNotifyBoxView, -2);
  mProgressView = new LLProgressView(); mRootView->addChild(mProgressView);

llviewerwindow.cpp: LLViewerWindow::initWorldUI()
  gChatBar = new LLChatBar("chat", chat_bar_rect); gToolBar = new LLToolBar("toolbar", bar_rect); gOverlayBar = new LLOverlayBar("overlay", bar_rect);
  gBottomPanel = new LLBottomPanel(); gBottomPanel->addChild(gChatBar); gBottomPanel->addChild(gToolBar); gBottomPanel->addChild(gOverlayBar);
  mRootView->addChild(gBottomPanel);
  gHoverView = new LLHoverView("gHoverView", full_window);
  gFloaterMap = new LLFloaterMap("Map"); gFloaterWorldMap = new LLFloaterWorldMap(); gFloaterTools = new LLFloaterTools();
  gStatusBar = new LLStatusBar("status", status_rect); gViewerWindow->getRootView()->addChild(gStatusBar);
MSAA

This is where the Windows message queue is dealt with (callback):

llwindowwin32.cpp: LLWindowWin32::mainWindowProc()
LLView is responsible for handling input and so is perhaps one place to insert MSAA code. In particular, during the startup procedure documented above, mRootView is created as the top-level view. Note the following members:
LLView::focusNext()

// For example:
lllineeditor.cpp: LLLineEditor::setFocus()
lluictrl.cpp: LLUICtrl::setFocus()
llfocusmgr.cpp: gFocusMgr.setKeyboardFocus()
llviewerobject.cpp: LLViewerObject *LLViewerObject::createObject(const LLUUID &id, const LLPCode pcode, LLViewerRegion *regionp)
  case LL_PCODE_VOLUME:
    res = new LLVOVolume(id, pcode, regionp);
    break;
  case LL_PCODE_LEGACY_AVATAR:
    res = new LLVOAvatar(id, pcode, regionp);
    break;
Disability Support Workers Int. 18 visible members.
For people working in the field of disability, looking for a place to chat, relax and unwind.
Join us to talk about anything from strategies to songwriters, legislation to landscaping :)
This group is only a few days old, I'll be trying to get the word out IRL ASAP !! :)
-Filter Miles
Disabled SL-Peoples Association 15 visible members
A group for physical disabled people.
Founded by the Dane Arcadian Meili for handicapped and physical disabled bodys or people.
You have to be 18+ to join and the invite you will get from the group owner, Arcadian Meili. Send an Instant Message and tell why you wanna join.
People without disabilities can become members but need a VERY good reason.
Purpose of the group is: 1: as a community for disabled 2: communication between disabled and care givers or other people in the healthcare area + more.
I'm now going to try to populate it with accessible content. This could be an easy way to prototype a framework, set of standards, or test environment in which we could let blind people interact with one another. Unfortunately, at present, script support in OpenSim is extremely limited, so audio playback is not possible.
The only way to create such a test environment at present is to rent use of Linden's commercial servers.
SLeek, a lightweight client, might be a useful starting point for developing an audio-only viewer. It builds very quickly and looks like a small, straightforward C# codebase with no world rendering included.
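As a very rough sketch of the self-voicing idea on top of SLeek (the class name, method and the hookup into SLeek's chat handling are all assumptions; only the SpeechLib calls are taken from the SAPI example above), incoming chat could simply be handed to the synthesiser:

using SpeechLib; // interop assembly generated from SAPI 5.1

// Hypothetical self-voicing helper: wherever SLeek receives a line of chat,
// it would hand the text to this class so that it is spoken aloud. The hookup
// into SLeek's chat handling (the libsecondlife event) is assumed, not shown.
class ChatSpeaker
{
    private SpVoice voice = new SpVoice();

    public void OnChatLine(string fromName, string message)
    {
        // Speak asynchronously so the client's UI thread is not blocked.
        voice.Speak(fromName + " says " + message, SpeechVoiceSpeakFlags.SVSFlagsAsync);
    }
}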
Engadget reports from the E for All Expo about a force feedback vest, initially designed to provide a tactile effect when your game avatar is shot in Call of Duty.
This got me thinking about wearables and haptic feedback generally, as they could provide a useful interface for this current project. And, as they say, your accessibility issue is my usability issue: tactile feedback is a powerful feature for gaming generally.
It also struck a chord because I was discussing military flight simulators with a guy who's recently been accepted into the RAF. He told me about the kind of physical feedback those machines are equipped with: the pilot is strapped in as they might be in a normal jet, but the straps are used to simulate the sensation of increased g-force when flying the aircraft through tight turns and so on. They pull the pilot into the seat with a force comparable to that which would be experienced in an actual aircraft.
So the NSF is funding a similar project by Eelke Folmer (of HelpYouPlay.com), whose scope is much greater than our own. With 12 months and a budget of $90,448, they're clearly ones to watch, though I have a couple of thoughts about their statement:
In this exploratory project, he will develop a prototype client for Second Life that offers a basic level of accessibility, and which will allow him to assess the feasibility of and technical requirements for a client that is fully accessible to blind players. The prototype client will initially allow blind players to navigate the environment using voice commands alone; it will then be enhanced and extended, as time and resources allow, so as to enable these players to interact in meaningful ways with other players.
That's interesting. Most audio games use keyboard navigation. I don't understand why voice commands are preferred, and why they're developed during the initial stages of the prototype when it would seem to me that the first thing you need is feedback from the world (i.e., spatial audio cues) before you start to move around in it.
Achieving these objectives is not straightforward, because the client and server of Second Life have only recently been made open source and no one has yet attempted to create an accessible client for the environment.
I didn't think the server had been open sourced yet, though it is apparently planned for some as-yet unspecified point in the future. I have heard that some people have reverse-engineered the network traffic (or merely extracted it from the client source) and extrapolated their own server based on how it appears to work. The official line from Linden is,
What source code won't you be releasing? We don't (yet) plan to release the code that runs our simulators or other server code ("the Grid"). We're keeping an open mind about the possibility of opening more of the Second Life Grid; the level of success we have with open sourcing our viewer will direct the speed and extent of further moves in this arena.
There's also an interview with Eelke for further reading.
Following instructions on the Wiki, I've just built my first Second Life client. Here I am debugging it in Visual Studio.
I had a few problems building it, but after following the instructions properly it worked out.
There is a small bug in the newview project though:
Properties->Configuration Properties->Custom Build Step->General->Command Line
It should read copy "$(TargetDir)\$(TargetFileName)" "$(ProjectDir)"
Instead of copy $(TargetDir)\$(TargetFileName) $(ProjectDir)
That makes sure that the executable gets copied when you have spaces in your path.
I also had a problem building the source:

llcompilequeue.obj : error LNK2019: unresolved external symbol "int __cdecl lscript_compile(char const *,char const *,char const *,int)" (?lscript_compile@@YAHPBD00H@Z) referenced in function "protected: void __thiscall LLFloaterCompileQueue::compile(char const *,class LLUUID const &)" (?compile@LLFloaterCompileQueue@@IAEXPBDABVLLUUID@@@Z)
This is mentioned in the Wiki, but only for Visual Studio 2005, whereas I was using the (recommended) Visual Studio .NET 2003. Upon further investigation it turned out to be a problem in the compilation of the lscript_compile or lscript_compile_fb projects: Flex was crashing for some reason. I realised that I had earlier cancelled an update of Cygwin, which was probably the cause of the failure, so I started the update again and, once it was complete, the projects compiled fine without Flex barfing.
Anyway, I finally built and ran the executable.
The significance of this is that I could (potentially) now develop a non-visual client, using only audio feedback. That's got to be the ultimate goal of an accessible client but is unfortunately beyond the scope of this current project. All I'll be able to do within this remit is evaluate the feasibility of that development and make suggestions for the future.
I've been thinking about game audio recently, and was having a conversation with a friend about Valve's multiplayer FPS release, TF2. I watched a video of some gameplay footage to get an idea of what the game was like and was surprised that I recognised some of the audio effects from another of Valve's seminal titles, Half Life (effects which were also used in HL2).
Specifically I recognised the 'heal' sound that the stations make when they recover your health, shields or ammo, and the weapon select confirmation noise (possibly also one of the pistols and shotgun?). While it's natural to use the same audio in a sequel (HL to HL2), I was surprised that they used the same effects in a title from a totally independent game world (TF2). It works extremely well, though. I instantly understood the significance of the audio cues and hence what was happening in gameplay terms.
This in turn made me think about gameplay mores, about the tropes and aesthetics that have become de facto standards, and how they help familiarise us with new games. But what then of audio games? I wonder if they suffer from underdevelopment such that no standards have emerged yet.
This reminds me a little of gaming during the 1980s, a period characterised by a diversity of games that didn't yet seem to fit into genres. By the 90s I feel that the commercial market had evolved and certain conventions had emerged, for example using the WASD keys for navigating first-person games.
This is a particularly interesting point for me as my MA dissertation dealt with embodiment in games, and built on the extension thesis of Marshall McLuhan and the phenomenology of Maurice Merleau-Ponty, amongst others. The basic premise is that our sense of self is predicated on our sensory experience, which depends on our situated body and its relation to the rest of the world. In a game environment, mediated by a keyboard, WASD becomes a naturalised and pre-reflective expression of our intentions. The reuse of this form allows us to build up what Merleau-Ponty refers to as the "habitual body image".
The absence of consistent interface semiotics in audio games, as with the early-80s games, means there is no continuity to transfer between any of them.
On the one hand, the 80s was a very creative time, which I think a lot of people yearn for in their renewed interest in retro gaming; but on the other hand, the lack of a shared language of gameplay acts as a kind of barrier, increasing the learning curve of each and every game. This in turn was an obstacle that had to be overcome on the way to mass commercial viability for the industry.
One possibility for this project I'm currently engaged in might be to investigate and define standards for audio interaction rather than to create a client. Another aspect of Second Life that is interesting in this regard is the possibility of owning land and creating environments which can be controlled to be more accessible. For example, I could imagine an island designed for blind users, where all objects emitted audio cues. This might be an easier way to prototype the requirements of a client.
This idea came from thinking about AudioQuake as a mod for an existing game. Second Life is more complicated because the environment is so much more diverse and volatile, and not under our control as it is in Quake or other games.
Also there's a problem with my current plan for developing a prototype client using just Linden Scripting Language: the only feasible technique for creating spatial audio is to create an invisible object that follows the target object and emits sound, thus indicating the target's location to a blind user. However, this audio will be heard by everyone, especially the target, which, even though they have the ability to mute the emitter, is very anti-social behaviour! The optimal solution is to develop a dedicated client so that 3D audio can be triggered on the local rather than the server side, which is the approach being followed by the National Science Foundation project, and to a certain extent also evaluated in our project.
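To illustrate what local triggering makes possible, here is a minimal sketch, assuming the listener's position and heading and the target's position are available from the client's own copy of the world state (the method and parameter names are mine), of turning a relative position into a stereo pan and volume for a locally played cue; actual playback is not shown:

using System;

// Minimal sketch: derive a stereo pan (-1 = hard left, +1 = hard right) and a
// volume (0..1) for an audio cue from a target's position relative to the
// listener. Positions and heading are assumed to come from the client's local
// world state; feeding these values into an audio API is not shown.
static class AudioCue
{
    public static void Compute(double listenerX, double listenerY, double headingRadians,
                               double targetX, double targetY,
                               out double pan, out double volume)
    {
        double dx = targetX - listenerX;
        double dy = targetY - listenerY;
        double distance = Math.Sqrt(dx * dx + dy * dy);

        // Angle of the target relative to the direction the listener is facing.
        double bearing = Math.Atan2(dy, dx) - headingRadians;

        // Pan follows the sine of the bearing: straight ahead is centred,
        // ninety degrees to one side is panned fully to that side.
        pan = Math.Sin(bearing);

        // Simple linear distance attenuation, silent beyond roughly 20 metres.
        volume = Math.Max(0.0, 1.0 - distance / 20.0);
    }
}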
Perhaps the quickest and most effective solution in the time frame is to simply buy land on which to develop an accessible environment. However, this would require a modest investment of real world money as land in Second Life is sold commercially (at least for now, until the server is open sourced).
A preferable and free solution would be to simply run our own server, but the current Open Source version is quite limited in what scripts it can run.
Half Life (Windows). Valve, Electronic Arts. (19th November, 1998).
Half Life 2 (Windows). Valve. (16th November, 2004).
Team Fortress (Windows). Caughley, Ian; Cook, John; Walker, Robin. (Australia: 1996).
Team Fortress 2 (Windows). Valve, Electronic Arts. (2007).
I've begun my research this week by investigating existing games which cater to visually impaired users, and my initial assessment is that this project is going to be problematic.
I had most success with AudioQuake as it allows the greatest degree of customisation, and hence control over the gameplay experience. I found it incredibly immersive to shut my eyes and concentrate on the audio in my headphones, trying to navigate around the space presented. I'd occasionally open my eyes to confirm that I was where I thought I was and most of the time I was right. It gave me some kind of impression of what life would be like without sight, and also suggested something of the aesthetics of audio gaming.
FPS games are clearly interesting for their physical action based largely around rapid navigation through space, and this is something that lots of us have probably enjoyed in the actual world as children as well as vicariously through film and games as adults. However, I have to wonder what kind of gaming pleasures are available to blind people. It would be great to speak to some blind gamers though to find out what games they like in the actual as well as virtual worlds. I guess they're less about gross but controlled movement and more about localised physical action, thought and imagination, but that's entirely the assumption of a fully-sighted person.
We might think in terms of Caillois' classic categories: ludus and paidia, and illinx (vertigo), mimesis, alea and agon. All of these basic forms are clearly available to blind gamers, though I wonder how effective audio computer games can be at stimulating illinx. The giddying excitement of fast movement through virtual worlds is a good example of the form this takes in visual computer games, but I wonder what is analogous in audio?
Perhaps we can learn something here from musicology. Some kinds of jazz or experimental music can perhaps produce a form of illinx (e.g., Mr Bungle), and the audio effects in Hollywood action movies can be evocative in themselves. Wouldn't it be reasonable to expect this to be heightened if the user were interactively involved in the audioscape?
Access Invaders is an interesting case study as it has different levels for blind players. In this mode the aliens group together in a single column and the player has to listen for where they (effectively a single entity) are. Again this makes me think that controlling the environment is an easy way to achieve accessibility (think about the bumpy paving at the edge of the street that can be felt by people using white canes, or the changes we make to public buildings for wheelchair access). As a game it's not terribly exciting, though I was capable of adapting to the style of play. It has to be said that in the contemporary gaming age, even the classic Space Invaders isn't terribly exciting either. The only positive thing I can say about this game qua game is that it demonstrates 2D audio as a feedback device.
So much for the excitement of AudioQuake; I got even less enjoyment from the critically acclaimed Terraformers, as I found the synthetic audio cues quite unpleasant to listen to. Personally I'd prefer more naturalistic, or at least more integrated, audio, that is, audio icons that bear more of a resemblance to what they represent rather than the current style of earcons. The obvious problem is: what sound should a wall make to indicate its presence? I did find the low-pitched throbbing quite evocative, though, and again musicology might be a good place to start for appropriate audio cues.
Access Invaders (Windows). Centre for Universal Access and Assistive Technologies, Human-Computer Interaction Laboratory, Institute of Computer Science, Foundation for Research and Technology (Hellas, Greece: January 2006) <http://www.ics.forth.gr/hci/ua-games/access-invaders/> (Last accessed 19th October 2007).
AudioQuake 0.3.0rc1 'glacier' (Windows). Accessible Gaming Rendering Independence Possible (27th June 2007) <http://www.agrip.org.uk/AudioQuake> (Last accessed 19th October 2007).