Friday 26 October 2007

RNIB: Web Access Centre

At the end of July, Henny Swan, a web accessibility consultant for the Web Access Centre, asked for the community's evaluation of Second Life in terms of accessibility.

This is encouraging because it means that others are considering SL in the same way as we are. Particularly interesting were the comments from one user who observed that blind people can't even create an account, let alone use the client!

Also reported on this site is that Judy Brewer, Director of the Web Accessibility Initiative (WAI), World Wide Web Consortium (W3C), gave a presentation at a Second Life conference sponsored by the U.S. Department of State Bureau of International Information Programs (IIP) and the University of Southern California Annenberg School for Communication. She is reported as saying the following:

"If a person has a visual disability, they need an alternative to the visual environment on Second Life. Maybe a space could be magnified to make it easier to see. A speech reader could speak the text typed into the chat."
"There need to be easy and reliable ways to be able to add text descriptions to all content created in SL."


Additionally, I read the BBC's article on IBM's "Accessibility In Virtual Worlds" project by Extreme Blue and the Human Ability and Accessibility Centre, which uses Active Worlds rather than SL as it can be run within a web browser (although when I tried it, it simply launched an external browser).

"When the user comes into the world, the items are described as well as their positions," explained Colm O'Brien, one of the team of four researchers who worked on the project.

"There is also sound attached - for example, if there's a tree nearby you will hear a rustling of leaves," said Mr O'Brien.

The work also developed tools which use text-to-speech software to read out any chat from fellow avatars in the virtual world that appears in a text box.

Characters in the virtual world can have a "sonar" attached to them so that the user gets audible cues alerting them when a character is approaching, from which direction, and how near it is.


The BBC provide an audio example of the interface, and there's a section on Radio 4's In Touch programme with interviews with interns Esmond Walsh and Antony Clinton, though unfortunately I can't source any more information about this fascinating-sounding project.
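The "sonar" cue described above presumably needs two pieces of information: how far away the character is, and where it is relative to the listener's facing. Here's a hypothetical sketch of that calculation (nothing in it is from the IBM project; all names and mappings are my own guesswork):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: derive an audible "sonar" cue from the listener's
// position/facing and a target character's position.
const double kPi = 3.14159265358979323846;

struct Vec3 { double x, y, z; };

// Distance in the horizontal plane.
double distanceTo(const Vec3& listener, const Vec3& target) {
    double dx = target.x - listener.x;
    double dy = target.y - listener.y;
    return std::sqrt(dx * dx + dy * dy);
}

// Bearing in degrees relative to the listener's facing:
// 0 = dead ahead, positive = counter-clockwise (listener's left), +/-180 = behind.
double bearingTo(const Vec3& listener, double facingRadians, const Vec3& target) {
    double absolute = std::atan2(target.y - listener.y, target.x - listener.x);
    double relative = absolute - facingRadians;
    while (relative > kPi)  relative -= 2 * kPi;
    while (relative < -kPi) relative += 2 * kPi;
    return relative * 180.0 / kPi;
}

// Map distance to a 0..1 cue volume: silent beyond maxRange, loudest up close.
double cueVolume(double distance, double maxRange) {
    if (distance >= maxRange) return 0.0;
    return 1.0 - distance / maxRange;
}
```

The bearing could then drive stereo panning and the volume could drive the repeat rate of a ping, which seems consistent with the cues described in the interview.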

Text to Speech Code

Today I've mostly been investigating text-to-speech and accessibility in Windows applications.

Microsoft offer an API called Active Accessibility (MSAA) which defines a standard way for clients (for example, screen readers such as JAWS) to communicate with regular applications (called servers) which might not provide their own accessibility features.

This is one possible way to address the accessibility of Second Life: take the existing viewer code and make it conform to the Active Accessibility API. The alternative, which I was toying with earlier today, is to modify the SLeek viewer* to add self-voicing using SAPI 5.1 through SpeechLib in .NET 2.0.

* It seems inappropriate to call it a viewer when there's nothing much to view, and especially as we're intending to create a piece of software that allows the user to hear the game.

Apparently, in order to use SpeechLib in .NET 2.0 the SAPI DLL might need to be converted with TlbImp, thus:

C:\Program Files\Microsoft Visual Studio .NET 2003\SDK\v1.1\Bin\TlbImp.exe "C:\Program Files\Common Files\Microsoft Shared\Speech\sapi.dll" /out:Interop.SpeechLib.dll

This is some example C# code demonstrating voicing:

using SpeechLib;
using System.Threading; // for Timeout.Infinite

SpVoice objSpeech = new SpVoice();
// Speak asynchronously, then block until the utterance has finished
objSpeech.Speak(textBox1.Text, SpeechVoiceSpeakFlags.SVSFlagsAsync);
objSpeech.WaitUntilDone(Timeout.Infinite);



Downloads:

SpeechSDK51.exe from SAPI 5.1
.Net Framework V3.5 Beta 2 and Redistributable
JAWS 8 demo

Thursday 25 October 2007

Viewer Call Stack Notes

Given that I have no SL budget for land or audio uploads to the main grid, and my OpenSim grid doesn't support scripting, I've spent the day looking into the structure of Linden's viewer ("newview").

Here are my notes from the call stack.


Startup

crt0.cpp
WinMainCRTStartup()

viewer.cpp
WinMain()
lldxhardware.cpp
LLDXHardware::getInfo()
CoInitialize(NULL);
CoUninitialize();

gViewerWindow = new LLViewerWindow(...);

llviewerwindow.cpp
LLViewerWindow::LLViewerWindow()
mWindow = LLWindowManager::createWindow(...);

pipeline.cpp
LLPipeline::init()
LLPipeline::getPool(LLDrawPool::POOL_ALPHA); // + others such as Glow, etc
LLDrawPool::createPool()
LLPipeline::addPool()
mPools.insert(new_poolp);

mRootView = new LLRootView(...);

LLViewerWindow::initBase()
gFloaterView = new LLFloaterView(...);

gConsole = new LLConsole(...);
mRootView->addChild(gConsole);

mRootView->addChild(gFloaterView, -1);

main_loop()
idle()
llviewerdisplay.cpp
display()
llstartup.cpp
idle_startup()
messaging.cpp
start_messaging_system()
LLMessageSystem::setHandlerFuncFast(...) // eg, _PREHASH_StartPingCheck with process_start_ping_check()
llmessagetemplate.h
setHandlerFunc()

gWorldp = new LLWorld()
llworld.cpp
LLWorld::addRegion()
llviewerregion.cpp
LLViewerRegion::LLViewerRegion()
llsurface.cpp
LLSurface::create()
LLSurface::initTextures()
pipeline.cpp
LLPipeline::addObject()
LLDrawable::createDrawable(this); // eg, LLVOWater
LLPipeline::getPool()
LLPipeline::addPool()
mPools.insert(new_poolp); // eg, DrawPoolWater,Terrain,SkyStars,Ground


register_viewer_callbacks()
msg->setHandlerFuncFast(_PREHASH_ChatFromSimulator, process_chat_from_simulator);

llviewerobject.cpp
LLViewerObjectList::update()
LLViewerObject.idleUpdate();

Menu Bar

llwindowwin32.cpp
LLWindowWin32::mainWindowProc()
llkeyboardwin32.cpp
LLKeyboardWin32::handleKeyDown()
llkeyboard.cpp
LLKeyboard::handleTranslatedKeyDown()
llviewerwindow.cpp
LLViewerWindow::handleTranslatedKeyDown()
llviewerkeyboard.cpp
LLViewerKeyboard::handleKey()
llviewerwindow.cpp
LLViewerWindow::handleKey()
llmenugl.cpp
LLMenuBarGL::handleAcceleratorKey()
llmenugl.cpp
LLMenuGL::handleAcceleratorKey()
LLMenuItemBranchDownGL::handleAcceleratorKey()
LLMenuGL::handleAcceleratorKey()
LLMenuItemCallGL::handleAcceleratorKey()
LLMenuItemGL::handleAcceleratorKey()
LLMenuItemCallGL::doIt()
LLPointer fired_event = new LLEvent(this);
fireEvent(fired_event, "on_click");
llevent.cpp
LLObservable::fireEvent()
mDispatcher->fireEvent()
LLEventDispatcher::fireEvent()
impl->fireEvent()
LLSimpleDispatcher::fireEvent()
llviewermenu.cpp
LLWorldAlwaysRun::handleEvent() // Sends SetAlwaysRun message
LLMenuItemGL::doIt();


Login
todo

Launch external website from Login panel

crt0.cpp
WinMainCRTStartup()
viewer.cpp
WinMain()
main_loop()
idle()
llmortician.cpp
LLMortician::updateClass()
llalertdialog.cpp
LLAlertDialog::~LLAlertDialog()
llpanellogin.cpp

LLPanelLogin::newAccountAlertCallback() // Passes CREATE_ACCOUNT_URL from llsecondlifeurls.cpp
llweb.h
LLWeb::loadURL()
llweb.cpp
LLWeb::loadURL()
LLWeb::loadURLExternal()
llwindowwin32.cpp
spawn_web_browser()
ShellExecute() // Win32 API

Messaging


main_loop()
idle()
idle_network()
message.cpp
LLMessageSystem::checkAllMessages()
LLMessageSystem::checkMessages()
lltemplatemessagereader.cpp
LLTemplateMessageReader::readMessage()
LLTemplateMessageReader::decodeData()
llmessagetemplate.h
LLMessageTemplate::callHandlerFunc()
llviewermessage.cpp

process_object_update()
llviewerobjectlist.cpp
LLViewerObjectList::processObjectUpdate()
LLViewerObjectList::createObject()
LLViewerObject::createObject(...);
LLViewerObjectList::updateActive()
mActiveObjects.insert(...); // LLVOAvatar, LLVOClouds, etc.

LLViewerObjectList::processUpdateCore()
pipeline.cpp
LLPipeline::addObject()
LLDrawable->createDrawable(this); // LLVOAvatar,Tree

llvoavatar.cpp
LLVOAvatar::createDrawable()
pipeline.cpp
LLPipeline::getPool()
LLPipeline::addPool(); // LLDrawPoolAvatar,Tree
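The setHandlerFuncFast() registrations seen in register_viewer_callbacks() effectively build a dispatch table: checkMessages() decodes an incoming packet and callHandlerFunc() routes it by template name. A minimal sketch of the pattern (class and method names here are mine, not Linden's):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Sketch of the setHandlerFuncFast()/callHandlerFunc() idea: incoming
// message names are looked up in a registry and routed to their handlers.
class MessageSystem {
public:
    using Handler = std::function<void(const std::string& payload)>;

    void setHandler(const std::string& name, Handler fn) {
        mHandlers[name] = std::move(fn);
    }

    // Returns true if a handler was registered for this message name.
    bool dispatch(const std::string& name, const std::string& payload) {
        auto it = mHandlers.find(name);
        if (it == mHandlers.end()) return false;
        it->second(payload);
        return true;
    }

private:
    std::map<std::string, Handler> mHandlers;
};
```

For self-voicing purposes, the useful observation is that a single registered handler (e.g. for ChatFromSimulator) is the choke point through which all chat text passes.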


Render

viewer.cpp
WinMain()
main_loop()
idle()
llviewerdisplay.cpp
display()
pipeline.cpp
LLPipeline::updateGeom()
LLPipeline::updateDrawableGeom()
lldrawable.cpp
LLDrawable::updateGeometry()
mVObjp->updateGeometry(this); // where mVObjp is LLVOWater, LLVOSurfacePatch

LLPipeline::renderGeom()
LLDrawPool->prerender(); // LLDrawPoolSky, LLDrawPoolStars, LLDrawPoolGround, LLDrawPoolTerrain, LLDrawPoolSimple, LLDrawPoolBump, LLDrawPoolAvatar, LLDrawPoolTree, LLDrawPoolGlow, LLDrawPoolWater, LLDrawPoolAlphaPostWater
LLDrawPool->render(i); // Same types as above

Notification
(e.g., click on an object, it gives you a card)

message.cpp
LLMessageSystem::checkMessages()

lltemplatemessagereader.cpp
LLTemplateMessageReader::decodeData()

llviewermessage.cpp
process_improved_im()
msg->getU8Fast( _PREHASH_MessageBlock, _PREHASH_Dialog, d); // IM_TASK_INVENTORY_OFFERED, IM_MESSAGEBOX, IM_GROUP_INVITATION, IM_INVENTORY_ACCEPTED, IM_GROUP_VOTE ... IM_GROUP_NOTICE ( LLGroupNotifyBox::show(), LLFloaterGroupInfo::showNotice() ),
inventory_offer_handler()
llnotify.cpp
LLNotifyBox::showXml()
notify = new LLNotifyBox(...);
gNotifyBoxView->addChildAtEnd(notify);
LLNotifyBox::moveToBack()
LLNotifyBoxView::showOnly()
LLNotifyBox::setVisible()
llview.cpp
LLPanel::setVisible() // Actually resolves to LLView
LLView::setVisible()

viewer.cpp
WinMain()
gViewerWindow = new LLViewerWindow()

llviewerdisplay.cpp
display_startup()
gViewerWindow->setup2DRender()

viewer.cpp
main_loop()
llviewerdisplay.cpp
display()
render_ui_and_swap()
render_ui_2d()
llviewerwindow.cpp
LLViewerWindow::draw()
llview.cpp
LLView::draw()
for (child_list_reverse_iter_t child_iter = mChildList.rbegin(); child_iter != mChildList.rend(); ++child_iter)
llnotify.cpp
LLNotifyBox::draw()

// The following are also rendered in this stack,

llconsole.cpp
LLConsole::draw()

llview.cpp
LLView::draw()

llnotify.cpp
LLNotifyBox::draw()

llhudview.cpp
LLHUDView::draw()

llfloater.cpp
LLFloaterView::draw()

llfloatermap.cpp
LLFloaterMap::draw()

lldraghandle.cpp
LLDragHandleTop::draw()

llnetmap.cpp
LLNetMap::draw()

lltextbox.cpp
LLTextBox::draw()

llresizehandle.cpp
LLResizeHandle::draw()

llbutton.cpp
LLButton::draw()

llviewerwindow.cpp
LLBottomPanel::draw()

llpanel.cpp
LLPanel::draw()

lloverlaybar.cpp
LLOverlayBar::draw()

llvoiceremotectrl.cpp
LLVoiceRemoteCtrl::draw()

lliconctrl.cpp
LLIconCtrl::draw()

llmediaremotectrl.cpp
LLMediaRemoteCtrl::draw()

llslider.cpp
LLSlider::draw()

llhoverview.cpp
LLHoverView::draw() // Tooltips - could use this to speak currently selected interface element

llstatgraph.cpp
LLStatGraph::draw()

llmenugl.cpp
LLMenuHolderGL::draw()
LLMenuBarGL::draw()
LLMenuItemBranchDownGL::draw()

llprogressview.cpp
LLProgressView::draw() // Loading bar?

Chat

llstartup.cpp
register_viewer_callbacks()
msg->setHandlerFuncFast(_PREHASH_ChatFromSimulator, process_chat_from_simulator);

llviewermessage.cpp

process_chat_from_simulator()
llfloaterchat.cpp
LLFloaterChat::addChat()

llconsole.cpp
LLConsole::addLine()

addChatHistory()
llpanelactivespeakers.cpp
LLPanelActiveSpeakers::setSpeaker()
llfloateractivespeakers.cpp
LLSpeakerMgr::setSpeaker()

SelfVoicing::Speak() // Added by me
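The SelfVoicing::Speak() hook above can be thought of as a simple queue between the chat pipeline and the TTS engine. Here's a stubbed sketch of what I mean (SAPI itself is Windows-only, so the backend is elided; the class is mine, not part of the viewer):

```cpp
#include <cassert>
#include <deque>
#include <string>

// Sketch of a self-voicing hook: chat lines are queued as they arrive and a
// TTS backend (SAPI in the real build) drains them one utterance at a time.
class SelfVoicing {
public:
    void Speak(const std::string& text) {
        if (!text.empty()) mQueue.push_back(text);
    }

    // Called from the idle loop; returns the next utterance for the backend,
    // or an empty string when there is nothing to say.
    std::string drainOne() {
        if (mQueue.empty()) return "";
        std::string next = mQueue.front();
        mQueue.pop_front();
        return next;
    }

    bool pending() const { return !mQueue.empty(); }

private:
    std::deque<std::string> mQueue;
};
```

Queueing rather than speaking synchronously matters because addChat() is called from the message pump, which shouldn't block on a long utterance.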

Windows
crt0.cpp
WinMainCRTStartup()
viewer.cpp
WinMain()
gViewerWindow = new LLViewerWindow();
mRootView = new LLRootView();

gViewerWindow->initBase();
llviewerwindow.cpp
LLViewerWindow::initBase()
gFloaterView = new LLFloaterView();
mRootView->addChild(gFloaterView, -1);

gSnapshotFloaterView = new LLSnapshotFloaterView();
mRootView->addChild(gSnapshotFloaterView);

gConsole = new LLConsole();
mRootView->addChild(gConsole);

gDebugView = new LLDebugView();
mRootView->addChild(gDebugView);

gHUDView = new LLHUDView();
mRootView->addChild(gHUDView);

gNotifyBoxView = new LLNotifyBoxView();
mRootView->addChild(gNotifyBoxView, -2);

mProgressView = new LLProgressView();
mRootView->addChild(mProgressView);
llviewerwindow.cpp
LLViewerWindow::initWorldUI()
gChatBar = new LLChatBar("chat", chat_bar_rect);
gToolBar = new LLToolBar("toolbar", bar_rect);
gOverlayBar = new LLOverlayBar("overlay", bar_rect);

gBottomPanel = new LLBottomPanel()
gBottomPanel->addChild(gChatBar);
gBottomPanel->addChild(gToolBar);
gBottomPanel->addChild(gOverlayBar);

mRootView->addChild(gBottomPanel);

gHoverView = new LLHoverView("gHoverView", full_window);
gFloaterMap = new LLFloaterMap("Map");
gFloaterWorldMap = new LLFloaterWorldMap();
gFloaterTools = new LLFloaterTools();
gStatusBar = new LLStatusBar("status", status_rect);
gViewerWindow->getRootView()->addChild(gStatusBar);

crt0.cpp
WinMainCRTStartup()
viewer.cpp
WinMain()
main_loop()
idle()
llstartup.cpp
idle_startup()
init_stat_view()
llstatview.cpp
LLStatView::LLStatView()

gDebugView->mStatViewp->addChildAtEnd();


MSAA

This is where the Windows message queue is dealt with (callback).

llwindowwin32.cpp

LLWindowWin32::mainWindowProc()

LLView is responsible for handling input, and so is perhaps one place to insert MSAA code.
In particular, during the startup procedure documented above, mRootView is created as the top-level view.
Note the following members:

LLView::tab_order_t;
LLView::focusNextRoot();
LLView::focusPrevRoot();
LLView::focusNextItem();
LLView::focusPrevItem();
LLView::focusFirstItem();
LLView::focusLastItem();

Additionally important is llfocusmgr.h with its class LLFocusMgr.

Upon state change (e.g., focus moved to a different UI element), issue:
NotifyWinEvent(EVENT_OBJECT_STATECHANGE, hWnd, (LONG)&lpData->btnSelf, CHILDID_SELF)
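A sketch of that idea: a focus manager that fires a notification on each state change. The injected callback stands in for NotifyWinEvent(EVENT_OBJECT_STATECHANGE, ...), which I've stubbed so the logic is testable off Windows (all names here are mine, not the viewer's):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Sketch: a focus manager that fires a platform notification whenever the
// keyboard focus changes. In a real MSAA server the callback would wrap
// NotifyWinEvent(EVENT_OBJECT_STATECHANGE, hWnd, objectId, childId).
class FocusMgr {
public:
    using Notifier = std::function<void(const std::string& controlName)>;

    explicit FocusMgr(Notifier notify) : mNotify(std::move(notify)) {}

    void setKeyboardFocus(const std::string& controlName) {
        if (controlName == mFocused) return;  // no change, no event
        mFocused = controlName;
        if (mNotify) mNotify(controlName);
    }

    const std::string& focused() const { return mFocused; }

private:
    std::string mFocused;
    Notifier mNotify;
};
```

The point of suppressing duplicate notifications is that screen readers re-announce on every state-change event, so spurious events make the interface very noisy.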

Focus
crt0.cpp
WinMainCRTStartup()
viewer.cpp
WinMain()
main_loop()
idle()
llstartup.cpp
idle_startup()
login_show()
llpanellogin.cpp
LLPanelLogin::show()
LLPanelLogin::setFocus()
LLPanelLogin::giveFocus()
lllineeditor.cpp
LLLineEditor::setFocus()
lluictrl.cpp
LLUICtrl::setFocus()
gFocusMgr.setKeyboardFocus()
llfocusmgr.cpp
LLFocusMgr::setKeyboardFocus()


Click on Username box in login screen:

llwindowwin32.cpp
LLWindowWin32::mainWindowProc()
case WM_LBUTTONDOWN
llviewerwindow.cpp
LLViewerWindow::handleMouseDown()
llview.cpp LLView::handleMouseDown()
LLView::childrenHandleMouseDown()
LLView::handleMouseDown()
LLView::childrenHandleMouseDown()
lllineeditor.cpp
LLLineEditor::handleMouseDown()
LLLineEditor::setFocus()
LLUICtrl::setFocus()
llfocusmgr.cpp
LLFocusMgr::setKeyboardFocus()
llwebbrowserctrl.cpp
LLWebBrowserCtrl::onFocusLost()
llviewerwindow.h
LLViewerWindow::focusClient()
llwindowwin32.cpp
LLWindowWin32::focusClient()
Platform SDK - SetFocus(HWND)

Tab from Username to Password

llwindowwin32.cpp
LLWindowWin32::mainWindowProc()
case WM_KEYDOWN
llkeyboardwin32.cpp
LLKeyboardWin32::handleKeyDown()
llkeyboard.cpp
LLKeyboard::handleTranslatedKeyDown()
llviewerwindow.cpp
LLViewerWindow::handleTranslatedKeyDown()
llviewerkeyboard.cpp
LLViewerKeyboard::handleKey()
llviewerwindow.cpp
LLViewerWindow::handleKey()
llpanel.cpp
LLPanel::handleKey()
LLUICtrl* keyboard_focus = gFocusMgr.getKeyboardFocus();

llfocusmgr.cpp
LLFocusMgr::childHasKeyboardFocus()

llview.cpp
LLView::isFocusRoot()

LLView::focusNextItem()
LLView::getTabOrderQuery()
query.addPreFilter( LLVisibleFilter::getInstance() );
query.addPreFilter( LLEnabledFilter::getInstance() );
query.addPreFilter( LLTabStopFilter::getInstance() );
query.addPostFilter( LLUICtrl::LLTabStopPostFilter::getInstance() );

LLView::focusNext()
// For example
lllineeditor.cpp
LLLineEditor::setFocus()
lluictrl.cpp
LLUICtrl::setFocus()
llfocusmgr.cpp
gFocusMgr.setKeyboardFocus()

llpanel.cpp
LLPanel::handleKey()
llview.cpp
LLView::handleKey()
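The tab traversal above, focusNextItem() running the widget list through visible/enabled/tab-stop filters, could be sketched like this (much simplified; the flags and names are mine, not the query classes the viewer actually uses):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of focusNextItem() with pre-filters: only widgets that are visible,
// enabled and tab stops participate in the tab order, and traversal wraps.
struct Widget {
    std::string name;
    bool visible, enabled, tabStop;
};

bool focusable(const Widget& w) {
    return w.visible && w.enabled && w.tabStop;
}

// Returns the index of the next focusable widget after `current`, or -1 if
// no widget passes the filters.
int focusNextItem(const std::vector<Widget>& widgets, int current) {
    int n = static_cast<int>(widgets.size());
    for (int step = 1; step <= n; ++step) {
        int i = (current + step) % n;
        if (focusable(widgets[i])) return i;
    }
    return -1;
}
```

For a blind user this filtered tab order is essentially the whole interface, which is why LLView::tab_order_t and the focus helpers look like the right place to start.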

In game, focus currently on Inventory window.
Click into main 3D display.

llwindowwin32.cpp
LLWindowWin32::mainWindowProc()
LLWindowWin32 *window_imp = (LLWindowWin32 *)GetWindowLong(h_wnd, GWL_USERDATA);
case WM_LBUTTONDOWN:
window_imp->mCallbacks->handleMouseDown()

llviewerwindow.cpp
LLViewerWindow::handleMouseDown()
gToolMgr->getCurrentTool()->handleMouseDown()

lltoolpie.cpp
LLToolPie::handleMouseDown()
gViewerWindow->hitObjectOrLandGlobalAsync()
llviewerwindow.cpp
LLViewerWindow::hitObjectOrLandGlobalAsync()
llfocusmgr.cpp
gFocusMgr.setKeyboardFocus()
lllineeditor.cpp
LLLineEditor::onFocusLost()

Gestures
Typing "/yes" into the chat window to activate a gesture

llwindowwin32.cpp

LLWindowWin32::mainWindowProc()
llwindow.cpp
LLWindow::handleUnicodeUTF16()
llviewerwindow.cpp
LLViewerWindow::handleUnicodeChar()
llviewerkeyboard.cpp
LLViewerKeyboard::handleKey()
llviewerwindow.cpp
LLViewerWindow::handleKey()
llview.cpp
LLView::handleKey()
llpanel.cpp
LLPanel::handleKey()
llview.cpp

LLView::handleKey()
llchatbar.cpp
LLChatBar::handleKeyHere()
LLChatBar::sendChat()
llgesturemgr.cpp
LLGestureManager::triggerAndReviseString()
LLGestureManager::playGesture()
SelfVoicing::Speak() // Added by me


viewer.cpp
main_loop()
idle()
llgesturemgr.cpp
LLGestureManager::update()
LLGestureManager::stepGesture()
LLGestureManager::runStep()
llagent.cpp
LLAgent::sendAnimationRequest() // Sends AgentAnimation message


Audio

LLAudioSource* findAudioSource( const LLUUID& source_id );
void addAudioSource( LLAudioSource* asp );
LLAudioChannel* getFreeChannel( const F32 priority );
BOOL hasLocalFile( const LLUUID& uuid );
BOOL preloadSound( const LLUUID& uuid );
void setListener( LLVector3 pos, LLVector3 vel, LLVector3 up, LLVector3 at );
void triggerSound( const LLUUID& sound_id, const LLUUID& owner_id, const F32 gain, const LLVector3d& pos_global = LLVector3d::zero );

audioengine.cpp
LLAudioEngine* gAudiop = NULL;

llstartup.cpp
BOOL idle_startup()
gAudiop = (LLAudioEngine *) new LLAudioEngine_FMOD();
BOOL init = gAudiop->init(kAUDIO_NUM_SOURCES, window_handle);

viewer.cpp
void init_audio()
gAudiop->preloadSound(LLUUID(gSavedSettings.getString("UISndAlert"))); // Lots of other preloaded sounds too

lscript_library.cpp
LLScriptLibrary::init()
addFunction(new LLScriptLibraryFunction(10.f, 0.f, dummy_func, "llPlaySound", NULL, "sf", "llPlaySound(string sound, float volume)\nplays attached sound once at volume (0.0 - 1.0)"));

llpreviewsound.cpp
LLPreviewSound::playSound( void *userdata )
llviewermessage.cpp
send_sound_trigger(const LLUUID& sound_id, F32 gain)
msg->newMessageFast(_PREHASH_SoundTrigger);

llpreviewsound.cpp
LLPreviewSound::auditionSound( void *userdata )
gAudiop->triggerSound( ... )

llvoavatar.cpp
LLVOAvatar::updateCharacter(LLAgent &agent)
gAudiop->triggerSound(step_sound_id, getID(), gain, foot_pos_global);

audioengine.cpp

LLAudioEngine::triggerSound(const LLUUID &audio_uuid, const LLUUID& owner_id, const F32 gain, const LLVector3d &pos_global)
LLAudioSource *asp = new LLAudioSource(source_id, owner_id, gain);
gAudiop->addAudioSource(asp);
asp->play(audio_uuid);

BOOL LLAudioSource::play(const LLUUID &audio_uuid)
LLAudioData *adp = gAudiop->getAudioData(audio_uuid);
addAudioData(adp);
getChannel()->play();

audioengine_fmod.cpp
LLAudioChannelFMOD::play()
getSource()->setPlayedOnce(TRUE);
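Roughly, triggerSound() allocates a source, finds a free channel and plays it. A toy sketch of that flow (the channel bookkeeping here is mine and much simplified from LLAudioEngine/LLAudioChannel):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the triggerSound() flow: each triggered sound claims a free
// channel; when no channel is free the sound is dropped. The real engine
// instead evicts by priority via getFreeChannel(priority).
class AudioEngine {
public:
    explicit AudioEngine(int numChannels) : mChannelBusy(numChannels, false) {}

    // Returns the channel used, or -1 if none were free.
    int triggerSound(const std::string& soundId) {
        for (size_t i = 0; i < mChannelBusy.size(); ++i) {
            if (!mChannelBusy[i]) {
                mChannelBusy[i] = true;
                mPlayed.push_back(soundId);
                return static_cast<int>(i);
            }
        }
        return -1;  // all channels in use
    }

    void channelDone(int channel) { mChannelBusy[channel] = false; }

    size_t playedCount() const { return mPlayed.size(); }

private:
    std::vector<bool> mChannelBusy;
    std::vector<std::string> mPlayed;
};
```

The finite channel pool is worth keeping in mind for an audio-only interface: cue sounds would be competing for channels with the world's own audio.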

Object Detection

llvolume.h
const LLPCode LL_PCODE_CUBE = 1;
const LLPCode LL_PCODE_LEGACY_AVATAR = 0x20 | LL_PCODE_LEGACY; // PLAYER

llviewerobject.cpp
LLViewerObject *LLViewerObject::createObject(const LLUUID &id, const LLPCode pcode, LLViewerRegion *regionp)
case LL_PCODE_VOLUME:
res = new LLVOVolume(id, pcode, regionp); break;
case LL_PCODE_LEGACY_AVATAR:
res = new LLVOAvatar(id, pcode, regionp); break;

llviewerobjectlist.cpp
void LLViewerObjectList::processObjectUpdate( ... )
objectp = createObject(pcode, regionp, fullid, local_id, gMessageSystem->getSender());

LLViewerObject *LLViewerObjectList::createObjectViewer(const LLPCode pcode, LLViewerRegion *regionp)
LLViewerObject *objectp = LLViewerObject::createObject(fullid, pcode, regionp);
mUUIDObjectMap[fullid] = objectp;
mObjects.put(objectp);

llviewerobjectlist.h
LLDynamicArrayPtr<LLPointer<LLViewerObject>, 256> LLViewerObjectList::mObjects;


llviewerobjectlist.cpp
LLViewerObjectList::processObjectUpdate( ... )
objectp = createObject(pcode, regionp, fullid, local_id, gMessageSystem->getSender());
LLViewerObject *LLViewerObjectList::createObject( ... )
LLViewerObject *objectp = LLViewerObject::createObject(fullid, pcode, regionp);
llviewerobject.cpp
LLViewerObject *LLViewerObject::createObject( ... )
case LL_PCODE_LEGACY_AVATAR:
res = new LLVOAvatar(id, pcode, regionp); break;


llviewermessage.cpp
process_object_update()

llviewerobjectlist.cpp
LLViewerObjectList::processObjectUpdate( ... )
LLViewerObjectList::processUpdateCore( ... )

llvovolume.cpp
LLVOVolume::processUpdateMessage()
if (update_type == OUT_FULL)
BOOL LLVOVolume::setVolume( ... )
LLPrimitive::setVolume( ... )


pipeline.cpp
LLPipeline::updateGeom()
LLPipeline::updateDrawableGeom()

lldrawable.cpp
LLDrawable::updateGeometry()
mVObjp->updateGeometry(this);

llvovolume.cpp
LLVOVolume::updateGeometry(LLDrawable *drawable)


llviewermessage.cpp
process_object_update()
llviewerobjectlist.cpp
LLViewerObjectList::processObjectUpdate()
LLViewerObjectList::createObject()
LLViewerObject::createObject(...);
LLViewerObjectList::updateActive()
mActiveObjects.insert(...); // LLVOAvatar, LLVOClouds, etc.
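The pcode switch in createObject() looks like the place a self-voicing client could detect interesting objects, e.g. speak a cue when an avatar is instantiated nearby. A toy version of the factory (the pcode values marked below are assumed for the sketch, not checked against llvolume.h; the string return stands in for the real classes):

```cpp
#include <cassert>
#include <string>

// Sketch of the LLViewerObject::createObject() factory: the pcode in an
// ObjectUpdate message selects which viewer-object class is instantiated.
typedef unsigned char LLPCode;
const LLPCode LL_PCODE_VOLUME = 0x80;  // assumed value for the sketch
const LLPCode LL_PCODE_LEGACY = 0x40;  // assumed value for the sketch
const LLPCode LL_PCODE_LEGACY_AVATAR = 0x20 | LL_PCODE_LEGACY;  // as in llvolume.h

std::string createObject(LLPCode pcode) {
    switch (pcode) {
        case LL_PCODE_VOLUME:        return "LLVOVolume";
        case LL_PCODE_LEGACY_AVATAR: return "LLVOAvatar";  // a player: worth a cue
        default:                     return "unknown";
    }
}
```

Filtering on LL_PCODE_LEGACY_AVATAR at this point would give exactly the "another player has appeared" events that the sonar idea needs.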

SL Groups

Some groups I found in SL that might be relevant:


Disability Support Workers Int.
18 visible members.

For people working in the field of disability, looking for a place to chat, relax and unwind.

Join us to talk about anything from strategies to songwriters, legislation to landscaping :)

This group is only a few days old, I'll be trying to get the word out IRL ASAP !! :)

-Filter Miles


Disabled SL-Peoples Association
15 visible members

A group for physical disabled people.

Founded by the Dane Arcadian Meili for handicapped and physical disabled bodys or people.

You have to be 18+ to join and the invite you will get from the group owner, Arcadian Meili. Send an Instant Message and tell why you wanna join.

People without disabilities can become members but need a VERY good reason.

Purpose of the group is:
1: as a community for disabled
2: communication between disabled and care givers or other people in the healthcare area
+ more.

Monday 22 October 2007

Server Built

Here's my OpenSim running,



I'm now going to try to populate it with accessible content. This could be an easy way to prototype a framework, set of standards, or test environment in which we could let blind people interact with one another.
Unfortunately, at present script support in OpenSim is extremely limited, so audio playback is not possible.

The only current way to create such a test environment would be to rent use of Linden's commercial servers.

Sleek

Sleek, a lightweight client, might be a useful starting point for developing an audio-only viewer. It builds very quickly and looks like a small, straightforward C# codebase with no world rendering included.

Haptic Wearables

Engadget reports from the E for All Expo about a force feedback vest, initially designed to provide a tactile effect when your game avatar is shot in Call of Duty.

This got me thinking about wearables and haptic feedback generally, as they could provide a useful interface for this current project - and, as they say, your accessibility issue is my usability issue, so tactile feedback is a powerful feature for gaming generally.

It also struck a chord because I was discussing military flight simulators with a guy who's recently been accepted into the RAF. He told me about the kind of physical feedback those machines are equipped with - the pilot is strapped in as they would be in a real jet, but the straps are used to simulate the sensation of increased g-force when flying the aircraft through tight corners etc. They pull the pilot into the seat with a force comparable to that which would be experienced in an actual aircraft.

Friday 19 October 2007

National Science Foundation

So the NSF have funded a similar project by Eelke Folmer (of HelpYouPlay.com), whose scope is much greater than our own. With 12 months and a budget of $90,448 they're clearly ones to watch, though I have a couple of thoughts about their statement:

In this exploratory project, he will develop a prototype client for Second Life that offers a basic level of accessibility, and which will allow him to assess the feasibility of and technical requirements for a client that is fully accessible to blind players. The prototype client will initially allow blind players to navigate the environment using voice commands alone; it will then be enhanced and extended, as time and resources allow, so as to enable these players to interact in meaningful ways with other players.

That's interesting. Most audio games use keyboard navigation, so I don't understand why voice commands are preferred, or why they're being developed during the initial stages of the prototype when it would seem to me that the first thing you need is feedback from the world (i.e., spatial audio cues) before you start to move around in it.

Achieving these objectives is not straightforward, because the client and server of Second Life have only recently been made open source and no one has yet attempted to create an accessible client for the environment.

I didn't think the server had been open sourced yet, though it is apparently planned for some as-yet unspecified point in the future. I have heard that some people have reverse engineered the network traffic (or merely extracted it from the client source) and have extrapolated their own server based on how it appears to work. The official line from Linden is:

What source code won't you be releasing?
We don't (yet) plan to release the code that runs our simulators or other server code ("the Grid"). We're keeping an open mind about the possibility of opening more of the Second Life Grid; the level of success we have with open sourcing our viewer will direct the speed and extent of further moves in this arena.


There's an interview with Eelke for further reading, too.

Second Life Client Built

Following instructions on the Wiki, I've just built my first Second Life client.
Here I am debugging it in Visual Studio,



I had a few problems building it, but after following the instructions properly it worked out.

There is a small bug in the newview project though:

Properties->Configuration Properties->Custom Build Step->General->Command Line

It should read

copy "$(TargetDir)\$(TargetFileName)" "$(ProjectDir)"

Instead of

copy $(TargetDir)\$(TargetFileName) $(ProjectDir)

That makes sure that the executable gets copied when you have spaces in your path.

I also had a problem building the source,

llcompilequeue.obj : error LNK2019: unresolved external symbol "int __cdecl lscript_compile(char const *,char const *,char const *,int)" (?lscript_compile@@YAHPBD00H@Z) referenced in function "protected: void __thiscall LLFloaterCompileQueue::compile(char const *,class LLUUID const &)" (?compile@LLFloaterCompileQueue@@IAEXPBDABVLLUUID@@@Z)

llpreviewscript.obj : error LNK2001: unresolved external symbol "int __cdecl lscript_compile(char const *,char const *,char const *,int)" (?lscript_compile@@YAHPBD00H@Z)

This is mentioned in the Wiki, but only for .NET 2006, whereas I was using the (recommended) 2003. Upon further investigation it turned out to be a problem in the compilation of the lscript_compile or lscript_compile_fb projects: Flex was crashing for some reason. I realised that I had earlier cancelled an update of Cygwin, which was probably the cause of the failure, so I started the update again, and once it was complete the projects compiled fine without Flex barfing.

Anyway, I finally built and ran the executable.

The significance of this is that I could (potentially) now develop a non-visual client, using only audio feedback. That's got to be the ultimate goal of an accessible client but is unfortunately beyond the scope of this current project. All I'll be able to do within this remit is evaluate the feasibility of that development and make suggestions for the future.

Gameplay conventions

I've been thinking about game audio recently, and was having a conversation with a friend about Valve's deathmatch FPS release, TF2. I watched a video of some gameplay footage to get an idea of what the game was like, and was surprised that I recognised some of the audio effects from another of Valve's seminal titles, Half Life (effects which were also used in HL2).



Specifically I recognised the 'heal' sound that the stations make when they recover your health, shields or ammo, and the weapon select confirmation noise (possibly also one of the pistols and shotgun?). While it's natural to use the same audio in a sequel (HL to HL2), I was surprised that they used the same effects in a title from a totally independent game world (TF2). It works extremely well, though. I instantly understood the significance of the audio cues and hence what was happening in gameplay terms.

This in turn made me think about gameplay mores, about the tropes and aesthetics that have become de facto standards, and how they help familiarise us with new games. But what, then, of audio games? I wonder if they suffer from underdevelopment such that no standards have emerged yet.

This reminds me a little of gaming during the 1980s, a period characterised by a diversity of games that didn't yet seem to fit into genres. By the 90s I feel that the commercial market had evolved and certain conventions had emerged, for example using the WASD keys for navigating first-person games.

This is a particularly interesting point for me as my MA dissertation dealt with embodiment in games, and developed on the extension thesis of Marshall McLuhan and the phenomenology of Maurice Merleau-Ponty, amongst others. The basic premise is that our sense of self is predicated on our sensory experience, which depends on our situated body and its relation to the rest of the world. In a game environment mediated by a keyboard, WASD becomes a naturalised and pre-reflective expression of our intentions. The reuse of this form allows us to build up what Merleau-Ponty refers to as the "habitual body image".

The absence of consistent interface semiotics in audio games, as with the early-80s games, means that little continuity can be transferred between any of them.

On the one hand the 80s was a very creative time, which I think a lot of people yearn for in their renewed interest in retro gaming; on the other hand the lack of a shared language of gameplay acts as a kind of barrier, increasing the learning curve of each and every game. This in turn was an obstacle that had to be overcome on the way to mass commercial viability for the industry.

One possibility for this project I'm currently engaged in might be to investigate and define standards for audio interaction rather than to create a client. Another aspect of Second Life that is interesting in this regard is the possibility to own land and create environments which can be controlled to be more accessible. For example, I could imagine an island designed for blind users, where all objects emitted audio cues. This might be an easier way to prototype the requirements of a client.

This idea came from thinking about AudioQuake as a mod for an existing game. Second Life is more complicated because the environment is so much more diverse and volatile, and not under our control as it is in Quake or other games.

Also there's a problem with my current plan for developing a prototype client using just Linden Scripting Language: the only feasible technique for creating spatial audio is to create an invisible object that follows the target object and emits sound, thus indicating the target's location to a blind user. However, this audio will be heard by everyone, and especially the target, which, even though they have the ability to mute the emitter, is very anti-social behaviour! The optimal solution is to develop a dedicated client so 3D audio can be triggered on the local rather than the server side, which is the approach being followed by the National Science Foundation project, and to a certain extent also evaluated in our project.

Perhaps the quickest and most effective solution in the time frame is to simply buy land on which to develop an accessible environment. However, this would require a modest investment of real world money as land in Second Life is sold commercially (at least for now, until the server is open sourced).

A preferable and free solution would be to simply run our own server, but the current Open Source version is quite limited in what scripts it can run.

What is the audible equivalent of the visual presence of a wall?

I've begun my research this week by investigating existing games which cater to visually impaired users, and my initial assessment is that this project is going to be problematic.

Three games I've considered are AudioQuake, Terraformers and Access Invaders.

I had most success with AudioQuake as it allows the greatest degree of customisation, and hence control over the gameplay experience. I found it incredibly immersive to shut my eyes and concentrate on the audio in my headphones, trying to navigate around the space presented. I'd occasionally open my eyes to confirm that I was where I thought I was and most of the time I was right. It gave me some kind of impression of what life would be like without sight, and also suggested something of the aesthetics of audio gaming.

FPS games are clearly interesting for their physical action, based largely around rapid navigation through space, something many of us have enjoyed in the actual world as children, as well as vicariously through film and games as adults. However, I have to wonder what kinds of gaming pleasures are available to blind people. It would be great to speak to some blind gamers to find out which games they enjoy in the actual as well as virtual worlds. I'd guess their pleasures are less about gross but controlled movement and more about localised physical action, thought and imagination, but that's entirely the assumption of a fully sighted person.

We might think in terms of Caillois' classic categories: the ludus and paidia axis, and ilinx (vertigo), mimicry, alea and agon. All of these basic forms are clearly available to blind gamers, though I wonder how effective audio computer games can be at stimulating ilinx. The giddying excitement of fast movement through virtual worlds is a good example of the form it takes in visual computer games, but what is analogous in audio?

Perhaps we can learn something here from musicology. Some kinds of jazz or experimental music can perhaps produce a form of ilinx (e.g., Mr Bungle), and the audio effects in Hollywood action movies can be evocative in themselves. Wouldn't it be reasonable to expect this to be heightened if the listener were interactively involved in the audioscape?

Access Invaders is an interesting case study as it has a distinct mode for blind players. In this mode the aliens group together in a single column and the player has to listen for where they (effectively a single entity) are. Again this makes me think that controlling the environment is an easy way to achieve accessibility (think of the tactile paving at street crossings that can be detected with a cane, or the changes we make to public buildings for wheelchair access). As a game it's not terribly exciting, though I was able to adapt to the style of play. It has to be said that in the contemporary gaming age, even the classic Space Invaders isn't terribly exciting either. The only positive thing I can say about this game qua game is that it demonstrates 2D audio as a feedback device.
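That 2D audio feedback reduces to a panning law: the column's horizontal position maps to the balance between the left and right channels. A standard way to do this, sketched below in Python, is constant-power panning, which keeps perceived loudness roughly steady as the source sweeps across the screen (this is a generic audio-engineering formula, not anything taken from the game's source, which I haven't inspected):

```python
import math

def pan_gains(x, width):
    """Map a horizontal position x in [0, width] to (left, right)
    channel gains using a constant-power panning law, so the squared
    gains always sum to one and perceived loudness stays constant."""
    pan = max(0.0, min(1.0, x / width))  # 0 = far left, 1 = far right
    theta = pan * math.pi / 2            # angle on the quarter circle
    return math.cos(theta), math.sin(theta)
```

At the far left the gains are (1.0, 0.0); at the centre both channels sit at about 0.707, rather than the 0.5 a naive linear crossfade would give, which is what avoids the "hole in the middle" effect.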

After the excitement of AudioQuake, I got much less enjoyment from the critically acclaimed Terraformers, as I found the synthetic audio cues quite unpleasant to listen to. Personally I'd prefer more naturalistic, or at least more integrated, audio: that is, audio icons that bear more resemblance to what they represent, rather than the current style of abstract earcons. The obvious problem is: what sound should a wall make to indicate its presence? I did find the low-pitched throbbing quite evocative, and again musicology might be a good place to start for appropriate audio cues.


Access Invaders (Windows). Centre for Universal Access and Assistive Technologies, Human-Computer Interaction Laboratory, Institute of Computer Science, Foundation for Research and Technology - Hellas (Greece: January 2006) <http://www.ics.forth.gr/hci/ua-games/access-invaders/> (Last accessed 19th October 2007).

AudioQuake 0.3.0rc1 "glacier" (Windows). Accessible Gaming Rendering Independence Possible (27th June 2007) <http://www.agrip.org.uk/AudioQuake> (Last accessed 19th October 2007).

Terraformers (Windows). Pin Interactive (Sweden: 2003) <http://www.terraformers.nu/> (Last accessed 19th October 2007).