It’s easy to forget that, for all the horrible gender-related troubles swirling around, there are other areas being completely neglected. When it comes to including all kinds of gamers, the disability sector being left behind is extremely irritating to me, possibly more so than the current wave of phobia-related scandals.
That isn’t because those scandals don’t matter. But since they are a continuous problem with no sign of dwindling any time soon, support for players with various disabilities is something companies could actually implement right now, bringing a rush of positivity to the gaming community.
This time I’m going to focus on people who are hard of hearing or who are completely deaf.
The Kinect has been a bit of a hit-and-miss plastic add-on for the Xbox 360: a lot of people bought one, and most of them rarely use it. The Kinect promised, for better or worse depending on your view of human-tech integration these days, a variety of wonderful motion-controlled gameplay and voice-command actions. You didn’t need a controller; all you needed was your body.
Other people who aren’t officially affiliated with any major gaming company or with the Microsoft team have been working on their own versions of Kinect sign language recognition. Though nothing major has come of this, the fact that people are making the effort is admirable, and it makes Microsoft look even worse when they have the budget, the means and the official platform to really push this forward. Why on earth wait?
Microsoft’s original patents and promises when Kinect first came out claimed that it would also feature functions for the deaf gaming community. Yet it took a hack by a team in France to deliver even a small part of that promise to consumers.
NHK, the prominent Japanese broadcaster, has been investing in research to widen options and ease of access for its sign-language-using viewers. Their idea is to translate text into animated sign language.
“Subtitles are fine for people who understand Japanese, and who lost their hearing at some point. Meanwhile, people who are deaf from birth learn sign language first, naturally they study Japanese after that, but they find that sign language is easier to understand than subtitles, so we are conducting research in sign language.”
“Currently we are able to translate at the text level, but the text that can be translated is extremely limited. The technology must convert the Japanese language that is input into a string of sign language words. We use samples that have the same content in both Japanese and sign language, compare them to what is actually input, and then replace words that differ to achieve the translation. Meanwhile, to create the sign language CG we need to make automatic transitions between words, and to a degree we are able to create that. However, there are parts that we are not able to express well yet, and we have made an interface in which a human can correct those parts.”
Personally, I think that’s a great idea and a wonderful start; even as it stands, it will help a lot of people with something as simple as keeping up to date with the news.
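The substitution approach NHK describes above can be sketched in a few lines. This is purely an illustrative toy, not NHK’s actual system: I’m assuming a small parallel lexicon mapping words to sign-language glosses, and following their note that unmatched parts are flagged for a human to correct.

```python
# Toy sketch of dictionary-based text-to-sign-gloss translation.
# SIGN_LEXICON is a hypothetical stand-in for NHK's parallel samples.

SIGN_LEXICON = {
    "today": "TODAY",
    "weather": "WEATHER",
    "sunny": "SUN",
}

def translate_to_glosses(words):
    """Return (glosses, needs_review) for a tokenised sentence."""
    glosses, needs_review = [], []
    for word in words:
        if word in SIGN_LEXICON:
            glosses.append(SIGN_LEXICON[word])
        else:
            # Unknown word: emit a placeholder for human correction,
            # mirroring the manual-fix interface NHK mentions.
            glosses.append(f"?{word}?")
            needs_review.append(word)
    return glosses, needs_review

glosses, review = translate_to_glosses(["today", "weather", "rainy"])
print(glosses)  # ['TODAY', 'WEATHER', '?rainy?']
print(review)   # ['rainy']
```

The gloss string would then drive the animated CG signer; the hard part NHK flags, smooth transitions between sign animations, is exactly what a lookup table like this can’t capture.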
Remember this piece of gaming news? That was back in 2009, and everyone’s initial reaction was “Does this mean we might get a deaf character in Half-Life 3?”. Now, as we all know, the wait for Half-Life 3 has been so long and so void of any information or updates from Valve that a lot of people have given up hope. But this isn’t about that game; it’s about what was really important in footage such as the following:
Initially I thought this was Valve building some form of sign language recognition into their game, but it seems it was primarily about in-game AI expression of sign language, especially between Alyx Vance and Dog. Alyx had a crush on someone in her past who was hard of hearing, which is why she and Dog communicate via sign language. Having sign language feature prominently in a new Valve game (or any game) would have been wonderful, but imagine sign language movements and expressions being performed by you, the player, and having an in-game effect on actions, dialogue and storytelling.
Now, there are a bunch of technicalities that make sign language software tricky to get right in one go. The Kinect needs to recognise not just hands but individual finger movements, and perhaps lip movements in finer detail too, though hand motion alone would be a big leap forward if integrated well enough.
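To give a sense of what that hand-tracking problem looks like, here is a deliberately simplified sketch, not the Kinect SDK or any real recogniser: it assumes the sensor already gives us normalised fingertip positions, and matches them against hypothetical stored sign templates by nearest-neighbour distance.

```python
# Illustrative nearest-template sign recognition over tracked hand poses.
# TEMPLATES and the (x, y) fingertip format are assumptions for the demo.
import math

TEMPLATES = {
    "HELLO": [(0.0, 1.0), (0.2, 1.1), (0.4, 1.0)],
    "THANKS": [(0.0, 0.2), (0.2, 0.1), (0.4, 0.2)],
}

def pose_distance(a, b):
    """Sum of point-to-point distances between two fingertip sets."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def recognise(pose, threshold=0.5):
    """Return the closest template name, or None if nothing is close."""
    best, best_dist = None, float("inf")
    for name, template in TEMPLATES.items():
        d = pose_distance(pose, template)
        if d < best_dist:
            best, best_dist = name, d
    return best if best_dist <= threshold else None

print(recognise([(0.0, 1.0), (0.2, 1.1), (0.4, 1.0)]))  # HELLO
```

A real system would have to handle motion over time, finger occlusion and signer-to-signer variation, which is why even this “simple” piece is a genuine engineering effort.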
There are different sign languages and grammatical variants, just as with spoken language, but again I don’t see that being such a deterrent to a company as large as Microsoft. Being afraid to make an effort does nobody any good.
Technology is at a point where it can greatly improve gaming and ease of immersion for people with disabilities. Many tweaks and tests are obviously needed to hone these ideas and execute them properly, but I personally believe the deaf community has waited long enough; if companies truly cared and were focused, the wait could have been over a couple of years ago.