Originally Posted by Leo_Ames
I actually meant does it work well in Mass Effect 3. I've only seen Skyrim players compliment it like you did, so I know they did a nice job implementing it in logical ways. Even then, though, I don't see myself ever using much of that if I had Kinect 1.0. I imagine even those making heavy use of Kinect are still regularly utilizing conventional means for a good bit of it.
But I can certainly see how it has a place and how it can be a nice supplement. The follower commands are one area where I could envision relying on it, whereas I doubt something like the map shortcuts would get used much, if at all.
It works well in Mass Effect 3 (particularly if you play with headphones, so you don't have to depend on the abilities of the Kinect noise cancellation). However, it's done in real time, so a lot of players don't get that breather away from the action that the radial menu provides. If you don't like the idea of stopping everything when you bring up the menu, the Kinect will feel pretty awesome. If you can't walk and chew gum at the same time, you'll feel overwhelmed by the real-time aspect of the Kinect control. At least the voice commands are not nested and do provide a shortcut.
As for people who say voice commands could easily be added to other systems: it's not a trivial process. Microsoft has been working on this stuff for a very long time (since 1993). Microsoft has created a standard set of tools and an API for speech that make it easy to implement, and it has also localized the speech recognition to numerous languages for easy implementation. Just adding a mic and a few lines of code isn't going to cut it. If it were easy, everybody would have it.
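Part of what makes a command toolkit like that tractable is that game voice control only needs to match against a small fixed command list, not transcribe free speech. Here's a rough sketch of that last step, assuming you already have a recognizer that hands you text (the command names and the `transcribe`-to-text part are made up for illustration):

```python
# Hypothetical sketch: mapping a speech engine's text output onto a small,
# fixed command grammar. The hard part (audio -> text) is what a platform
# toolkit provides; this closed-vocabulary matching is the easy part.
from difflib import get_close_matches

COMMANDS = {
    "open map": "MAP",
    "quick save": "SAVE",
    "follower attack": "FOLLOWER_ATTACK",
    "follower wait": "FOLLOWER_WAIT",
}

def match_command(transcript):
    """Map a recognizer transcript onto one of a few known commands."""
    # Fuzzy match absorbs small transcription errors from the engine.
    hit = get_close_matches(transcript.lower(), COMMANDS, n=1, cutoff=0.8)
    return COMMANDS[hit[0]] if hit else None

print(match_command("open map"))      # exact hit -> MAP
print(match_command("quick sav"))     # close enough -> SAVE
print(match_command("dance for me"))  # not a command -> None
```

The point is the closed vocabulary: with only a handful of phrases to distinguish, even a sloppy transcript can be resolved reliably, which is very different from open-ended dictation.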
Right now on other platforms, it's usually left up to the developer to implement it. They have to do all the work themselves or license a speech engine, as Apple did with Nuance for Siri. That's why so few games outside of Kinect titles implement it: it's a waste of resources when games are under budget and time constraints. Now, Sony could pay for the licensing and then make it available to developers, but they don't. Fend for yourself.
Take Skyrim as the example. It actually uses two engines. For regular voice commands, it uses Microsoft's set of tools: you just speak, and the Kinect interprets everything and spits out a response.
For dragon shouts, you need to hold down a button, which kicks in the custom speech recognition for a very select few commands. They needed that, of course, since dragon shouts are completely incoherent in any of Microsoft's defined languages.
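The button-gated two-engine setup is really just a dispatch decision. Here's a minimal sketch of that routing, with stand-in recognizer functions (everything named here is hypothetical, not Skyrim's actual code):

```python
# Hypothetical sketch of a two-engine dispatch: mic input goes to the
# platform speech engine normally, and to a custom engine only while
# the "shout" button is held down.

def system_recognizer(audio):
    # Stand-in for the platform engine that handles normal commands.
    return "system:" + audio

def shout_recognizer(audio):
    # Stand-in for the custom engine that only knows a few shout phrases.
    return "shout:" + audio

def handle_audio(audio, shout_button_held):
    """Route mic input to the custom engine only while the button is held."""
    if shout_button_held:
        return shout_recognizer(audio)
    return system_recognizer(audio)

print(handle_audio("open map", shout_button_held=False))   # system:open map
print(handle_audio("fus ro dah", shout_button_held=True))  # shout:fus ro dah
```

Gating the custom engine behind a button press also sidesteps false triggers: the shout recognizer never even hears ambient speech, so it only has to tell a few made-up phrases apart from each other.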