Opinion: Kinect in a hardcore FPS: Extrapolating the possibilities

Five years from now… It’s a typical spec-ops mission set in the fictional terrorist state of Fakeasthistan. I’m not, however, playing a typical First Person Shooter. Sure, I’m using an Xbox 360 controller and it employs the classic controller mapping (left stick moves, right stick looks, the two triggers distribute lead-based justice), but there’s something that makes this much more immersive. I have a Kinect connected and its HAL 9000 eye is scrutinising everything my body does whilst I play. Most of my movements will not register on screen, but key subtle ones will – and the result is a sense of immersion that I’ve never felt before.

As my elite squad and I approach the town square we get the drop on a few absent-minded insurgents. I stop my AI teammates by taking one hand off my controller and raising it vertically in a clenched fist. “Affirmative, holding position,” is the whispered response that comes through the speakers. I physically thrust the same hand out at a forty-five-degree angle and my AI minions respond instantly: “roger, flanking right”. I could, of course, issue all of these orders by speaking them at the Kinect, but I don’t want my enemies to “hear me”. Incidentally, this ties into why I have also been holding in a truly epic fart.
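
If I had to guess at the code behind those hand signals, it would boil down to a few geometric checks on tracked skeleton joints. Here’s a rough Python sketch – every name in it, from the Joint type to the exact thresholds, is my own invention rather than any real Kinect API:

```python
import math
from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # metres, left/right in sensor space
    y: float  # metres, up/down
    z: float  # metres, distance from the sensor

def classify_hand_signal(hand: Joint, shoulder: Joint, head: Joint):
    """Classify one arm's pose as a squad command, or None.

    A fist raised above the head near the shoulder line reads as
    'hold position'; an arm thrust out to the side at roughly
    forty-five degrees reads as 'flank right'.
    """
    if hand.y > head.y and abs(hand.x - shoulder.x) < 0.15:
        return "hold_position"
    dx = hand.x - shoulder.x
    dy = hand.y - shoulder.y
    angle = math.degrees(math.atan2(dy, dx))  # 0 = arm straight out sideways
    if dx > 0.3 and -60.0 < angle < -30.0:
        return "flank_right"
    return None
```

The dead zones and angles would obviously need tuning against real living-room flailing, but the principle is that simple.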

I take cover behind a low wall and physically lean my “real-life head” to cause my “virtual one” to tilt around the corner of my cover. Leaning around corners has, of course, been in PC FPS games for years, but it has been discarded in console shooters, mostly thanks to a lack of buttons to assign to such a minor tactical feature. With Kinect I can now pot-shot around cover without exposing too much of my virtual body mass and without holding down a complex combination of buttons to do so.
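
A lean mechanic like that is really just a mapping problem: take the head’s sideways offset from a calibrated centre and convert it into a camera tilt. A minimal sketch, with a made-up helper and hand-tuned numbers:

```python
def lean_angle(head_x: float, centre_x: float,
               dead_zone: float = 0.05, full_lean: float = 0.25,
               max_angle: float = 30.0) -> float:
    """Map a real head offset (metres) to a virtual lean angle (degrees).

    centre_x is captured when the player settles behind the pad;
    the dead zone keeps ordinary fidgeting from moving the camera.
    """
    offset = head_x - centre_x
    if abs(offset) < dead_zone:
        return 0.0
    sign = 1.0 if offset > 0.0 else -1.0
    t = min((abs(offset) - dead_zone) / (full_lean - dead_zone), 1.0)
    return sign * t * max_angle
```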

We spring the ambush but as the surprised insurgents fall, more enemies start spawning out of the woodwork. HQ sends audio instructions to my avatar, but in the real world a ringing phone causes me to miss them. I lift a hand off my controller and put two fingers to my ear; the game translates this as me trying to get a more air-tight seal on my avatar’s communications headpiece. “Say again, Eagle One,” I say aloud to the Kinect. The missed call from HQ is repeated through the TV with more clarity than before (the volume of explosions and gunfire has been automatically toned down, and the volume of speech has been boosted).
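
That ducking trick is just side-chain mixing driven by the dialogue channel. Something like this hypothetical per-frame mixer tick – the gain targets and easing rate are pure guesswork on my part:

```python
def mixer_tick(dialogue_active: bool, sfx_gain: float, voice_gain: float,
               duck_to: float = 0.3, boost_to: float = 1.4,
               ease: float = 0.1):
    """One audio-mixer update: ease the war down and the radio up.

    While a transmission plays (or replays after the hand-to-ear
    gesture), gunfire glides toward duck_to and speech toward
    boost_to; otherwise both settle back to 1.0.
    """
    sfx_target = duck_to if dialogue_active else 1.0
    voice_target = boost_to if dialogue_active else 1.0
    sfx_gain += (sfx_target - sfx_gain) * ease
    voice_gain += (voice_target - voice_gain) * ease
    return sfx_gain, voice_gain
```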

After taking the call from HQ in the game I decide to go and answer the phone in the real world. I put the controller down and Kinect notices the total separation of my two hands from the controller and automatically pauses the game. While I’m away my kid brother tries to sneak into the room and ruin my game by getting me killed. The game doesn’t unpause for him because it can see that he isn’t me. How? A facial (or possibly a retina) scan can clearly tell the difference between him and me, not to mention that his limbs are quite a bit shorter than mine. Also, his face, body and voiceprint have been registered with the console previously and I’ve assigned him an age: the game refuses to let him play because it knows he is too young to play it. He gives up and walks off in search of other lives to ruin.
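
Pausing and identity-checking could be as simple as two gate functions, sketched below. The Profile fields and tolerances are my own assumptions about what the sensor could reliably measure:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    face_id: str
    limb_lengths: tuple  # e.g. (upper arm, forearm, thigh, shin) in metres
    age: int

def should_stay_paused(hands_on_pad: bool, seen_face: str,
                       seen_limbs: tuple, owner: Profile,
                       tolerance: float = 0.04) -> bool:
    """Keep the game paused unless the session's owner is back on the pad."""
    if not hands_on_pad:
        return True
    limbs_match = all(abs(a - b) <= tolerance
                      for a, b in zip(seen_limbs, owner.limb_lengths))
    return not (seen_face == owner.face_id and limbs_match)

def may_play(profile: Profile, rating_min_age: int) -> bool:
    """Age gate: little brothers bounce off the rating."""
    return profile.age >= rating_min_age
```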

With the phone call completed I return to the lounge. As soon as Kinect sees that I have two firm hands on the controller I am instantly inserted back into the action. Unfortunately I have paused at an odd time and the lack of focus causes me to get shot up a bit. I hit my crouch button to lower my profile but it isn’t enough. As even more bullets slam into me and put me milliseconds away from death, my primal instincts kick in and I flinch my head downwards. Kinect sees this and drops my avatar right down into a prone position. It saves my life faster than any button press could have.
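
The difference between a deliberate duck and a panicked flinch is mostly speed, so a crude velocity threshold on head height might be all the game needs. A sketch, with a made-up frame window and drop distance:

```python
def is_flinch(head_heights, window: int = 3, drop: float = 0.12) -> bool:
    """True if the head just dropped sharply (newest sample last).

    A deliberate lean takes half a second or more; a flinch covers
    `drop` metres inside `window` frames (~0.1 s at 30 fps), so a
    simple velocity threshold separates the two.
    """
    if len(head_heights) <= window:
        return False
    return head_heights[-1 - window] - head_heights[-1] >= drop
```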

As with most FPS games my damaged state splatters my “virtual eyes” with on-screen blood and sweat. If I wait three seconds that will disappear and I’ll have a new lease on life. I can halve that wait time by letting Kinect see me close my eyes and shake my head a tiny bit – much like I would if I were shaking off a punch to the face. It’s a small effect with a big immersion payoff: when my real-life eyelids open, my virtual eyes are clear.
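
In code that’s nothing more exotic than running the recovery timer at double speed while the sensor reports closed eyes and a shaking head – something like this hypothetical per-frame tick:

```python
def regen_tick(remaining: float, dt: float,
               eyes_closed: bool, head_shaking: bool) -> float:
    """Advance the blood-overlay recovery timer by one frame.

    'Shaking off the punch' - eyes shut plus a small head shake -
    runs the timer at double speed, halving the three-second wait.
    """
    rate = 2.0 if (eyes_closed and head_shaking) else 1.0
    return max(remaining - dt * rate, 0.0)
```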

I return to the firefight and smoke a few more bad guys. When my on-screen gun jams, I absently slap the side of my 360 controller to “knock” the dud round from the gun chamber. An enemy charges at me and I physically throw an elbow out at the screen to fell him with a gun-butt attack. I sight another enemy down my iron-sights using my left trigger and, as I lean my head slightly closer to the TV, Kinect subtly rewards this movement by creeping the zoom magnification in a little further.
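
That zoom nudge could be a one-line mapping from head distance to magnification bonus. A sketch, with the gain and cap plucked out of thin air:

```python
def scope_zoom(base_zoom: float, head_z: float, rest_z: float,
               gain: float = 2.0, max_bonus: float = 1.5) -> float:
    """Creep the scope magnification up as the head nears the screen.

    head_z and rest_z are the current and calibrated head distances
    from the sensor (metres); leaning in shrinks head_z.
    """
    lean_in = max(rest_z - head_z, 0.0)
    return base_zoom + min(lean_in * gain, max_bonus)
```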

With the sniping done I decide to whip out my grenade skills too. When I lift a hand off the controller with clawed fingers (to grasp my imaginary grenade, you see) Kinect registers that I want to hurl some explosive pineapples. From here I have a few options: I can physically lean forward in my chair and place the grenade at my feet as a fused trap; I can roll it along the ground into an enemy bunker using a bowling motion; or I can lob it upwards without ever having to tilt my crosshairs right up at the sky (like some sort of militant birdwatcher).
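
Telling those three throws apart is a trajectory-classification problem: where did the clawed hand end up relative to where it started? A rough sketch – the sample format and all the distance thresholds are my own guesses:

```python
def classify_grenade_motion(hand_path) -> str:
    """Sort a clawed-hand trajectory into one of three throws.

    hand_path: recent (x, y, z) samples in metres, newest last.
    Dropping the hand low -> place a fused trap; a flat sweep
    toward the sensor -> bowl it; anything else rising -> lob it.
    """
    rise = hand_path[-1][1] - hand_path[0][1]
    toward = hand_path[0][2] - hand_path[-1][2]  # z shrinks as the hand nears the sensor
    if rise < -0.4 and toward < 0.2:
        return "place_trap"
    if toward > 0.3 and abs(rise) < 0.15:
        return "roll"
    return "lob"
```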

The town square is soon secured, my AI squad regroups and we’re evac’d to our next warzone in a Black Hawk helicopter. What follows is a two-minute loading screen, but that wait is handled by Kinect in such a way as to make it a hilarious amount of fun. Welcome to the world’s first real-time machinima load screen…

Kinect has digitally captured my face and body in real time and is using a form of augmented reality to put my physical body in an interactive 3D space onscreen. In this case I’m no longer seated on my couch: I’m digitally placed on a seat in the cargo hold of the Black Hawk helicopter and my AI teammates are positioned all around me. Kinect has superimposed combat fatigues over my existing clothes and the controller in my hand has been digitally replaced with a Desert Eagle.

Similarly, my real-life face has been subtly augmented on-screen: a bit more muscle on my cheeks, an eye patch over one eye, and a long battle scar digitally overlaid across my forehead. As the next level loads I amuse myself by twisting the “gun” about in my hand. I can also pantomime the actions of cleaning it and inspecting the chamber. Note: if I do take the time to “clean” this imaginary prop, the game will give me an 85% better chance of it not jamming in my next gun battle.

I have my costume and my prop; now comes the real magic. One of my AI buddies turns to me with fear in his eyes and asks me how I ‘stay in control’ on the battlefield. What happens next is completely up to me, but the AI soldiers will listen to me with rapt attention, regardless. Sometimes I’ll drop into a rough Solid Snake drawl and gruffly tell them “if you wanna survive war, you gotta become war”. Other times I tell them to shut up and continue to clean my gun in silence. If I’ve got a few beers in me I might even quote them a bit of Shakespeare on war – “once more unto the breach, dear friends”, and all that. If I’m playing pissed, I’ll belt out a rendition of Survivor’s Eye of the Tiger.

Once the loading has completed, a green light in the cargo hold will switch on. I can continue with my soliloquy if I have more to say, or I can make the physical motion of sliding the 360 controller down along my thigh. Onscreen my Desert Eagle will be put in a holster and the game will continue. I don’t know it yet, but whatever I choose to do during that minute will be recorded and integrated as a flashback in a future cutscene. Later still, when the game is finished, I will even be given the option to upload my acting performance to YouTube through a menu if I wish.

In a different loading screen (a few hours down the track) I’m asked to put the controller down and I’m given a mini-game to pass the time. The camera has switched to an overhead view of my avatar as he sits in the Black Hawk cargo bay. Sitting in the virtual space in front of me are the digital projections of the dismantled parts of a field-stripped M4 Carbine. I’m given a minute to rebuild the weapon – if I can, it’ll be more effective in the next battle; if I can’t, it won’t be available for use at all. Physically raising my two hands to the screen (much like Tom Cruise did in Minority Report), I can grab two separate parts of the gun and pantomime the actions of bolting, screwing and latching them all together.
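
Under the hood that mini-game is two small pieces of logic: which loose part is each hand gripping, and do two held parts actually mate when they’re brought together? A sketch, assuming parts live in a dict of 3D positions and valid joins sit in a set of pairs (all names mine):

```python
import math

def nearest_part(hand, loose_parts, reach: float = 0.12):
    """Return the id of the loose part closest to a hand, if within reach."""
    best, best_dist = None, reach
    for part_id, position in loose_parts.items():
        d = math.dist(hand, position)
        if d < best_dist:
            best, best_dist = part_id, d
    return best

def can_snap(part_a, part_b, pos_a, pos_b, mates, tolerance: float = 0.08):
    """Join two held parts only if they actually mate and nearly touch.

    mates is a set of valid pairings, e.g.
    {frozenset(("barrel", "upper_receiver")), ...}
    """
    return (frozenset((part_a, part_b)) in mates
            and math.dist(pos_a, pos_b) <= tolerance)
```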

It’s a fairly complex puzzle to complete, but I manage to do it quickly. “Hot damn, Sergeant, I think you just broke the company record,” my AI Lieutenant whistles in appreciation. What is the reward for looking like a bad-arse professional to my AI teammates? Their level of “confidence” in me as a leader will rise from Level 2 to Level 3 and the game will make them more potent allies on the battlefield. Game critics reviewing this FPS will mention that they’ve never played a game that has made them look forward to the loading screens so much.

After trawling the net and becoming familiar with the open-source Kinect hacks in the wild, I’d say everything described above is entirely possible given a few years, a few more eggheads and possibly a newer version of Kinect. Truly, there’s never been a better time to be a gamer. See you on YouTube.