This is such an easy topic, and it gets ridiculously over-mystified.
3D lighting is a linear model, just like the real world: the maths is all linear.
What puts the end imagery in a given colorspace is the rendering intent.
This means whatever the lighter (and texture artist) was looking at when they created the imagery.
For example, if they used a nominal sRGB monitor with an end display gamma of 2.2 and no additional display correction or color management anywhere in the pipeline, the resultant imagery will look subjectively correct when viewed back in the same display environment. Even though the artist has just been rendering with linear maths, what they are actually creating is imagery with a gamma that is the inverse of 2.2: i.e. ~0.45. This is kinda like a video gamma (I'm simplifying here; the point is that the resulting image gamma is not linear, i.e. not 1).
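To make that concrete, here's a minimal sketch in C. It assumes the display is a pure 2.2 power curve (real monitors and sRGB only approximate that, especially near black); the function name is just mine:

```c
#include <math.h>

/* Assumption: the display is a pure 2.2 power curve. */
#define DISPLAY_GAMMA 2.2

/* The encode the artist's uncorrected workflow implicitly bakes into
   the image: linear scene light pre-distorted by the inverse of the
   display gamma, i.e. a ~0.45 power. */
double encode_for_display(double linear_light) /* input in 0..1 */
{
    return pow(linear_light, 1.0 / DISPLAY_GAMMA);
}
```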
If you wanted an artist to create imagery that actually had a gamma of 1, they would need to do the lighting with an end display environment that was itself linear (gamma 1).
How you give them this sort of environment can vary:
The most common way is to apply the inverse of the display hardware's gamma at the graphics card. Then you have a graphics card applying ~0.45 and a display applying 2.2, which gives you a working environment of 1 (this is an oversimplification).
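A trivial sanity check of that composition, again assuming pure power curves on both sides:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Card LUT applies ~0.45 (1/2.2), display applies 2.2; composed
       together they are the identity, i.e. a linear environment. */
    for (double v = 0.0; v <= 1.0; v += 0.25)
        printf("%.2f -> %.6f\n", v, pow(pow(v, 1.0 / 2.2), 2.2));
    return 0;
}
```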
You could, if you wanted, then render your linear CGI through a LUT (in a shader, for example) that transforms your linear CGI into video. If you do this perfectly, you could then display your imagery in a video environment and it would look exactly the same as it did in your linear environment.
The way to conceptualise this way of working is that the CGI is actually trying to recreate a real-world linear scene (like real life), and your final shader LUT is an analogue of the way a video camera (or film, or whatever notional capture system) would make that scene look if it captured it in real life.
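As a rough illustration of that end-of-pipe LUT, here's a plain-C stand-in for what you'd do in a shader. The 256-entry size and the 1/2.2 fill are placeholder assumptions of mine; a real video LUT would encode whatever target curve you're actually after:

```c
#include <math.h>

#define LUT_SIZE 256

/* A 1D LUT standing in for the end-of-pipe shader transform.
   Filled with a 1/2.2 power curve purely as an illustration. */
static double lut[LUT_SIZE];

void build_video_lut(void)
{
    for (int i = 0; i < LUT_SIZE; i++)
        lut[i] = pow((double)i / (LUT_SIZE - 1), 1.0 / 2.2);
}

/* Look up a linear value (0..1) with linear interpolation between
   entries, the way a shader texture fetch would. */
double apply_lut(double linear_value)
{
    if (linear_value < 0.0) linear_value = 0.0;
    if (linear_value > 1.0) linear_value = 1.0;
    double pos  = linear_value * (LUT_SIZE - 1);
    int    idx  = (int)pos;
    if (idx >= LUT_SIZE - 1) return lut[LUT_SIZE - 1];
    return lut[idx] + (pos - idx) * (lut[idx + 1] - lut[idx]);
}
```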
This is where things like HDR become useful: you render a large (notionally infinite) dynamic range and then make decisions to capture or map some of that range into a smaller exposure envelope, mimicking the transfer characteristics of a real-world mechanical system, i.e. a video or film camera. But obviously, because you have to carry out the lighting calculations in HDR, you have significant overhead compared with just rendering what actually ends up being visible (of course there are all sorts of cheap, effective ways to limit the actual data requirements).
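By way of example, a Reinhard-style curve is one cheap stand-in for that notional capture device (my choice here, not the only option; the exposure parameter is the "capture decision"):

```c
/* Map an unbounded HDR linear value into 0..1, mimicking a notional
   camera's exposure and shoulder. Reinhard's operator is used purely
   as a stand-in for whatever device you're imitating. */
double tonemap(double hdr_linear, double exposure)
{
    double v = hdr_linear * exposure; /* the "capture" decision  */
    return v / (1.0 + v);             /* soft shoulder into 0..1 */
}
```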
So in the case of rendering for a "video" console:
What colorspace is the end output?
Let's assume, for a video console with 8-bit output, that's a display with a gamma around 2.2, a black ref of 16, and a white ref of 235.
So your end imagery needs to be 0-255, designed to look correct when everything below 16 is clipped to black on display and everything over 235 is clipped to white, with a notional gamma of the inverse of 2.2... i.e. ~0.45.
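A sketch of that encode, assuming the 16/235 refs and a plain 2.2 power curve (real pipelines would use a proper video transfer curve, see below):

```c
#include <math.h>

/* Encode a linear value (0..1) to an 8-bit video-levels code,
   assuming black ref 16, white ref 235, and a ~0.45 encode gamma. */
unsigned char to_video_levels(double linear_light)
{
    if (linear_light < 0.0) linear_light = 0.0;
    if (linear_light > 1.0) linear_light = 1.0;
    double encoded = pow(linear_light, 1.0 / 2.2);    /* gamma ~0.45 */
    double code    = 16.0 + encoded * (235.0 - 16.0); /* 16..235    */
    return (unsigned char)(code + 0.5);               /* round      */
}
```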
However, if you want your imagery not to look rubbish like your competitors', ensure that you render intensity variation all the way from 1-255. I actually keep detail up in the whites; if you think about how white ref scales against the real-world peak white point of a given display, it's easy to see why.
How you get from a linear lighting model to ~0.45 is really down to you: apply an end LUT to take 1 to ~0.45, or just light the thing while looking at a video display environment with no correction in the pipe.
How the 360 behaves is something I'm sure is easy enough to find out. Maybe it assumes all 3D is linear and just LUTs it on output (I doubt it), maybe it doesn't touch it, maybe it expects a PC 0-255 range with a ~0.45 gamma that it remaps to video, or maybe it does this dependent on the actual hardware output type. (It would actually be a lot more straightforward if the 360 is totally transparent, because then you guys would know exactly what your target was... I'm not saying this isn't the case.)
You also need to realise that "gamma" itself is a simplification. To be really accurate (and probably make your imagery look better than your competitors') you want to deal with more accurate curves in your LUTs that properly reflect real video colorspaces, especially behaviour towards black point and white point.
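For instance, the Rec.709 camera transfer curve isn't a pure power law: it has a linear segment near black, which is exactly the black-point behaviour a naive gamma LUT gets wrong:

```c
#include <math.h>

/* Rec.709 OETF: a linear segment below 0.018 replaces the power law,
   which matters for how shadows behave. */
double rec709_oetf(double linear_light) /* input in 0..1 */
{
    if (linear_light < 0.018)
        return 4.5 * linear_light;
    return 1.099 * pow(linear_light, 0.45) - 0.099;
}
```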
Some devs I think get it right:
Bungie: all the Halo games look as if they are outputting video levels to me, on a 360 with a component connection to a properly calibrated video display.
Infinity Ward: COD4 looks right on the money to me. So does BioWare's Mass Effect.
Some devs I think get it wrong:
Ubisoft: Far Cry Instincts looks like PC levels, so on a video display the blacks look crushed.
King Kong had a well-publicised issue with looking too dark, which Ubisoft (rather pathetically) blamed on the 360 hardware rather than their own lack of a defined pipeline.
Bethesda: Oblivion looks like PC levels to me; Fallout 3 looks like video... maybe they realised something.
If all else fails you can stick a gamma or brightness slider on your game: a massive telltale that a given developer has no confidence in their color pipeline, if ever there was one. Although it's possibly not a bad idea, assuming you actually get the default levels right in the first place and give people a meaningful PLUGE pattern to set it by.
When's the last time you saw a DVD with its own gamma and brightness settings?