December 12, 2007
Thoughts on Industry / Academic collaboration
I've just arrived home today from a trip to London, where I attended the kick-off meeting for the UK's EPSRC-funded AI / Games Research Network, which is set to run for the next three years. As I came in, my fiancée asked me "How did it go? Was it nice?" and I struggled to express my feelings about the whole occasion.
I'd already been thinking about the question "how did it go?" pretty much all the way home on the two-hour train ride, in conjunction with reading some more of Peter Block's fascinating book The Answer to How Is Yes: Acting on What Matters. The two lines of thought have collided in my brain and produced this short post.
Ever since I left the event today, I've been cursing myself for getting stressed and angry while hearing the proposals of those in traditional academic AI research for possible games industry collaborations. I was constantly grinding my teeth and thinking "these guys have no idea about games! They're so naive!" and unfortunately I let these feelings of "they're getting it wrong" dominate my thoughts at times, and even spill over into somewhat aggressive questions and posturing. I spent a lot of the journey home thinking "why did I let myself get wound up like that? What's the point in that?"
I have now concluded that what was really going on was the event tapped into my own latent frustrations with myself over the last few years for not acting on what matters, or more specifically, for not pursuing my own desires for game AI but instead remaining beholden to whatever current organisational agenda I was working under. These other people at today's academic/industry networking event all had their own agendas to pursue, their own backgrounds, and their own unique naivete and wisdom. Nothing wrong with that at all, long may it continue. At the same time I shall endeavour to pursue my own agendas constructively within whatever organisational context I find myself. These agendas will only ever partially synchronise with the organisation within which I pursue them or with the agendas of other experts, but that's perfectly natural! At the end of the day we're all blind men touching the same elephant!
Posted by GardenerOfEden at December 12, 2007 01:39 PM
I find it encouraging that game AI is getting such attention from academia, but the whole setup seems a bit odd/suspicious to me also.
If you're going to be doing useful research for industry, then the big publishers will happily hire you, fund you, or pay you in any way they can! (That's the case with animation: EA/Ubisoft/R* happily contract the best researchers and sponsor projects.) So why do you need government funding for that? It's a bit like admitting failure :-)
On the other hand, if they're doing research for the sake of research, why do they need input from industry?
The only benefit I would see is finding mentors from industry for research students who want them. Academic supervisors have a specific role in helping their pupils complete their research and contribute to the field, but having a balancing influence from a professional developer would help IMHO. (Especially if the students intend to go into industry or apply their ideas in practice.)
So often you see research that's almost useful, but ends up being just research for the sake of research. A few hours of mentoring and feedback at the start of the project, and then at regular intervals, would prevent these problems altogether.
Anyway, something to think about. Rant over!
My feeling is that game AI researchers should take funding wherever they can get it!
I agree with Alex that getting input from industry is a good idea -- keeps the research grounded in reality, but I don't know if I agree that "the big publishers will happily hire you, fund you, or pay you in any way they can".
It depends if we're really talking about research, or consulting. Academia and industry have different priorities, and work on different timescales. Academia is trying to break new ground, while industry is trying to "break the bank." Industry tends to be risk averse, but some of the biggest breakthroughs come from exploratory research which may lead to a dead end. Part of research involves discovering what doesn't work, which is educational, but not very profitable.
This type of research might take 5+ years, while Industry is focused on the products shipping in the next 2 or 3 years. If you are working directly for a developer, it's rare to get more than a few months of R&D for new technology during preproduction. At Monolith, I had an unusual stretch of about 9 months of R&D time to develop the FEAR planner and NavMesh, but that was mainly because the engine guys were busy writing a new renderer and integrating the physics system. If you're at a developer that licenses tech like Unreal, it's harder to justify a long preproduction.
It's difficult to make big advances "in the trenches" while keeping up with the day to day needs of a dev team, which is why I opted for a hiatus from industry, and am hiding out in academia for a while.
Adam, I enjoyed your post, but I think it needs to be said that everyone needs a reality check.
You were absolutely right to be angry about research that you knew was wasteful.
Part of the evolution of game AI requires natural selection. We need to kill off the ideas that are holding us back. And a little bit of anger is sometimes necessary to kill off those bad ideas.
Take Chris Crawford's "research" as an example. I noticed a forum post on thechaosengine.com recently that summed up the situation perfectly, so forgive me if I just paste it here:
"[...] Chris Crawford [...] has spent twenty years *failing* to make an 'interactive story.' And he's *still* harping on about interactive stories as 'the future' and saying 'games are dead.' When there's one - ONE - interactive story that achieves any modicum of success (commercial or otherwise) people might pay attention, but as there's two decades or more of failure to provide the counter-argument, until then the onus is on him to justify his opinion."
Chris spoke at the first AIIDE and ranted for a full hour on things completely unrelated to artificial intelligence while attempting to criticize the industry for not living up to his standards, all the while making it entirely too clear that he hadn't played an actual game in at least 10 years.
We're not all necessarily "touching the same elephant."
Many of us are, sure, but there are some who go on fondling rocks and bushes for decades while screaming about how they've found the REAL elephant, and that thing we're riding on with four giant legs that squirts water from its trunk is just a mirage.
We shouldn't hesitate to send such individuals packing with all due enthusiasm. We can't afford to waste our time, resources, and credibility on research that's truly frivolous. You shouldn't feel bad for a second that you got a bit angry when you saw it happening.
As for advancing the state of the art in game AI, I think Jeff's post sums it up. Some of the improvements we can make are small, incremental improvements, but some of the most interesting challenges are longer-term and can't be addressed in a single product cycle.
Which is exactly why we need more people like Jeff -- people who are willing to ask big questions and do the research the industry can't do on its own while staying grounded in the reality of what actually works.
I think part of the problem that continues to be the fence between the game AI world and academia is the game world's continued insistence that we have to strip down our AI to "fake AI" in order to wedge it into games.
I'm getting tired of the rubber stamp statements that "our players don't want realistic behaviors... they want FUN behaviors!" And yet, in review after review of the latest games, people bitch about the AI not being realistic enough. We hear it. We acknowledge it. But when it comes to developing the next cycle, the edict from on high is "we don't have enough clock cycles to do that nifty XYZ technique."
As Moore's Law trips merrily along from year to year, we have more and more processing available to us. In theory, that should give us, as game developers, the overhead we need to close the gap between the need for 60 FPS in our games and the academics who don't really care if they are rendering their half-assed, low-poly bots at 4 FPS.
Another point on this subject... I'm sick of hearing designers - and even AI programmers - make the statement "but it's not predictable!" about agent-based, emergent AI. Uh... isn't that the point? Again, look at the reviews and the comments from our customers. "The AI sucks because it is too predictable." Even the implication via statements such as "you can beat this level by doing XYZ to the AI because..." means that there is a shallowness to our creations. Why? Is it because we are lazy and don't want to write more complicated code? Is it because we are scared of the unpredictability of non-deterministic models? Is it because our designers would better be served writing static movie screen-plays than game levels? What holds us back?
I'm not saying that academia is the answer. Sometimes it seems that they can get so wrapped up in an esoteric sojourn that they cease to realize that what they are doing is not even remotely relevant. However, some of the concepts and techniques that they take the time to explore (because they don't have producers and ship-dates) are things that can map over into the game world. And, if we are truly interested in putting realism into our games (which can be fun for the player!), then what academia comes up with should be noted by us. Adapted maybe, but noted nonetheless.
I agree completely with Dave Mark. Found out about this post from his site, and just going to repost my response:
1) Currently going through the horror that is academic AI, I can confirm: hardly any of it is relevant for games. Sure, there are plenty of practical applications for this stuff in general, but not for games. And when academics say they use games to test their AI systems, they mean stuff like Pac-Man or something similarly simplified.
2) The resource vs. complexity trade-off is getting rather ridiculous. You want a "fun experience"? Try making the games more thought-provoking and emotional. Or more imaginative. There are enough WW2, racing, football, and movie-based games out there. Get creative. I swear if I had it my way I would do away with the publishing corporations.
This is turning into a great discussion! It's great to see this blog coming back to life.
> I'm getting tired of the rubber stamp statements that "our
> players don't want realistic behaviors... they want FUN behaviors!"
I wince every time I hear that.
I think you hit the nail on the head, Dave. The industry is never going to move forward until designers understand that:
1) Fun, believability, and non-trivial AI are not mutually exclusive.
2) AI is a part of your toolbox.
3) Learning how to design game AI is part of your job.
4) Disparaging your own tools doesn't make you look any smarter.
Interesting discussion. A few points:
- I believe that the greatest barrier to 'better AI' isn't the AI itself; it is our ability to communicate and present it to the player. In many ways, I think our 'brains' have outstripped our presentation abilities.
For example, as long as we are sequencing animation instead of generating motion at runtime, our set of actions is severely constrained. Lots of promising work is coming from both sides on the motion front. Audio may be a tougher problem given localization requirements, etc.
- Fundamentals like high-performance, high-quality, low-memory-cost path generation in dynamic environments are still major challenges in the game AI world. It seems like most of the academic world has moved on to a slightly different set of constraints (unmapped environments, optimizing massive data sets, etc.). Game industry needs may be a bit too specialized here to get significant academic focus.
- People widely acknowledge that content creation is the new primary bottleneck. It seems like this is a huge area for automated systems. For example, Assassin's Creed used automation to tag environment traversal locations. While this was likely more of a math issue than an AI issue, there are plenty of interesting possibilities here.
- Game AI programmers will be dealing with clock speed advances slowing down. This means more focus on multithreading/SPU coding for the next round of AI. Along with potential hardware scalability comes a new set of problems - the inability to access data on demand.
For example, game AI programmers have been able to get skeletal bone positions on demand in the past. As animation evaluation moves to other cores, access to data like this may be more limited. Short term, it means game AI production time will go to adapting to new hardware, which will steal production cycles from expanding the gameplay/AI focus.
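One common answer to the "no on-demand access" problem Brian describes is double buffering: the AI reads a snapshot of last frame's bone data while the animation system writes this frame's on another core. The sketch below is purely illustrative (the class and bone names are invented, and a real engine would use lock-free buffers rather than Python dicts):

```python
# Illustrative sketch: AI code reads last frame's bone positions from a
# read buffer, while the animation system fills a separate write buffer.
# A once-per-frame flip() at a sync point swaps the two roles.

class BoneSnapshot:
    def __init__(self):
        self._buffers = [{}, {}]
        self._read_index = 0

    def write(self, bone, position):
        # Animation system writes into the back buffer only.
        self._buffers[1 - self._read_index][bone] = position

    def flip(self):
        # Called once per frame at a sync point; swaps read/write roles.
        self._read_index = 1 - self._read_index

    def read(self, bone):
        # AI only ever sees the last completed frame's data.
        return self._buffers[self._read_index].get(bone)

snap = BoneSnapshot()
snap.write("head", (0.0, 1.7, 0.0))  # frame N being produced
print(snap.read("head"))             # frame N not yet visible -> None
snap.flip()
print(snap.read("head"))             # now readable -> (0.0, 1.7, 0.0)
```

The trade-off is exactly the one Brian raises: AI decisions are made against data that is one frame stale, in exchange for never blocking on another core.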
Designers are itching to massively increase the number of AIs in the environment. 16 or 33 ms isn't a long time to do everything needed for a frame. :) All of this is a long way of saying: I doubt we'll see an increase in CPU per NPC/agent unless the design constrains the count for other reasons.
- When people express 'it's not predictable', they may be trying to express a few different concerns:
* Some may not have a testing pipeline set up to deal with this sort of system.
* Some may not have tools for level/game designers to construct the experience in the same way they desire.
* Some may be concerned that gamers need higher contrast differences due to our presentation fidelity.
* Some may be building games which are about the player understanding behavior and learning how to react. Introducing any form of randomness reduces the player's ability to internally model the cause and effect of their actions.
None of these issues are showstoppers for exploring new concepts. In some ways it seems like the set of problems games are tackling at the moment are constrained enough that simpler solutions work well. This may be something of a chicken and egg problem.
Gah. Apologies for the random brain dump. I'm hoping this post is slightly coherent. Posting at 2:00 in the morning during an 80-hour work week - that's something familiar to people on both sides of the aisle. :)
> - When people express 'it's not predictable', they
> may be trying to express a few different concerns:
I think this is valid, but from what I've seen, when people say it's not "predictable," they usually mean "understandable" -- only they avoid that word because they don't want to sound stupid.
In other words, "I don't have any way of figuring out WHY it just did that," or, "I need better diagnostic tools."
More and more I've been working toward building integrated diagnostic systems that make it possible to peek inside the AIs' brains in a user-friendly way and really explain all of the decision-making processes that led up to a particular behavior. When you're in a design meeting and someone seems confused by what they just saw, it's very handy to be able to pause the game, hit a button, and then bring up detailed diagnostics that explain everything that just happened.
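The kind of decision-trace tooling described above could take many forms; here is one minimal sketch. Everything in it is hypothetical (the class, the agent name, the scoring scheme) — it is not from any shipped engine, just one way to make "why did it do that?" answerable on demand:

```python
# Sketch of an in-game decision trace: each AI decision is recorded with
# the options that were considered, their scores, and the winning reason,
# so a designer can pause and ask for an explanation after the fact.

class DecisionTrace:
    def __init__(self, capacity=256):
        self.entries = []
        self.capacity = capacity

    def record(self, agent, options, chosen, reason):
        # Keep a rolling window so the trace stays bounded in memory.
        if len(self.entries) >= self.capacity:
            self.entries.pop(0)
        self.entries.append({"agent": agent, "options": options,
                             "chosen": chosen, "reason": reason})

    def explain_last(self, agent):
        """What a 'pause and inspect' button might surface for one agent."""
        for entry in reversed(self.entries):
            if entry["agent"] == agent:
                ranked = sorted(entry["options"].items(),
                                key=lambda kv: kv[1], reverse=True)
                return (f"{agent} chose '{entry['chosen']}' because "
                        f"{entry['reason']}; scores: {ranked}")
        return f"no decisions recorded for {agent}"

trace = DecisionTrace()
trace.record("guard_07", {"flee": 0.2, "take_cover": 0.9, "charge": 0.4},
             "take_cover", "low health and cover within reach")
print(trace.explain_last("guard_07"))
```

The key design choice is recording the rejected options and their scores, not just the winner — that is what turns "the AI did X" into "the AI preferred X over Y and Z, and here's by how much."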
That is definitely a big part of it. Another major piece is the toolkit the designers have to direct the AI (i.e., act on what they learn from the inspection).
Part of the disconnect may be the fidelity of control some designers expect. I've worked with some in the past who want AI to run specific paths at specific times, etc. Anything that 'reduces their direct control' was a threat to their ability to create levels/gameplay.
These designers also happen to be the ones who make the most work for themselves. And who spent significant time at the end of a project bulletproofing their wiring so the player couldn't break their setups. At the same time, they did succeed at creating memorable moments.
Moving more tools to specify intent in the place of instruction seems like the way forward. Tying some of this to player intent modelling may be a good way to push this approach forward.
I can certainly understand the scary nature of attempting to debug emergent behavior. In fact, a few years back, I had the opportunity to ask a question of a panel that consisted of Will Wright, Peter Molyneux and [the designer of the original GTA?]. I asked them specifically about the challenges of testing and debugging their emergent behavior. There was an uncomfortable moment of silence before one of them [Peter?] said "That's a very good question." The consensus was that it was anxiety-inducing to watch all this exciting, neat stuff happening but knowing that at any moment it may completely implode upon itself.
That being said, if all the time spent on tweaking numerous specific, scripted actions was spent focused on the lowest level thought processes that would generate similar behaviors, it may not only be a net 0 expenditure of effort... but would generate AI that could handle all those OTHER situations that you didn't script for.
Again, I ask... why are we scared of doing this? In the end, it doesn't matter what we learn from academia if we convince ourselves that we can't (or shouldn't) use it in our games.
I was interested to see that the network has been set up. Imperial has been running an AI event for the last few years focused on academic research, with its own research programmes at the heart of the event - and when chatting to the academics afterwards it was immediately apparent that they have no idea about commercial development or the needs of developers/publishers. This was one reason why we ran a showcase AI event, Apply AI, in London in June, and will run it again in 2008. Alex Champandard ran his successful AI roundtable and the event was keynoted by Peter Molyneux. I believe we made a stab at some of the issues raised here and want to do better next year. If any of you have ideas of what we should have as topics and speakers then please let me know. We are open for papers right now. Email me: martine at applygroup.com
I've also posted some thoughts at Grand Text Auto:
Off the cuff, I'm thinking that part of the problem here may be that game AI and game graphics don't have similar relationships to their academic counterparts.
Noah, I agree completely.
In the case of graphics, decades of academic work since the 1960s have gone into developing the rendering algorithms we use in modern games (the annual SIGGRAPH conferences and related research being the most notable examples). Academic research took several decades to find the best way to get the job done, and the game industry was able to adopt all of those techniques in a fraction of the time it took to develop them in the first place.
The problems are very well-defined, and the solutions are well-understood.
In the case of game AI, that sort of massive body of work doesn't really exist. In fact, it couldn't, because game AI is built on top of game design, and modern game design hasn't been around very long yet -- videogames themselves have only been around since the 1970s, and there are entire game genres that are less than a decade old. We can apply a lot of bits and pieces from the field of academic artificial intelligence, but it's trying to solve a very different problem, and a great deal of it isn't relevant or useful for games.
The problem is compounded by the fact that it's very much a chicken-and-egg situation -- that is, design and AI very often go hand-in-hand. The current state of the art in game AI is very limited by the fact that so many game designers intentionally avoid using AI because they don't understand what's possible ... or they watched other designers make wildly unrealistic promises about AI, and took the wrong lessons from that experience ... or they mistakenly believe we're still stuck in the 1980s and only heavily scripted AI can work. We need to grow out of all that.
Part of the challenge of developing AI is going to involve working on the design side, and pushing designers out of the narrow "comfort zones" they build for themselves. Too many designers are still perfectly happy making zombie games, and that holds us all back.
I think Brian Legge and Alexjc have made some interesting comments that I'd like to respond to. Brian makes good points about the problems that developers actually face vs the problems academia is working on. Alex points out that the hard work of integrating new AI solutions into games (80% of the details) is left to developers (on AIGameDev.com: http://aigamedev.com/discussion/academia-industry-collaboration).
On the one hand, much academic research doesn't make it into commercial games because academics aren't interested in working out the mundane, practical implementation details. On the other hand, we shouldn't expect academic researchers to limit their horizons to only those things that make sense for the current generation of games.
Consider this -- as Alex points out, FEAR's innovation was working out the details of running a planner in the context of a real-time, commercial FPS. Academia doesn't care about FEAR's planner, because it's a simplified version of STRIPS planning, first introduced in 1971. The state of the art in video games in 1971 was Pong (actually, even Pong was 1972). If I approached the developers of Pong in 1971, and suggested that they really should consider integrating a planner, they would have kicked me in the balls. Or at the very least they would have laughed hysterically. This technology was totally irrelevant to games at that time, but (in my opinion) hugely relevant to games now. Academic research on planners was valuable to the game industry, just not for the genres of games that had been conceived of in 1971.
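For readers who haven't met STRIPS planning, the core idea fits in a few lines: states are sets of facts, actions have preconditions plus add and delete lists, and the planner searches for an action sequence that reaches the goal. The sketch below is only the flavor of the 1971 formulation -- FEAR's actual planner is not shown here, and the toy domain and action names are invented:

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first forward search through STRIPS-style states.

    States are frozensets of facts. Each action is a tuple of
    (name, preconditions, add-list, delete-list)."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:           # all goal facts hold
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:        # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                     # no plan exists

# Toy domain: an NPC must arm itself before it can attack.
actions = [
    ("goto_weapon", frozenset(["at_cover"]), frozenset(["at_weapon"]),
     frozenset(["at_cover"])),
    ("pick_up_gun", frozenset(["at_weapon"]), frozenset(["armed"]),
     frozenset()),
    ("attack", frozenset(["armed"]), frozenset(["target_dead"]),
     frozenset()),
]
print(plan({"at_cover"}, frozenset({"target_dead"}), actions))
# -> ['goto_weapon', 'pick_up_gun', 'attack']
```

The "80% of the details" Alex mentions is everything this sketch omits: running the search incrementally within a frame budget, replanning when the world invalidates a step, and grounding facts in real sensor data.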
So, I don't think the disconnect is entirely due to either party being on the wrong track; they are just on different tracks with different goals. I do think communication could be improved, to raise awareness of existing solutions in academia whose time might finally be right for games, or to showcase new ideas to inspire future generations of games. And I do think that developers could do more to facilitate the ability to test new ideas from AI research in commercial game engines.
The biggest problems I face in academic research are the lack of good, full-featured testbeds with access to source code and a large catalog of content (models, animations, textures, audio), and the lack of a development team. Unreal is too expensive. Torque has been a nice, affordable solution for an engine, but the content available for it is inadequate (so I've had to "borrow" from commercial games to compensate). We don't typically have teams of artists to animate or build levels. Collaborating directly with commercial development teams is something worth exploring, but our work is often exploratory, so we can't promise any concrete deliverables will come from our research projects. And, as we can see from the Pong example, there is often a disconnect between what academia wants to explore, and what industry thinks it needs.
Integrating academic research and industry is a hard problem. We can't expect dev studios to have a 20 year horizon (or longer), because most of them won't be around that long. At a minimum, as a start, I'd like to see more Game AI researchers releasing prototypes publicly to get new ideas out there, and start interesting discussions. Regardless of your feelings towards Facade or NERO, you have to commend these researchers for getting their ideas into public hands in a playable form, and working through the mundane implementation details while contributing new experiences.
I agree with Adam's original post -- that of the academic research in AI that attempts to apply itself towards games, only a minority is useful or progressive. I too get frustrated having to separate the wheat from the chaff so often. It's important to realize that with academic research, much like with any endeavor, some of it is just lousy, derivative and pointless. So try to look for the diamonds in the rough. For example, there were several exciting things to be found at last month's Intelligent Narrative Technologies AAAI workshop ( http://grandtextauto.org/2007/11/14/intelligent-narrative-technologies-2007/ ).
Paul makes great points about the close integration of AI and game design, making it hard to work on one without the other.
Jeff's right, that on an institutional level, it is really tough to get game companies and universities to collaborate. Both sides say they're interested, but issues of logistics and funding tend to snag things up.
A solution: motivated individuals can figure out ways to make industry/academia collaborations happen.
For one, experienced industry developers can temporarily leave the industry and do research in academia. Jeff is doing this, I did this, and a few others I know have as well. It's a great life experience, and you can come back to your industry projects with all kinds of new knowledge, prototypes, and even code. If you need time to develop and nurture your ideas outside the confines of a conservative game company, go for it! There are several good game labs around you could apply to as a Masters or PhD student.
Likewise, academic researchers can temporarily leave academia and join a game company, to apply your knowledge to a mainstream game. Returning to academia can be tricky though: you'd need to publish while working in industry, to keep yourself academically relevant. At a minimum, academics can consult part-time on industry projects. To drum up this opportunity requires networking at game conferences, to meet developers and propose collaborations.
Some academic researchers just make the switch to industry, bringing with them academic techniques; Damian and Rob Zubek come to mind. This is a form of industry/academia collaboration too, IMO.
If you are a researcher who never plans to leave academia, do more to focus your research to be directly relevant to games. Pay close attention to the game industry, play lots of games, attend game conferences. (It helps if you're already an avid gamer, for your interest level to already be there.) Then, in your own research, choose to work on projects that can result in publicly playable games being built, even if they are rough. You need to realize -- assuming that this is important to you -- that without something playable that demonstrates your techniques, it is unlikely your work will get used by others, especially by industry developers. This is do-able; someone like Ken Perlin is an example.
Finally, game developers could do a lot more to talk to academic researchers, to communicate these intentions. It's one thing to lurk and read papers and blog posts, and another to have conversations and network. AIIDE is good, but not yet "home" to many academic researchers. I was dismayed to be perhaps the only game industry person at Intelligent Narrative Technologies AAAI symposium, a rare North American event on the topic...
Some interesting discussion here.
Of those who've posted comments, it's not clear who actually attended the event.
I was there (I'm one of the academic leaders of the network) and thought it was a great success.
Sure, there are lots of things to work out, but there was a great deal of interesting and passionate debate. And many of the academics clearly play games, and could point to specific weaknesses of existing game AI (to the point of top notch geekiness).
One of the main difficulties (as others have mentioned) is that the problem of developing better game AI is much less well defined than developing better graphics - one key difference is there are many benchmarks for graphics developers to aim for (polygons per second, shading models etc).
Academic (and industry) researchers are good at developing more effective AI for solving problems with well defined objectives. You want your NPCs to be smarter? Define smartness and it will get done. But more FUN?
That's not a well defined problem.
However, I for one have no doubt that as the NPCs (and other aspects of gameplay) become more intelligent, and better at learning, this will lead to much more satisfying gameplay.
There are many challenges, but the network (and the field as a whole of course) has enormous potential. Academic researchers need to look at more complex games - this is happening, but slowly. One reason is that from a machine learning viewpoint, we still have much more to learn even from relatively simple games. Complex video games are hard to tap into. Simple and open APIs would be a great help.
Also, it was good to see so many industry people making very positive statements about the event.
I'm very much looking forward to the next one!
This is perhaps a bit more summary of previous comments than anything, but one of the causes of the gap between industry & academia in AI is that the toughest problems facing developers are those of *communication*, and not intelligence.
And that's communication in a number of forms:
* Communicating the intent of AI characters to the player to make them understandable, and therefore believable. (For instance, trying to make even the most basic AI behaviors for attacking melee characters believable results in lots of problems - ie. the "kung-fu movie" logic problem).
* Communicating the decision making processes of characters to designers and other non-technical folks (or just non-AI programmers) to be able to work with those characters' behavior - for designers to be able to plan mechanics & level design around the nuances of those behaviors, for instance. There's a lot of skills in visual information design that are relevant here, but otherwise have no connection to AI research.
* Communicating to designers how to achieve their authorial intent without requiring brittle scripted approaches. This is actually more of a two-sided problem for AI developers - as Brian and Paul point out, it's up to us to convince designers they will have the same flexibility/authorability. Authorability is also a key distinction as to why some academic AI approaches work in games, and some don't (e.g., NNs, GAs). But a lot of academia still doesn't understand that this is a hard and fast requirement (with a few notable exceptions *cough*jeff*cough*). We're creating experiences, and not necessarily simulating intelligence, so the ability to override any systemic behavior in very explicit circumstances is just a foregone conclusion in the industry, but very rarely so in research. So we have to explain to designers why they would benefit from more systemic behavior, while at the same time explaining to academia why some undirectable types of systemic behavior just don't work.
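The override requirement in the last point can be made concrete with a small sketch. Everything here is invented for illustration (the behavior names, the trigger, the utility scores) — the point is only the layering: explicit authorial intent always wins over the systemic layer:

```python
# Sketch of an authorable behavior selector: designer-authored overrides
# are checked first and win outright; only if none fires does the
# systemic (utility-scored) layer choose a behavior.

def choose_behavior(agent_state, overrides, systemic_scores):
    """Designer overrides take precedence over systemic scoring."""
    for condition, forced_behavior in overrides:
        if condition(agent_state):
            return forced_behavior  # explicit authorial intent, no scoring
    # Fall back to the systemic layer: highest-utility behavior wins.
    return max(systemic_scores, key=systemic_scores.get)

# Designer says: during the scripted ambush, this NPC must hold position.
overrides = [(lambda s: s.get("in_ambush_trigger"), "hold_position")]
scores = {"patrol": 0.3, "investigate": 0.7, "flee": 0.1}

print(choose_behavior({"in_ambush_trigger": True}, overrides, scores))
print(choose_behavior({"in_ambush_trigger": False}, overrides, scores))
```

This is why some academic approaches transplant poorly: a monolithic learned policy offers no seam where a designer can insert that first loop.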
As far as processing power goes, I mean, yeah, it is a concern, and part of the onus is on us (AI developers) to find more optimal ways of building those same algorithms. And so I'll admit, there can occasionally be some natural resentment there - even if I choose an "academic" solution for something, I have to make it run multi-threaded, across SPUs, and in a fraction of the time.
But then again, all these additional constraints are what make it fun - for me, anyway. :)
This post makes me feel less bad about missing the London event -- I got notice too late and was already committed to being in Germany on the 12th. But I've attended two of the previous Imperial events and know what Adam is talking about.
I think what academics need to realize is that bleeding-edge, easily publishable true science does not tend to be what industry needs --- games or others. Basically, 10 year old academic results would often be a huge leap forwards for industry, and a lot more reliable and easy to transfer.
What industry doesn't realize is that getting such research into industry is almost impossible for an academic to fund. It's not just a matter of writing papers, it's a matter of finding time from any source. Try writing a grant to a research council saying you are going to make some of your old code industry-quality bug-free and make some really useful tools that will change the world. No interest whatsoever --- that's not what *their* budget setters think of as "science".
I would say the best solutions are some subset of the following:
* Games companies taking bright undergraduates on placement years, especially ones connected with / already doing research with AI researchers. In the UK, it is unusual to have students doing research this early, but some students are that keen and some academics are that open-minded that it does happen.
* Games companies paying academics as consultants (and academics taking the time to take courses to be good ones). Again, hiring former undergraduates and MScs might help make a bridge so that the academics and companies can talk to each other efficiently.
* Academics who really care about games researching stuff that is actually useful to games companies now, not just cutting-edge learning algorithms or logic or some such. For example, I research making it easier to program humanoid AI. When you come down to it, that's as much education and HCI as it is AI (though it certainly *is* AI, since it deals with choosing representations that are easy to build *and* can provide intelligent control).
One thing games companies and academia have in common is that both sides think they have absolutely no time --- academia because they keep getting shoveled more students, teaching and admin, so that doing any research at all is a miracle, and games companies because they are constantly afraid of going bankrupt before their next game makes it to market. Deciding that you will make time / money / effort to make games AI work is a risky strategy. I don't see any way around that.
I agree wholeheartedly with Andrew (Stern)'s observations and suggestions on the merits of movement between the academic and industrial AI realms. I made the move from academic AI research to the games industry last century, and even though I considered myself well informed on "real world" AI, it was an eye-opening experience. These arguments were well developed in the late 90s, and I took a stance then that, as Adam noted in his original post, was naive. The experiences I have had since have framed my opinions better.
In addition to the many good points made here I would also like to add the following.
Firstly, when I've talked to academics in the past they often seem to forget that there are many people working in the games industry who have completed post-graduate studies. In the last eighteen months I've worked with 7 PhDs and at least a score of people with Master's degrees, many of them in AI and physics. We still haunt good sites (like this one) and try, where time permits, to keep up with pure AI research, and on even rarer occasions to experiment ourselves.
Secondly, the overwhelming factor in games development is time, but almost as terrifying is its ugly sister, risk. During my stint as a research fellow in particular, I had the time and opportunity to explore avenues that led to no notable gains at all except to my own understanding of the subject, because I was prepared to take risks I couldn't necessarily take on my own study programme. So expecting a games company to place any kind of trust in a project from an external source is perhaps misguided: the risks of things going awry increase when any kind of external body is involved, and moreover the company has less opportunity to manage that risk. If a games company is reluctant to use established games middleware due to risk, what would attract it to using scholarware (for want of a better word)?
Finally, in a similar vein to the need to manage risk, there is a more apparent culture of immediacy in industry than in academia. As hard as we try to put a game together smoothly, schedules and deadlines are fluid, and I have yet to work on a game where I was suddenly given more development time rather than told I had less. Any significant body of work (and by that I mean 6 months+) is going to be particularly fragile when schedules have to be squeezed and it is on the critical path. This is the main reason games programmers end up working 80-hour weeks to get things done - not that we are bad planners, just that a project as complex as a game is very, very hard to plan. Would it be right to expect an academic researcher to make the same sort of commitment (they'll have plenty of that during their write-up ;) )?
It's not all doom and gloom though and I do have some suggestions for anyone in academia wanting to better engage industry.
Firstly, as much as I would love to attend workshops and conferences (one of the very few perks of my own time as a researcher), I, like many colleagues, find it very hard to find the time or opportunity in my schedule to get to them, especially when you consider that they tend to run during working hours and are sometimes spread over a few days. We're probably missing out on a lot, but what can you do? If I were a researcher again hoping to engage industry, I'd be far more likely to venture into their territory and try to get to a few IGDA chapter meetings. Developers are, perversely, a very sociable and chatty bunch, and we can't resist the offer of (often free) drinks. Five minutes one-to-one in a situation like that might create as many opportunities as a broadcast situation like presenting a paper.
Secondly, shorter, self-contained, non-open-ended projects are far more attractive to the industry than more general, riskier and less immediate work. There is an entire burgeoning industry of casual gaming (by which I mean shorter episodic games, rather than games that aren't serious), and these sorts of downloadable games are going to be very big business for the games industry (and hence very, very attractive). If presented with a smart, neat, fast, stable, focussed and almost complete piece of scholarware, any canny designer (and they are a very canny breed, trust me) will be a good judge of whether and how it can be exploited in a game sense. A few projects like this under the belt would demonstrate the practicalities and needs of both sides. It would ease concerns about risk and help with the immediacy needed by the industry, as these projects tend to be sprints rather than marathons.
Thirdly, as Joanna and Andrew pointed out, migration between academia and the games industry is imperative. I don't know about consultancy fees (it's getting increasingly expensive to make full-blown AAA games now), but a trade of knowledge in return for partially funding studies could be advantageous to both parties. I've worked with some very talented degree placement students, but less so at post-grad level. Is this something that needs addressing? How about term-long sabbaticals or occasional course teaching by industry people?
Finally, one thing that shouldn't be forgotten is that what you actually see in a completed game is about 60% of the programming effort, so if judgements on how academia can benefit industry are based purely on that, then there's plenty being missed - by which I mean tools. I've applied a good deal of my own research to offline tasks: using GAs to optimise landscape streaming, autotuning physics, optimising speeds on circuits. All things that had existing tools, but all things that could be improved in terms of speed and performance. As these were improvements to existing technology, the risks involved were much, much less, and as these tools were typically used over multiple projects, there was less worry about immediacy. As these are highly specific problems, they are unlikely to be solved without in-depth, prolonged communication between us.
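The appeal of that kind of offline tools work is that a GA doesn't need to be directable or run in a frame budget. A toy sketch of the shape of such a tool (the cost function here is a stand-in - a real one might measure streaming stalls for a candidate parameter set, as in the landscape-streaming example above):

```python
import random

# Toy sketch of offline GA parameter tuning for a tools pipeline.
# The objective below is a placeholder; in a real tool it would be a
# measured quantity (e.g. streaming hitches for a candidate config).

def cost(params):
    # Stand-in objective: squared distance from a "known good" setting.
    target = [0.3, 0.7, 0.5]
    return sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=30, generations=100, n_params=3, sigma=0.1, seed=42):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]   # uniform crossover
            child = [min(1.0, max(0.0, g + rng.gauss(0, sigma)))
                     for g in child]                           # Gaussian mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print("best params:", best, "cost:", cost(best))
```

Because survivors carry over unchanged each generation, the best configuration found is never lost - exactly the kind of slow, safe, overnight optimisation that's fine in a tool and unacceptable at runtime.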
By the way this thread has been mentioned on the highly esteemed Kotaku website - http://kotaku.com/ - expect a lot more hits from the games industry after the break.
The URL for the Kotaku article Griff refers to is: http://kotaku.com/337052/on-academic-and-industry-collaboration
Just a few comments on this excellent discussion - I too was at the event at Imperial, and thought it was a very useful early step in the right direction. Many of the points made here about the different constraints and goals of academia and industry are perfectly valid, but we should remember that they also apply to lots of other areas of applied research. If you have a look round other departments at Imperial, you will find that a large part of the research done there is in collaboration with industry, and these collaborations face many of the same difficulties we are experiencing. Nevertheless, all parties must feel that they gain something of value from them. I suspect the real difference is that these other areas have had decades of good quality communication to grow meaningful relationships, typically between academics and industrial researchers who have a sympathy for each other's world. Without this, we just land up blaming the other side for being what they cannot help being - different to us.
The other difference, I suspect, would be the level of funding by industry and public bodies - I would guess that games research is comparatively under-funded by both sources. Probably a prerequisite to this really working would be to have more development companies that are sufficiently large to be able to look further into the future. The issues of cost and risk have been highlighted here already, but it's worth pointing out that most research-active companies in other fields try to have a 'portfolio of risk', which spreads their bets across different kinds of project. Maybe that requires a certain scale and stability that few developers currently have? This will hopefully happen as our industry matures, and comes to a better understanding of just what games are, and what the appropriate technologies are to draw upon.
The trick here I think is to look beyond any one individual project, and to envisage collaborations that involve a whole sequence of projects, with the lessons from the earlier ones being used to make the later ones much more immediately relevant.
Great thread, keep it coming
I jumped from game dev into academia a few years back to see if I could get more time to develop my own ideas. Turns out it's hard in academia to get time to develop ANYTHING :)
I think Paul Miller hit the nail on the head in his post though. A lot of this comes down to funding. I think we could see a lot of useful games research coming out of academia if there was actually money involved in it.
One thing I'm doing with my PhD students is putting them in contact with as many people in the industry as I can, to act as occasional supervisors. It's a low-intensity arrangement where the student can show demos to the industry guy and get feedback or suggestions. I know it's not hugely useful, but it's a good way of starting to bridge the gap. Of course, it also helps that we're a games-oriented department and the student is focussing on a game technology (physically based character animation).
I can see a few universities with good games programmes moving forward with more games-related research, but it probably won't be coming from the Russell Group places.
Which is why I got so bloody irate with the guy from EA at Games:Edu :)
It's amazing: I go to these academic things and want to rant about how they don't get industry. I then go to the industry things and want to rant about how they don't get academia :)
Random thought on funding...
I wonder how much additional funding for advancement in game AI will come from the rise of the "serious games" movement? As more attention is paid to the game industry by the likes of government, business, science and even academia (as a consumer of the end product), there is also an increase in the demands put on the "realism" of the AI. One major differentiating factor is that, for the serious games movement, the focus is on improving the simulation rather than the entertainment value. That runs counter to the game-industry mindset that specifically steers us away from advanced AI.
Because of that demand, more attention will be paid to that aspect of development, and to the question of "what else can we inject into this?"
Anyway... just pondering...