Periodically users ask me what my setup is. Usually the user wants to set up a really nice machine to run X-Plane at its best and figures "let's find out what the guys who write X-Plane have."
But ... my main development machine is definitely not selected to be the best possible X-Plane machine. I put together a system to:
- Debug X-Plane productively.
- Test all aspects of X-Plane.
- Create the global scenery as fast as possible.
Being an Intel Mac, the machine is triple-bootable into OS X, Vista (someone in the company has to have it), and Ubuntu Linux.
Right now I have a Radeon 4870 in the machine and an 8800 on my shelf. I do recommend the 4870 to Mac users - it's a very nice card. But for my purposes it has one annoying problem: it takes up the space for the second graphics card slot and both power connectors...I may go back to a lower-powered card so I can have one NV and one ATI card in the machine at the same time - a great configuration for debugging. (I do not recommend that any user ever mix graphics card brands..."don't try this at home", etc.)
Maxing out X-Plane isn't on the priority list. In particular, past these goals, the faster the machine, the less likely I am to notice a problem.
An example: during 930 development, for some period of time, we had accidentally set the code to allocate an extra 1 GB of RAM at startup. Oops! The embarrassing part: neither Austin nor I noticed for weeks. Both of our machines have plenty of RAM, and OS X has a decent VM system, so we just ran, using a lot of memory.
Then one day I try to start X-Plane on my laptop and the whole machine nearly catches on fire. Sure enough...an extra 1 GB of RAM is being grabbed.
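For illustration, the bug was the moral equivalent of something like this (a hypothetical sketch - the names and numbers are made up, not the actual X-Plane code):

```cpp
// Hypothetical sketch of the kind of bug described above - not X-Plane code.
#include <cstring>

// A scratch pool bumped up for an experiment and never dialed back.  On a
// dev box with plenty of RAM, the OS hands this over without complaint; on
// a small laptop, the extra gigabyte pushes everything else into swap.
static const unsigned long kScratchBytes = 1024UL * 1024UL * 1024UL; // oops: 1 GB

static char * s_scratch = 0;

void init_engine()
{
	s_scratch = new char[kScratchBytes];
	std::memset(s_scratch, 0, kScratchBytes); // touching the pages commits real memory
}
```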
The moral of the story: I'd rather not have a machine that hides things from me, if it doesn't affect productivity.
5 comments:
THAT'S the moral of the story? That possession of a capable computer leads to sloppy, inattentive programming?
Right.
This reminds me of Smeed's law. It's meant to be about traffic, but it isn't hard to see how the concept applies to, well, anything. In computer terms, when hardware improves, software tends to degrade with it, simply because it becomes more tolerable to let it.
I'm not saying the computer experience hasn't improved over the decades. But if you ever wonder why things just aren't as fast today as you'd have imagined back when hardware was a few orders of magnitude slower, this is it. Of course, it can be attributed to all sorts of factors, not just inefficiency (though a lot of today's software *could* be a whole lot more efficient without killing functionality). Still, when code naturally runs faster, it will usually get slower one way or another.
Same deal with robustness. You might think that after all this time, people will have learned to write robust code. But with time to learn and new technological help, even deceptively simple software still crashes, has bugs, and is vulnerable to exploits. If this changes, it's mostly due to market pressure, not some new technological improvement or more programmers becoming wise all of a sudden.
People don't care about making things 'perfect' in some sense. They care about making things tolerable.
First Anon: yep. Programmers are human, and few things motivate a programmer to write a FAST program like being faced with the horrid user experience of a slow program EVERY day. :-)
Second Anon: right - the efficiency of the code might decrease with increased hardware, but often we can live with this. I believe the cost of writing software is worse than linear to its complexity - that is, writing a big program isn't just harder than writing a small program, it's _much_ harder...the ability to write at a higher level of abstraction (which also tends to soak up CPU cycles) is necessary to write a program big enough to fully exploit that new hardware. Or something like that.
I wasn't really aiming at micromanagement to squeeze the most out of every single instruction. That's rarely productive, and rather impossible at any non-trivial scale. It could also conflict with other goals. Say you use C++ templates or inlines liberally on the grounds that the specialized code won't do redundant computations; if you get a bazillion instantiations, the result will be huge, outgrow the caches, etc. Not much left of the potential efficiency of specialized code. Side note: wasn't it too much code (generated by templates) that caused the linker error on hacksoflife, actually?
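To make that concrete, a minimal sketch (hypothetical code, nothing from X-Plane or the linker story):

```cpp
// Each distinct type parameter stamps out a separate compiled copy of the loop.
#include <vector>

template <typename T>
T sum_all(const std::vector<T>& v)
{
	T total = T();
	for (typename std::vector<T>::const_iterator i = v.begin(); i != v.end(); ++i)
		total += *i;
	return total;
}

// Four instantiations = four copies of the same loop in the binary.  Multiply
// by hundreds of templates and dozens of types, and the code outgrows the
// instruction cache - erasing the win from avoiding "redundant computation".
int    sum_i(const std::vector<int>&    v) { return sum_all(v); }
float  sum_f(const std::vector<float>&  v) { return sum_all(v); }
double sum_d(const std::vector<double>& v) { return sum_all(v); }
long   sum_l(const std::vector<long>&   v) { return sum_all(v); }
```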
What I had in mind was the 'programmers' who write universally horrid code. I don't even know how to quantify it, but you see things like terrible algorithms, if not outright brute force; odd concepts that each map to only a few bits of code, and barely at that; no elegance at all; just overall low quality. It makes me feel like I've witnessed one of the more probable 'working' results of monkeys with compilers. Hmmm, that might not even be far from the truth. Maybe we are all monkeys. :-)
Hey Benjamin,
First of all I would like to say I appreciate your hard work. I realize coding is your art and X-Plane is your masterpiece, and you are a very fine artist. On a more hardware-related note, I stumbled upon a web site that compares CPUs and graphics cards using a standardized downloadable program that puts the computer through a battery of tests. The program then uploads the test results to a database where you can compare your computer's hardware against the database in a nice, easy-to-read graphical format. The database and graphical comparisons are free for everyone to look at, even if you don't test your own computer. Here is the link: http://www.cpubenchmark.net/