(in which I tried to argue that threading is a "how" and not a "what" when it comes to feature requests) a user made this comment:
I'd like to side-step around the details of cost-benefit analysis (e.g. do the sales from low-end systems pay for the development of a renderer with lower system requirements) but take a second to focus on three general issues:
- Is there a cost to developing a scalable renderer?
- How does the trend of hardware development affect scalability?
- How do marketing forces affect both of the above?
Scalability

Is there a cost to writing a renderer that can run on a wide range of hardware? Absolutely. Obviously we have to write more code to do that.
But there is an additional cost: there are some rendering engine design decisions that have to be made system-wide. It's not practical to provide different scenery files for different hardware (since we are limited by distribution on DVD). In some cases we have to pick a data layout that is non-ideal for the highest-end hardware in order to support everyone.
But: before you take up arms against your fellow X-Plane user who is holding you down with his GeForce 2 MX and P-III 800 MHz machine, bear in mind that the problem of picking a data format is a bit unavoidable. Even if we targeted the highest-end machines and told everyone else to jump in a lake, those decisions would appear to target rather quaint machines only a year into the version run. At some point we have to draw a line in the sand.
There is some light at the end of the tunnel when it comes to scalability: as computers become (on average) bigger and faster, we can start to defer at least a little bit of the work of scenery generation until the sim is running. When we first designed the new scenery system (for X-Plane 8), most users did not have dual-core machines, so doing work on the scenery at run-time was very expensive. We preprocessed as much as possible. This isn't as necessary any more.
So are high-end users limited by having one renderer that fits all sizes? Perhaps a little bit, but any design choice is only going to fit one hardware profile perfectly, and hardware is a moving target; today's shiny new toy is tomorrow's junk.
Hardware Growth

Every two years (to be very loose about things) the number of transistors we can pack on a chip doubles. This "transistor dividend" can be turned into more cores for a CPU, or more shading units (which are now really just cores) for a GPU.
And this gets to the heart of why I don't think we can say "forget the low-end" any time soon. Imagine that we support 6 years of hardware with X-Plane, and the best hardware is 8 times as powerful as the low-end hardware. Fast-forward two years - we drop two years of hardware at the low end, and two years of new ATI and NVidia graphics cards come out. What is the result?
Well, the newest hardware is still 8x as powerful as the old hardware, but the
difference in the polygon budget between the two has now doubled! In other words, the gap in absolute performance is doubling every two years, driving the two ends of our hardware spectrum farther apart. (Absolute performance is what Sergio and I have to worry about when we design a new feature. How many triangles can we use is an absolute measurement.)
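The arithmetic here is easy to sketch. Here is a toy model (my numbers, not anything from the X-Plane codebase) that assumes performance doubles every two years and a six-year support window, so the newest card is always 8x the oldest supported one:

```python
# Toy model: the relative gap stays constant at 8x, while the
# absolute gap doubles every two years.
# Assumptions: performance doubles every two years (a loose reading
# of the transistor dividend) and a six-year support window.

def performance(year):
    """Relative performance units, doubling every two years from year 0."""
    return 2 ** (year / 2)

for year in range(0, 9, 2):
    low = performance(year)        # oldest supported card (6 years behind)
    high = performance(year + 6)   # newest card
    print(f"year {year}: ratio = {high / low:.0f}x, "
          f"absolute gap = {high - low:g} units")
```

The ratio column never moves, but the absolute gap column doubles on every row, which is exactly the "two ends drifting apart" problem described above.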
If we say "okay forget it, only 3 years of supported hardware" that gets us out of jail for a little while, but eventually even the difference between the newest and slightly off-the-run hardware will be very large.
A gap in hardware capability is inevitable and it will only get worse!
Market Divergence

You may have noticed that the above paragraph makes a really gross assumption: that the lowest end hardware we support is the very best card on the market from a certain number of years ago. Of course this isn't true at all. The lowest end hardware we support was probably pretty lame even when it was first made. The GeForce FX 5200 was never, even for a microsecond, a good graphics card. (It was, however, quite cheap even when first released.)
So the gap we really have is between the oldest low-end and newest high-end hardware, which is really quite wide. Consider that in May 2007 the GeForce 8800 Ultra was capable of 576 GFLOPs. Two months later (July 2007) the GeForce 8300 GS was released, packing a whopping 22 GFLOPs. In other words, in one video card generation the gap between the best and worst new card NVidia was putting out was 26x! (I realize GFLOPs isn't a great metric for graphics card performance - really no one metric is adequate, but this example is to illustrate a point.)
Let's go back in time a few years. In February 2002, NVidia released the GeForce 4 Ti (high-end) and MX (low-end). The slowest MX could fill 1000 MT/s, while the fastest Ti could fill 2400 MT/s. That's a difference in fill rate of "only" 2.4x.
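Putting the two generations side by side makes the divergence stark. A quick check using the figures quoted above:

```python
# Spread between NVidia's best and worst new cards in each generation,
# using the numbers quoted above (different metrics, same point).
gap_2002 = 2400 / 1000   # GeForce 4 Ti vs. GeForce 4 MX, fill rate in MT/s
gap_2007 = 576 / 22      # GeForce 8800 Ultra vs. 8300 GS, GFLOPs

print(f"2002 spread: {gap_2002:.1f}x")   # 2.4x
print(f"2007 spread: {gap_2007:.0f}x")   # roughly 26x
```

In five years the spread within a single product generation went from about 2.4x to about 26x, an order of magnitude wider.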
What's going on here? Commodification! Simply put, graphics cards have reached the point where a lot of people just don't care. Very few users need the full power of a GeForce 8800, so a lot of lower-end machines are sold with low-end technology - more than adequate for checking email and watching web videos. This creates a market for low-end parts and widens the "gap" for X-Plane. Dedicated returning X-Plane users might do the research and buy the fastest video card, but plenty of new users already have the computer, and it might have something unfortunate (like a Radeon X300 or Intel GMA950) already on the motherboard.
As X-Plane's hardware needs diverge from the needs of mainstream computer users, we can expect some but not all of our users to have the latest and greatest. We can also expect plenty of new users to have underpowered machines.
Let me go out on a limb (I am not a technologist or even a hardware guy, so this opinion isn't worth the bits it is printed on) and suggest this: we're going to see a commodification fall-off in the number of cores everyone has too. Everyone is going to have two cores because it is cheap to put a second core on the main CPU if it lets you get rid of a whole array of special-purpose hardware. Give me multi-core and maybe I can get away with software-driven rendering (who needs hardware acceleration), software-driven sound (goodbye DSP chips), maybe I can even find cheaper ways to build my I/O. But 16 cores? The average user doesn't need 16 cores to check email and run Windows 7.
So as transistors continue to shrink and it becomes possible to pack 8 or 16 cores on a die, I expect some people to have this and others not to. We'll end up in the same situation as the graphics chips.
Summing It Up

To sum it up: sure, there may be some drag on X-Plane in supporting a wider range of hardware. But it's an inevitable requirement, because hardware shifts in capability even during a single version run, and as hardware becomes faster, the gap between high-end and cheap systems gets wider.