Monday, December 28, 2009
Here's the problem: as hardware has been getting faster, the amount of data (in the form of detailed airplane models) needed to keep the hardware running at max has gone up. But the process of modeling an airplane hasn't gotten any more efficient; all of that 3-d detail still needs to be drawn, UV mapped and textured. Simply put, NVidia and ATI are making faster GPUs but not faster humans.
That's why I was so excited about order independent transparency. This is a case where new graphics hardware and nicer looking output mean less authoring work, not more. (The misery of trying to carefully manage ordered one-sided geometry will simply be replaced by enabling the effect.)
My Daddy Can Beat Up Your Daddy
Cameron was on FSBreak last week discussing the new CRJ...the discussion touched on a question that gets kicked around the forums a lot these days: which allows authors to more realistically simulate a particular airplane...X-Plane 9 or FSX?
This debate is, to be blunt, completely moot. Both FSX and X-Plane contain powerful enough add-on systems that an author can do pretty much anything desired, including replacing the entire host simulation engine. At that point, the question is not "which can do more", because both can do more than any group of humans will ever produce. As Cameron observed, we've reached a point where the simulator doesn't hold the author back, at least when it comes to systems modeling.
(It might be reasonable to ask: which simulator makes it easier to simulate a given aircraft, but given the tendency to replace the simulator's built-in systems on both platforms, it appears the state of the art has gone significantly past built-in sim capabilities.)
When it comes to systems modeling, the ability to put custom code into X-Plane or FSX allows authors to go significantly beyond the scope of the original sim. When it comes to graphics, however, authors on both platforms are constrained to what the sim's native rendering engine can actually draw.
So if there's a challenge to flight simulation next year, I think it is this: for next-generation flight simulators to act as amplifiers for the art content that humans build, rather than as engines that consume it as fuel. The simulator features that get our attention next year can't just be the ability for an author to create something very nice (we're already there), rather it needs to be the ability to make what authors make look even better.
(This doesn't mean that I think that the platforms for building third party "stuff" are complete. Rather, I think it means that we have to carefully consider the amount of input labor it takes to get an output effect.)
Sunday, December 27, 2009
For this reason, I try to keep my ears open any time an author cannot understand the documentation, cannot find the documentation, or in another way has a problem with the scenery website. These "first experiences" give an idea of the real utility of the docs. Writing a reference for people who know things is not very hard - writing an explanation for new authors is much harder.
One author who was converting from MSFS to X-Plane pointed out a problem with the documentation that I hadn't realized before - but I think he's on to something. He pointed out that the information you need to complete a single project is often spread out all over the place. You probably need to look at an overview document, a file format, and a tool manual, each of which describes about 1/3 of the problem. To make it worse, each document assumes you know what the other ones say!
Who would write such poor documentation? A programmer, that's who. (In other words, me.) In computer programming, the following techniques are all considered good, even essential, design:
- Breaking a problem into smaller sections.
- Limiting cross-references between these sections as much as possible.
- Keeping the sections independent.
- Not duplicating information (code).
This is definitely not the only problem with the documentation, which also suffers from a lack of clarity, is often incomplete, and could use a lot more pictures. I am a programmer, and I am paid to make X-Plane faster, better, etc. It can be very hard to find the time to work on documentation when the next feature needs to be done. And yet...without documentation, who can use these features?
Now that I've at least figured out that factorization is bad, for the short term I am going to try to write "non-factored" documentation. The first test of this will be MeshTool. I am doing a complete rewrite of the MeshTool README. Like the AC3D README, MeshTool needs a full large-scale manual of perhaps 10-20 pages, and the task needs to be approached recognizing that the manual is going to contain a huge amount of information.
When I work on the MeshTool manual I will try to approach it from a task perspective, with explanations of the underlying scenery system, rather than a reference perspective that assumes authors might know why the given techniques work. We'll see if this creates a more usable manual.
Friday, December 25, 2009
- MeshTool 2 beta 4.
- WED 1.1 developer preview 2.
MeshTool 2 beta 4 should be the last MeshTool beta short of bug fixes - that is, I've taken feature requests during the MeshTool beta, but I think we have what we need. Future feature enhancements will go in a later version.
Besides some useful bug fixes, there are two new features in this beta:
- You can specify a specific land class terrain with a shapefile.
- You can control the physical properties of orthophotos (are they wet or dry) with a shapefile without cropping out the water. This is useful for authors who want to use translucent orthophotos to create tinted water effects like coral or shallow sandy bottoms.
Also a docs update: I have posted the WED and AC3D manuals in PDF form directly on the website. The AC3D manual (and the motivation to do this) are thanks to Peter, who converted the manual from readme text to PDF. I was shocked to realize the AC3D README is almost 20 pages. Hopefully having the manual in PDF form will help people realize just how much documentation is crammed in there.
Tuesday, December 15, 2009
Now what you might not have realized is: groups have filters too! So you can hide an entire group with one filter, and you can put premade instruments in the group.
So...even though the FMS is a premade instrument with no filter fields, by grouping it you can show and hide it.
Important: the backgrounds of instruments are not shown or hidden. So if you want to do this, customize the FMS to have a transparent background, and use a generic rotary to put a real background behind it. The rotary goes in the group, and thus it can show and hide.
One more tip: clicking in 2-d goes through one instrument to another, for historical reasons. So if you pop up an instrument, you might want to hide the instruments it popped up over so clicks don't affect them.
OpenGL is not pixel exact, its per-primitive anti-aliasing (e.g. "draw me a smooth line") is inconsistent and weird, and you probably won't have full-screen anti-aliasing on your panel. (FSAA is out of your control, as X-Plane sets up the panel render.)
For this reason, I recommend "texture anti-aliasing". Basically, in this technique you draw everything as rectangles; for your lines, you use textures that have alpha at the edges. The linear interpolation of the texture forms the anti-aliased edge.*
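To make the idea concrete, here's a tiny sketch (Python, purely illustrative - the texel values and "line" texture are made up) of what GL_LINEAR filtering does to an alpha ramp at the edge of a line texture:

```python
def sample_linear(texels, u):
    """Sample a 1-D alpha texture the way GL_LINEAR filtering would:
    blend the two nearest texel centers, u in [0, 1]."""
    x = min(max(u * len(texels) - 0.5, 0.0), len(texels) - 1.0)
    i = int(x)
    j = min(i + 1, len(texels) - 1)
    frac = x - i
    return texels[i] * (1.0 - frac) + texels[j] * frac

# One row of a hypothetical "line" texture: opaque core, transparent edge.
edge = [1.0, 1.0, 0.0]

core = sample_linear(edge, 0.5)    # 1.0: the middle of the line is solid
blend = sample_linear(edge, 0.75)  # 0.25: part-way to the edge is blended
```

Every sample position between the last opaque texel and the first transparent one gets an intermediate alpha, which is exactly the smooth edge we want.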
For a detailed examination of the problem, I recommend this page. (arekkusu is a freaking genius and his treatment of the problem is very thorough!) I have also tried to explain how the hell OpenGL decides what it's going to draw here. There are a few important cases where you can be pretty sure about what OpenGL will draw, and a lot of cases where you're playing pixel roulette.
The "texture" technique also has the advantage that it leads nicely into the use of texture artwork. For example, if you need a line with an arrow-head, well, you're already drawing a line as a textured rectangle with pixels in the center and alpha on the sides. Just ask your artist to draw an arrow-head in photoshop!
* Texture anti-aliasing actually produces better results than FSAA (applied to otherwise non-AA pixels). The reason is that FSAA produces a finite number of intermediate alpha levels. The number of levels depends on the FSAA scheme, but it is always a discrete set, hence 16x FSAA looking better than 2x. By comparison, when we use textures, we get a smooth linear blend between our transparent and opaque pixels, giving us the smoothest possible anti-aliasing.
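The footnote's point about discrete levels is easy to see in a two-line sketch (Python, illustrative):

```python
def fsaa_alpha_levels(samples):
    """Every coverage value an n-sample FSAA scheme can assign to an
    edge pixel: multiples of 1/n, so only n + 1 discrete levels."""
    return [i / samples for i in range(samples + 1)]

levels_2x = fsaa_alpha_levels(2)    # [0.0, 0.5, 1.0]: three crude steps
levels_16x = fsaa_alpha_levels(16)  # 17 levels: finer, but still discrete
```

Texture filtering, by contrast, interpolates continuously between texel values, so there is no stair-stepping at all.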
Saturday, December 12, 2009
- Mesh resolution (that is, the spacing between elevation points in a mesh) is a crude way to measure the quality of a mesh. It is horribly inefficient to use 5m triangles to cover a flat plateau just because you need them for some cliffs.
- At some point, the data in a very high res mesh becomes misleading. You have a 5m mesh. Great! Are you measuring a 5m change in elevation, or is that a parked car that has been included in the surface?
But it brings up the question: how good is a mesh? If you make a base mesh with MeshTool using a 10m input DEM (the highest resolution DEM you can use right now), the smallest triangles might be 10m. But the quality of the mesh is really determined by the mesh's "point budget" - that is, the number of points MeshTool was allowed to add to minimize error.
MeshTool beta 4 will finally provide authors with some tools to understand this: it will print out the "mesh statistics" - that is, a measure of the error between the original input DEM and the triangulation. Often the error* from using only 1/6th of the triangles from the original DEM might be as little as 1 meter.
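The statistic itself is simple to sketch. Assuming we can sample both the input DEM and the finished triangulation at the same posts (the function and numbers below are hypothetical - this is not MeshTool's actual code):

```python
import math

def mesh_error_stats(dem_posts, mesh_posts):
    """Worst-case and standard deviation of the vertical error between
    the original DEM posts and the triangulated mesh sampled at the
    same spots. A sketch of the statistic, not MeshTool's code."""
    errors = [abs(d - m) for d, m in zip(dem_posts, mesh_posts)]
    mean = sum(errors) / len(errors)
    std_dev = math.sqrt(sum((e - mean) ** 2 for e in errors) / len(errors))
    return max(errors), std_dev

# Hypothetical posts: the mesh dropped most of the DEM's triangles but
# still tracks the elevations closely.
dem  = [10.0, 12.0, 15.0, 20.0, 26.0, 33.0]
mesh = [10.0, 12.5, 14.8, 20.0, 25.2, 33.0]
worst, sd = mesh_error_stats(dem, mesh)  # worst ~0.8 m, std dev ~0.3 m
```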
I spent yesterday looking at the error metrics of the meshes MeshTool creates. I figured if I'm going to show everyone how much error their mesh has with a stats printout, I'd better make sure the stats aren't terrible! After some debugging, I found a few things:
Vector features induce a lot of "error" from a metrics standpoint. Basically, when you introduce vector features, you limit MeshTool's ability to put vertices where they need to be to reduce meshing error. The mesh is still quite good even with vectors, but if you could see where the error is coming from, the vast majority would be at vector edges.
For example, in San Diego the vector water is sometimes not quite in the flat part of the DEM, and the result is an artificial flattening of a water triangle that overlaps a few posts of land. If that land is fairly steep (e.g. it gains 10+ meters of elevation right off the coast) we'll pick up a case where our "worst" mesh error is 10+ meters. The standard deviation, however, will still be quite small.
The whole question of how we measure error must be examined. My normal metric is "vertical" error - for a given point, how much is the elevation different. But we can also look at "distance" error: for a given point, how close is the nearest mesh point from the ideal DEM?
"Distance" error gives us lower error statistics. The reason is that when we have a steep cliff, a very slight lateral offset of a triangle results in a huge vertical error, since moving 1m to the right might drop us 20 meters down. But...do we care about this error? If the effective result is the same cliff, offset laterally by 1m, it's probably more reasonable to say we have "1m lateral error" than to say we have "20m vertical error". In other words, small lateral errors become huge vertical errors around cliffs.
Absolute distance metrics take care of that by simply measuring the two cliff surfaces against each other at the actual orientation of the cliff. That is, cliff walls are measured laterally and the cliff floor is measured vertically. I think it's a more reasonable way to measure error. One possible exception: for a landing area, we really want to know the vertical error, because we want the plane to touch pavement at just the right time. But since airplane landing areas tend to be flat, distance measurement becomes a vertical measurement anyway.
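Here's the cliff example as a tiny sketch (Python; the numbers are hypothetical but match the 1m-offset, 20m-cliff case above):

```python
def cliff(x, base=0.0, height=20.0, edge=10.0):
    """An idealized vertical cliff: low ground before `edge`, high after."""
    return base if x < edge else base + height

def vertical_error(true_terrain, mesh_terrain, x):
    """Vertical metric: elevation difference sampled at the same spot."""
    return abs(true_terrain(x) - mesh_terrain(x))

def lateral_error(true_edge, mesh_edge):
    """Distance metric for a vertical wall: compare the walls laterally."""
    return abs(true_edge - mesh_edge)

# The mesh reproduces the same cliff, shifted 1 m laterally.
mesh = lambda x: cliff(x + 1.0)

v_err = vertical_error(cliff, mesh, 9.5)  # 20.0 m - sounds catastrophic
d_err = lateral_error(10.0, 9.0)          # 1.0 m - the real story
```

Same terrain, two wildly different numbers - which is exactly why the choice of metric matters.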
So there I am working with a void-filled SRTM DEM for KSBD. I have cranked the mesh to 500,000 points to measure the error (which is very low, btw...worst error 3m and standard deviation less than 15 cm).
But what are those horizontal lines of high density mesh?
I wasn't sure what those were, but they looked way too flat and regular. So I looked at the original DEM and I saw this:
Ah - there are ridges in the actual DEM. Well that's weird. What the heck could that be?
This is a view with vector data - and there you go. Those are power lines.
The problem in particular is that SRTM data is "first return" - that is, it is a measurement of the first thing the radar bounces off of from space. Thus SRTM includes trees, some large buildings, sky scrapers, and all sorts of other gunk we might not want. A mesh in a flight simulator usually represents "the ground", but using first return data means that our ground is going to have a bump any time there is something fairly large on that ground. The higher the mesh res, and the lower the mesh error, the more of this real world 3-d coverage gets burned into the mesh.
So Do We Really Care About 5m DEMs?
The answer is actually yes, yes we do, but maybe not for the most obvious reasons.
The problem with raster DEMs (that is, elevation stored as a 2-d grid of heights) is that they don't handle cliffs very well. A raster DEM cannot, by its very format, represent a truly vertical cliff. In fact, the maximum slope it can create is

arctan(cliff height / DEM spacing)

Which is math to say: the tighter your DEM spacing, the steeper the slope we can represent for a cliff of a given height. Note that the total cliff height matters too, so even a crude 90m DEM like the SRTM can represent a canyon if it's really huge**, but we need a very high res DEM to get shorter vertical surfaces.
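The formula is easy to play with (Python; the 247m/70-degree pair matches the SRTM footnote at the end of this post):

```python
import math

def max_slope_deg(cliff_height_m, dem_spacing_m):
    """The steepest slope a raster DEM can represent: the full cliff
    height has to span at least one post spacing horizontally."""
    return math.degrees(math.atan(cliff_height_m / dem_spacing_m))

print(round(max_slope_deg(247, 90)))  # 70 - a huge canyon still looks steep
print(round(max_slope_deg(20, 90)))   # 13 - a short bluff flattens out
print(round(max_slope_deg(20, 10)))   # 63 - a 10m DEM recovers it
```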
So the moderated version of my rant goes something like this:
- High res input DEMs are necessary to represent small, steep terrain features if we are using raster DEMs.
- High res meshes are not necessary - we only need high resolution in the parts of the mesh where it counts.
- Let's not use mesh res to represent 3-d on the ground, only the ground itself.
The meshing data format inside MeshTool could probably be made to work with contours, but I haven't seen anyone with high quality contour data yet. We'll probably support such a feature some day.
* Really this should be "the additional error", because when you get a DEM, it already has error - that is, the technique for creating the DEM will have some error vs. the real world. For example, if I remember right (and I probably do not) 90% of SRTM data points fall within 8m vertically of the real world values. So add MeshTool and you might be increasing the error from 8m to 9-10m, that is, a 12-25% increase in error.
** For the SRTM this might be a moot point - the SRTM has a maximum cliff slope in certain directions defined by the relationship between the shuttle's orbit and the latitude of the area being scanned. The maximum cliff at any point in the SRTM is 70 degrees, which can be represented by a 247 m cliff using a pair of 90m posts.
Friday, December 11, 2009
It turns out it was an uninitialized variable in code that was never used until NV changed their drivers. As far as I can tell, NV dropped support for FSAA in 16-bit mode a few months ago, at least on some of their newer GPUs. (It is also possible that the incantation necessary to get FSAA has changed a lot and I simply don't know what it is.)
So the dialog between X-Plane and the video card ran something like the Monty Python cheese shop sketch:
X-Plane: So ... can you do full screen anti-aliasing?
GeForce 8: Oh yes, of course! (Please, I'm a GeForce 8 card.)
X-Plane: Splendid! So...how about 16x FSAA?
GeForce 8: Sorry, I can't do that.
X-Plane: Ah. How about 8x FSAA?
GeForce 8: Sorry, can't do that either.
X-Plane: I see. Well then, how about 4x FSAA?
GeForce 8: Nope.
X-Plane: 2x FSAA?
GeForce 8: No way.
X-Plane: Ah. I see.
At this point in the dialog X-Plane would promptly lose track of what it had been doing in the setup process, throw out its notes on the GPU setup, and then freak out a bit later when it realized its note taking left something to be desired.
This is the first case I've hit where a video card advertises FSAA and can't actually do it.
Anyway, if you have hit this bug:
- Update to 941 final - it should fix it.
- Stop trying to run with FSAA and 16-bit color. This is a somewhat crazy combination. FSAA attempts to clean up rendering artifacts at the cost of fill rate. 16-bit color creates artifacts to save fill rate. If your GPU needs 16-bit color to run at high framerate, it's time to turn FSAA off.
*This assumes 5551, or 565 pixels. There is a 4-bit alpha 16-bit color format, cleverly called 4444, but if you thought 16-bit looks bad...
Thursday, December 10, 2009
You can read about custom lights here. The short of it is that a custom light is a billboard on an object where you (the author) texture the billboard (with part of the object texture), pick the texture coordinates and color, and optionally run all of these parameters through a dataref* that can modify them.
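To illustrate the idea of running a light's parameters through a dataref, here is a sketch in Python. Real plugins do this in C via the XPLM dataref API, and the exact parameter layout is my assumption, not the spec - this only shows the shape of the technique:

```python
import math

def beacon_light_cb(params, sim_time_sec):
    """Hypothetical dataref-style callback for a custom light: the sim
    hands us the light's parameters - assumed here to be
    (r, g, b, a, size, s1, t1, s2, t2) - and we may rewrite them before
    the billboard is drawn. This sketch blinks the light by zeroing its
    alpha for half of each one-second cycle."""
    r, g, b, a, size = params[:5]
    on = math.fmod(sim_time_sec, 1.0) < 0.5
    return [r, g, b, a if on else 0.0, size] + list(params[5:])

# A red beacon billboard, 8 pixels, using one cell of the object texture:
beacon = [1.0, 0.1, 0.1, 1.0, 8.0, 0.0, 0.0, 0.25, 0.25]

lit = beacon_light_cb(beacon, 0.25)   # alpha stays 1.0: light on
dark = beacon_light_cb(beacon, 0.75)  # alpha forced to 0.0: light off
```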
For named lights, the light texture comes from a texture atlas that Sergio made a few years ago - it's a nice grid 8x8 pretty lights.
So...why can't you use it with custom lights? Why do custom lights use the object texture?
The answer is: future compatibility. Sergio and I are actually already working on a new texture atlas for the sim's built-in lights. (This has been a back-burner project for a while ... I have no idea when we'll actually productize this currently experimental work.) What happens when we create a new texture atlas with all of the lights moved around and scrambled? If your object referenced that texture, the texture coordinates would be incorrect.
Thus, for the lights where you specify texture coordinates (custom lights) you use your own texture. For named lights (where the texture coordinate is generated by X-Plane) it's safe to use ours.
A Dangerous Bug
I found a bug in 940 that's been in the sim for a while now: given the right strange combination of named and custom lights in a row, the sim would accidentally use Sergio's texture atlas rather than the object's texture for custom lights.
This is a mistake, a bug, and it will be fixed in the next 941 release candidate. I certainly hope there aren't any objects out there relying on this erroneous behavior, which violates the OBJ spec and is pretty dangerous from a future compatibility standpoint.
* Datarefs are normally thought of as data we read, so the idea of using them to "process" data is a bit of a bastardization of the original abstraction. You can read about the dataref scheme in detail here.
(The lists below will contain a number of "specs". Do not panic! At the end I will show you where to look this stuff up on Wikipedia.)
A modern graphics card is basically a computer on a board, and as such, it has the following components that you might care about for performance:
VRAM. This is one of the simplest ones to understand. VRAM is the RAM on the graphics card itself. VRAM affects performance in a fairly binary way - either you have enough or your framerate goes way down. You can always get by with less by turning texture resolution down, but of course then X-Plane looks a lot worse.
How much VRAM do you need? It depends on how many add-ons you use. I'd get at least 256 MB - VRAM has become fairly cheap. You might want a lot more if you use a lot of add-ons with detailed textures. But don't expect adding more VRAM to improve framerate - you're just avoiding a shortage-induced fog-fest here.
Graphics Bus. The GPU is connected to your computer by the graphics bus, and if that connection isn't fast enough, it slows everything down. But this isn't really a huge factor in picking a GPU, because your graphics bus is part of your motherboard. You need to buy a GPU that matches your motherboard, and the GPU will slow down if it has to.
Memory Bus. This is one that gets overlooked - a GPU is connected to its own internal memory (VRAM) by a memory bus, and that memory bus controls how fast the GPU can really go. If the GPU can't suck data from VRAM fast enough, you'll have a slow-down.
Evaluating the quality of the internal memory bus of a graphics card is beyond what I can provide as "buying advice". Fortunately, the speed of the bus is usually paired with the speed of the GPU itself. That is, you don't usually need to worry that the GPU was really fast but its bus was too slow. So what we need to do is pick a GPU, and the bus that comes with it should be decent.
Of course the GPU sits on the graphics card. The GPU is the "CPU" of the graphics card, and is a complex enough subject to start a new bullet list. (As if I wouldn't start a new bullet list just because I can.)
Generation. Each generation of GPUs is superior to the previous generation. Usually the GPUs can create new effects, and often they can create old effects more cheaply.
The generation is usually specified in the leading number, e.g. a GeForce 7xxx is from the GeForce 7 series, and a GeForce 8xxx is from the GeForce 8 series. You almost never want to buy a last-generation GPU if you can get a current-generation GPU for a similar price.
Clock Speed. A GPU has an internal clock, and faster is better. The benefit of clock speed is linear - that is, if you have the same GPU at 450 mhz and 600 mhz, the 600 mhz one will provide about 33% more throughput, usually.
Most of the time, the clock speed differences are represented by that ridiculous alphabet soup of letters at the end of the card name. So for example, the difference between a GeForce 7900 GT and a GeForce 7900 GTO is clock speed - the GT runs at 450 mhz and the GTO at 650 mhz.*
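The linear scaling rule of thumb is easy to sanity-check (same-core assumption as above):

```python
def throughput_gain_pct(base_mhz, new_mhz):
    """Clock speed scales throughput roughly linearly on the same core."""
    return (new_mhz / base_mhz - 1.0) * 100.0

example = throughput_gain_pct(450, 600)    # ~33%, the figure quoted above
gt_vs_gto = throughput_gain_pct(450, 650)  # ~44% more for the GTO
```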
Core Configuration. This is where things get tricky. For any given generation, the different card models will have some of their shaders removed; the "core configuration" describes what's left. Basically, GPUs are fast because they have more than one of all of the parts they need to draw (pixel shaders, etc.), and in computer graphics, many hands make light work. The core configuration is a measure of just how many hands your graphics card has.
Core configuration usually varies with the model number, e.g. an 8800 has 96-128 shaders, whereas an 8600 has 32 shaders, and an 8500 has 16 shaders. In some cases the suffix matters too.
Important: You cannot compare clock speed or core configuration between different generations of GPU or different vendors! A 16-shader 400 mhz GeForce 6 series is not the same as a 16-shader 400 mhz GeForce 7 series card. The GPU designers make serious changes to the card capabilities between generations, so the stats don't apply.
You can see this in the core configuration column - the number of different parts they measure changes! For example, starting with the GeForce 8, NVidia gave up on vertex shaders entirely and started building "unified shaders". Apples to oranges...
Don't Be Two Steps Down
This is my rule of thumb for buying a graphics card: don't be two steps down. Here's what I mean:
The most expensive, fanciest cards for a given generation will have the most shaders in their core config, and represent the fastest that generation of GPU will ever be. The lower models then have significantly fewer shaders.
Behind the scenes, what happens (more or less) is: NVidia and ATI test all of their chips. If all 128 shaders on a GeForce 8 GPU work, the GPU is labeled "GeForce 8800" and you pay top dollar. But what if there are defects and only some of the shaders work? No problem. NV disables the broken shaders - in fact, they disable so many shaders that you only have 32 and a "GeForce 8600" is born.
Believe me: this is a good thing. This is a huge improvement over the old days when low-end GPUs were totally separate designs and couldn't even run the same effects. (Anyone remember the GeForce 4 Ti and Mx?) Having "partial yield" on a chip set is a normal part of microchip design; being able to recycle partially effective chips means NV and ATI can sell more of the chips they create, and thus it brings the cost of goods down. We wouldn't be able to get a low end card so cheaply if they couldn't reuse the high-end parts.
But here's the rub: some of these low end cards are not meant for X-Plane users, and if you end up with one, your framerate will suffer. Many hands make light work when rendering a frame; if you have too few shaders, there aren't enough hands, drawing takes forever, and your framerate suffers.
For a long time the X-Plane community was insulated from this, because X-Plane didn't push a lot of work to the GPU. But this has changed over the version 9 run - some of those options, like reflective water, per-pixel lighting, etc. start to finally put some work on the GPU, hitting framerate. If you have a GeForce 8300 GS, you do not have a good graphics card. But you might not have realized it until you had the rendering options to really test it out.
So, "two steps down". My buying advice is: do not buy a card where the core configuration has been cut down more than once. In the GeForce 8 series, you'll see the 8800 with 96-128 shaders, then the first "cut" is the 8600 with 32 shaders, and then the 8500 brings us down to 16.
A GeForce 8800 was a good card. The 8600 was decent for the money. But the 8500 is simply underpowered.
When you look at prices, I think you'll find the cost savings of being "two steps down" is not a lot of money. But the performance hit can be quite significant. Remember, the lowest end cards are targeted at users who will check their email, watch some web videos, and that's about it. The cards are powerful enough to run the operating system's window manager effects, but they're not meant to run a flight simulator with all of the options turned on.
If you do have a "two step" card, the following things can help reduce GPU load:
- Turn down or off full screen anti-aliasing.
- Turn off per pixel lighting, or even turn off shaders entirely.
- Reduce screen size.
Sunday, December 06, 2009
I have read the white papers on how to optimize an application for SLI/CrossFire, and while X-Plane isn't quite a laundry list of SLI/CrossFire sins, we're definitely an application that has the potential for optimization.
Now normally more hardware = faster framerate. In particular, the limiting factor of filling in a big display with high shader options and full screen anti-aliasing can be the time it takes to fill in pixels, and more shaders mean more pixels filled in at once.* Why doesn't having an entire second GPU to fill in pixels allow us to go twice as fast?
The answer is: coordination. Normally the process of drawing an X-Plane frame goes a little bit like this:
- Draw a little bit more of the cloud shadows into the cloud shadow texture. (This is a gradual process.)
- Draw the panel into the panel texture.
- Draw the world (as seen from below the water) into the water reflection texture.
- Draw the airplane shadow into the airplane shadow texture.
- Draw the entire world using the above four textures.
In fact, the total number of dynamic textures can be even larger - if you use panel regions, there are 2 panel textures per region, and if you use volumetric fog, there are two more textures with interim renderings of the world, used to create fog effects.
Okay, so we have a lot of textures we drew. What does that have to do with multiple video cards?
Well, one reason why dynamic textures are normally fast is because, when a dynamic texture is finished, it lives exactly where we want it to live: on the video card!
But...what if there are two video cards? Uh oh. Now maybe one video card drew the water, and another drew the clouds. We have to copy those textures to every video card that will help draw the final frame.
There is a sequence to draw the right textures on the right card at the right time to make X-Plane run faster with two video cards...but the video drivers that manage SLI or CrossFire may have no way to know what that sequence is. The driver has to make some guesses, and if it puts the wrong textures in the wrong places, framerate may be slower, due to the overhead of shuffling textures around.
So SLI and CrossFire are not simple, no-brainer ways to get more framerate, the way having a faster GPU might be.
* If you have a huge number of objects, your framerate is suffering due to the CPU being overloaded, and this is all entirely moot!
Saturday, December 05, 2009
The short answer is: you could change it, but the results would be so unsatisfying that it's probably not worth adding the feature.
The global scenery is using GLCC land use data - it's a 1 km data set with about 100 types of land class based on the OGE2 spec.
Now here's the thing: the data sucks.
That's a little harsh, and I am sure the researchers tried hard to create the data set. But using the data set directly in a flight simulator is immensely problematic:
- With 1 km spatial resolution (and some alignment error) the data is not particularly precise in where it puts features.
- The categorizations are inaccurate. The data is derived from thermal imagery, and it is easily fooled by mixed-use land. For example, mixing suburban houses into trees will result in a new forest categorization, because of the heat from the houses.
- The data can produce crazy results: cities on top of mountains, water running up steep slopes, etc.
To give a trivial example, the placement of rock cliffs is based on the steepness of terrain, and overrides land use. So if we have a city on an 80 degree incline, our rule set says "you can't have a city that slanted - put a rock face there instead."
Sergio made something on the order of 1800 rules. (No one said he isn't thorough!!) And when we were done, we realized that we barely use landuse.
In developing the rule set, Sergio looked for the parameters that would best predict the real look of the terrain. And what he found was that climate and slope are much better predictors of land use than the actual land use data. If you didn't realize that we were ignoring the input data, well, that speaks to the quality of his rule set.
No One Is Listening
Now back to MeshTool. MeshTool uses the rule set Sergio developed to pick terrain when you have an area tagged as terrain_Natural. If you were to change the land use data, 80% of your land would ignore your markings because the ruleset is based on many other factors besides landuse. Simply put, no one would be listening.
(We could try some experiments with customizing the land use data...there is a very small number of land uses that are keyed into the rule set. My guess is that this would be a very indirect and thus frustrating way to work, e.g. "I said city goes here, why is it not there?")
I am working with alpilotx - he is producing a next-gen land-use data set, and it's an entirely different world from the raw GLCC that Sergio and I had a few years ago. Alpilotx's data set is high res, extremely accurate, and carefully combined and processed from several modern, high quality sources.
This of course means the rules have to change, and that's the challenge we are looking at now - how much do we trust the new landuse vs. some of the other indicators that proved to be reliable.
Someday MeshTool may use this new landuse data and a new ruleset that follows it. At that point it could make sense to allow MeshTool to accept raster landuse data replacements. But for now I think it would be an exercise in frustration.
X-Plane has been growing a larger number of independently simulated landing lights with each patch. We started with one, then four, now we're up to sixteen. Basically each landing light is a set of datarefs that the systems code monitors.
- You use a generic instrument to hook a panel switch up to a landing light dataref.
- The sim takes care of matching the landing light brightness with the switch depending on the electrical system status.
- Named lights can be used to visualize the landing lights.
But what else lights up on an airplane? Sergio sent me the exterior lighting diagram for an MD-82, and it would make a Christmas tree blush. There are lights for the staircases, for the inlets, on the wings, pointing at the wings, the logo lights, the list goes on.
We have sixteen landing lights, so we could probably "borrow" a few to make inlet lights, logo lights, etc. But if we do that, the landing light will light up the runway when we turn on any of those other random lights.
Thus, generic lights were born. A generic light is a light on the plane that can be used for any purpose you want. They aren't destined for a specific function like the strobes and nav lights. There are 64 of them, so divide them up and use them how you want. Just like landing lights, you use a generic light by:
- Using a generic instrument to control its "switch" from the panel.
- Using a named light to visualize it somewhere on the OBJs attached to your airplane.
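As a concrete sketch, the visualization step might look like this in an airframe OBJ. The named-light name below is an assumption for illustration; check the current lights.txt for the real names and their indexing.

```text
# Visualize a generic light on the left wing (coordinates in meters).
# "airplane_generic" is an assumed named-light name - verify in lights.txt.
LIGHT_NAMED  airplane_generic  -4.5  0.2  -1.0
```

The panel side works the same way as landing lights: a generic instrument drives the light's switch dataref, and the named light reads the resulting brightness.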
Friday, December 04, 2009
Land Class Terrain
The next beta will allow authors to specify built-in land-class terrains by shapefile. Landclass isn't quite as easy to work with as you might think - I'm working on a Wiki page describing how land class works with the mesh.
You can't invent your own land classes directly in MeshTool, but there are two work-arounds:
- Once you build your DSF, use DSF2Text to edit the header, changing one of our land classes to the one you want. We have 500+ land classes, so you can probably find one to cannibalize.
- Or you can just use the library system to replace the art assets for the land class within the area covered by your mesh. (You can tweak as little as one full tile.)
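To make the first workaround concrete, here is the kind of line you would change in the DSF2Text output. The terrain path below is invented for illustration; the real paths come from the terrain definitions listed in your DSF's header.

```text
# One of the terrain definition lines near the top of the text dump -
# swap the .ter path for the terrain you want to cannibalize.
TERRAIN_DEF lib/g8/example_terrain.ter
```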
Water handling and masking go together to allow you to create an accurate physical coastline. The problem is that X-Plane doesn't let you specify whether a tile is land or water using a texture/image file. Physics are always determined on a per-triangle basis.
MeshTool 2.0 beta 4 will let you specify whether an orthophoto that has water "under" its transparent areas takes on its own physics or the physics of the underlying water. (It can act "solid" or "wet".) This lets you use orthophotos to model shallow areas and reefs.
The mask feature lets you do both: it limits MeshTool to working on a single vector area, defined by a shapefile. So to make a single orthophoto have wet and solid parts you:
- Issue the orthophoto command in solid mode.
- Establish a shapefile mask for areas of your DSF.
- Re-issue the orthophoto in "wet" mode.
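Sketched as a fragment of a MeshTool script, the three steps might look like this. The command spellings are paraphrased from the steps above, not the real syntax; consult the MeshTool documentation for the exact commands.

```text
# 1. Lay down the orthophoto with solid physics everywhere.
ORTHOPHOTO solid my_coastline.png <corners>
# 2. Restrict further commands to the reef area.
SHAPEFILE_MASK reef_area.shp
# 3. Re-issue the orthophoto with wet physics inside the mask.
ORTHOPHOTO wet my_coastline.png <corners>
```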
No Z Yet
Some developers have requested that MeshTool use the Z coordinate in a Shapefile to define the elevation of water boundaries. That's a good idea, but I can't code it any time soon. The polygon processing in MeshTool is fundamentally 2-d and has no way to retain the Z information during processing. I will try to get to this feature some day, but for now it's going to have to wait.
The new beta should be available some time early next week, or now if you build from source.
But the other night I had a little bit too much to drink, got distracted, and posted these:
Plugins? Do not panic! While plugins are necessary for some of the features demonstrated here, others can be created without additional programming.
BTW, if the existing documentation uses a concept that is not explained anywhere, please email me. I sometimes leave holes in the documentation by accident.
Thursday, December 03, 2009
Driver writers have what might be the hardest combination of programming circumstances:
- Their code cannot crash or barf. If X-Plane crashes, you send me some hate email. If your video driver crashes, you can't even see to send that email.
- The driver has to be fast. The whole point of buying that new GeForce 600000 GTX TurboPower with HyperCache was faster framerates. If the driver isn't absolutely optimized for speed, that hardware can't do its thing.
- The driver writers don't have a lot of time to be fast and correct - the GeForce 700000 GTX TurboPower II with HyperCache Pro will be out 18 months from now, and they'll have to start all over again.
Application writers like myself get to outsource the lower level aspects of our rendering engine to driver writers. When a driver doesn't work right, it's frustrating, but when a driver does work right, it's doing some amazing things.
Wednesday, December 02, 2009
Basically: some parts of X-Plane take measurements of real world information and attempt to simulate them. I have previously referred to this as "reality-based" simulation (e.g. the goal is to match real world quantities).
In those situations, if you intentionally fudge the values to get a particular behavior on this particular current version of X-Plane, it's quite possible that the fudge will make things worse, not better in the future.
This came up on the dev list with the discussion of inside vs. outside lighting. X-Plane 9 gives you 3 global lights for objects in the aircraft marked "interior", but none for the exterior.
Now there is a simple trick if you want global lights on the exterior: mark your exterior fuselage as "interior" and use the lights.
The problem is: you've misled X-Plane. The sim now thinks your fuselage is part of the inside of the plane.
This might seem okay now, but in the future X-Plane's way of drawing the interior and exterior of the plane might change. If it does, the mislabeled parts could create artifacts.
So as a developer you have a trade-off:
- Tweak the sim to maximize the quality of your add-on now, but risk having to update it later.
- Use only the official capabilities of the sim now, and have your add-on work without modification later.
Sunday, November 29, 2009
But...what are the options for light on an airplane? I don't know what Javier has done in this case, but I can give you a laundry list of ways to get lighting effects into X-Plane.
Model In 3-D
To really have convincing light, the first thing you have to do is model in 3-d. There is no substitute - for lighting to look convincing, X-Plane needs to know the true shape of the exterior and interior of the plane, so that all light sources are directionally correct. X-Plane has a very large capacity for OBJ triangles, so when working in a tight space like the cockpit, use them wisely and the cockpit will look good under a range of conditions.
You can augment this with normal maps in 940. Normal maps may or may not be useful for bumpiness, but they also allow you to control the shininess on a per-pixel basis. By carefully controlling the shininess of various surfaces in synchronization with the base texture, you can get specular highlights where they are expected.
The 2-D Panel
First, if you want good lighting, you need to use panel regions. When you use a panel texture in a 3-d cockpit with ATTR_cockpit, X-Plane simply provides a texture that exactly matches the 2-d cockpit. Since the lighting on the 2-d cockpit is not directional, this is going to look wrong.
When you use ATTR_cockpit_region, X-Plane uses new next-gen lighting calculations, and builds a daytime panel texture and a separate emissive panel texture. These are combined taking into account all 3-d lighting (the sun and cockpit interior lights - see below). The result will be correct lighting in all cases.
Even if you don't need more than one region and have a simple 1024x1024 or 2048x1024 3-d panel, use ATTR_cockpit_region - you'll need it for high quality lighting.
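A minimal OBJ sketch of the switch from ATTR_cockpit to regions follows. The region coordinates and triangle counts are illustrative, and the argument order should be verified against the OBJ8 specification.

```text
# Header: declare panel region 0 covering a 2048x1024 area of the panel
# (coordinates illustrative - check the OBJ8 spec for the exact form).
COCKPIT_REGION 0 0 2048 1024

# Body: map this mesh to region 0 instead of using ATTR_cockpit.
ATTR_cockpit_region 0
TRIS 0 600
```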
The 2-d panel provides a shadow map and gray-scale illumination masks. Don't use them for 3-d work! The 2-d "global lighting" masks are designed for the 2-d case only. They are optimized to run on minimal hardware. They don't provide the fidelity for high quality 3-d lighting - they can have artifacts with overlays, there is latency in applying them, and they eat VRAM like you wouldn't believe. I strongly recommend against using them as a source of lighting for a 3-d cockpit.
To put this another way, you really want to have all global illumination effects be applied "in 3-d", so that the relative position of 3-d surfaces is taken into account. You can't do this with the 2d masks.
The 2-d panel lets you specify a lighting model for every overlay of every instrument - either:
- "Mechanical" or "Swapped" - this basically means the instrument provides no light of its own - it just reflects light from external sources.
- "Back-Lit" or "Additive" - this means the instrument has two textures. The non-lit texture reflects external light, and the lit texture glows on its own.
- "Glass" - the instrument is strictly emissive.
2-d overlays take their lighting levels from one of sixteen "instrument brightness" rheostats. You can carefully allocate these 16 rheostats to provide independent lighting for various parts of the panel.
The 3-d Cockpit
The 3-d cockpit allows you to specify 3 omni or directional lights. These can be placed anywhere in the plane, affect all interior objects, and can be tinted and controlled by any dataref. Use them carefully - what they give you is a real sense of "depth". In particular, the 3-d lights are applied after animation. If a part of the cockpit moves from outside the light to into the light, the moving mesh will correctly change illumination. This is something you cannot do with pre-baked lighting (e.g. a _LIT texture).
Finally, ATTR_light_level is the secret weapon of lighting. ATTR_light_level lets you manually control the brightness of _LIT texture for a given submesh within an OBJ. There are a lot of tricks you can do with this:
- If you know how to pre-render lighting, you can pre-render the glow from a light onto your object into your _LIT texture, and then tie the brightness of the _LIT texture to a dataref. The result will be the appearance of a glow on your 3-d mesh as the light brightens. Because the lighting effect is pre-calculated, you can render an effect that is very high quality.
- You can create back-lit instruments in 3-d and link the _LIT texture to an instrument brightness knob.
- You can create illumination effects on the aircraft fuselage and tie them to the brightness of a beacon or strobe.
- Any given triangle in your mesh can only be part of a single ATTR_light_level group. So you can't have multiple lighting effects on the same part of a mesh. Plan your mesh carefully to avoid conflicts. (Example: you can't have a glow on the tail that is white for strobes and red for beacons - you can only bake one glow into your _LIT texture.)
- ATTR_light_level is not available on the panel texture. For the panel texture, use instrument brightness to control the brightness of the various instruments.
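For example, here is a hedged OBJ fragment tying part of a fuselage mesh's _LIT brightness to the beacon. The dataref name is an assumption for illustration; any float dataref can drive the effect.

```text
# Scale this submesh's _LIT texture between 0% and 100% brightness
# based on the beacon dataref (name assumed - verify before use).
ATTR_light_level 0.0 1.0 sim/cockpit2/switches/beacon_on
TRIS 360 24
ATTR_light_level_reset
```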
Saturday, November 21, 2009
Stranger In A Strange Land
X-Plane is an OpenGL application. OpenGL (the "Open Graphics Library") is a "language" by which X-Plane can tell any OpenGL graphics card that it wants to draw some triangles.
Think of X-Plane as an American tourist in a far away land. X-Plane doesn't speak the native language of ATI-land or NVidia-Land. But if the hotel says "we speak OpenGL", then we can come visit and ask for a nice room and a good meal.
Of course, if you have ever been an American tourist (or live in a country that is sometimes infested with American tourists :-) you know that sometimes American tourists who speak only English do not get to see the very best a country has to offer. Without local knowledge, the tourist can only make generic requests and hope the results turn out okay.
An example of where this becomes a problem is full-screen anti-aliasing (FSAA). OpenGL allows an application to ask for anti-aliasing. The only parameter an OpenGL application can ask for is: how much? 2x, 4x, 8x, 16x. But as it turns out FSAA is a lot more complicated. Do we want super sampling or multisampling? Coverage Sample Anti-Aliasing? Do we want a 2-tap or 5-tap filter? Do we want temporal anti-aliasing?
As it turns out, NVidia-land is a very strange country, with many flavors of FSAA. Some are very fast and some are quite slow. And when X-Plane comes in and says "we would like 16x FSAA", there is really no guarantee whether we will get fast FSAA (for example, 16x CSAA) or something much slower (like 16x super-sampling). X-Plane is not native to NVidia-land and cannot ask the right questions.
So where do the control panels come in? Well, if X-Plane can only ask for "16x FSAA", how can NVidia give users control of the many options the card can do? The answer is: control panels. The NVidia control panel is made by NVidia - it is native to NVidia-land and knows all of the local tricks: this card has high-speed 5-tap filtering, this card does CSAA, etc.
At this point I will pass on a tip from an X-Plane user: you may be able to get significantly faster FSAA performance with an NVidia card on Windows by picking FSAA in the control panel rather than using X-Plane's settings. This is because (for some reason) X-Plane's generic OpenGL request is translated into about the slowest FSAA you can find. With the control panel you can pick something faster.
Bear in mind that only some users report this phenomenon. My NVidia card works fine even with X-Plane setting FSAA. So you may have to experiment a bit.
It Gets Weirder
When it comes to full-screen anti-aliasing, I can see why NVidia wants to expose all of their options, rather than have X-Plane simply pick one. Still, which do you think is best for X-Plane:
- Coverage Sample Anti-Aliasing
- Some Mix of the Above?
And FSAA is one of the better understood options. How about these:
- Adjust LOD Bias
- Anisotropic Filtering
- Anisotropic Filtering Optimization
How about these?
- CPU Multi Core Support
- Multi-Display/Mixed GPU Acceleration
Suffice it to say, as an applications developer, the situation is a support nightmare. Users can modify a series of settings at home that we cannot easily see, that are poorly documented, that cause performance to be very different from what we see in our labs, sometimes for the worse, sometimes for the better.
Thursday, November 19, 2009
If this discussion seems tasty but confusing, let me clarify. The issue is: which parts of a scenery system are combined when the simulator runs, and which parts must be combined ("baked") in advance.
With MSFS, you can separately install a new mesh, new land class data, new coastlines, and new orthophotos. In X-Plane, all four of those elements must be pre-combined into a single "base mesh". In X-Plane, you have to bake those elements.
This means that you can't make "just" an add-on mesh for X-Plane. You have to create an add-on that addresses elevation and land class and orthophotos and water. Third party authors are quite often not very happy about this - I don't know how many times I've seen "can I replace just the elevation and not the water" posted in forums.
But the Orbx team brings up the exact reason why I thought (five years ago when designing DSF) that requiring all of the elements would be okay: often if you replace one part of the scenery and not another, the results are inconsistent. If you move a coastline but don't adjust the mesh, you may have water climbing up a mountain. If you move a mountain but don't adjust the landuse, you may have a farm at a 45 degree angle.
Cake and Pie
Not everything in X-Plane is pre-baked. As of X-Plane 940, you must pre-bake:
- Land class
- Wide-scale orthophotos
- Terrain Mesh
- All art assets (e.g. change or add types of buildings, replace the look of a given land class for an area).
- All 3-d (forests, buildings, airports, roads*).
- Small area orthophotos
* Roads are sort of a special case in X-Plane 9: you can replace them with an overlay, but the replacement roads must be "shaped" to the underlying terrain mesh, which means they won't work well if a custom mesh is installed. This limitation in the road code dates back to X-Plane 8, when most of us had only one CPU - pre-conforming the road mesh to the terrain shape saved load time.
Monday, November 16, 2009
From my perspective as an application developer, however, the R300 has some fine print that makes it difficult to deal with:
- It features only 24-bit floating point precision (as opposed to 32-bit precision in all other shader-enabled hardware from ATI or NV). This is why the reflective water looks square and pixelated up close on these cards.
- It has a 96 instruction limit per shader (as opposed to the 1024 instruction or larger limit in all other shader-enabled hardware from ATI or NV.) X-Plane 9's current water shader is right on the bleeding edge of exceeding this limit. In fact, the water pattern is simplified for this set of GPUs to stay within the 96 instruction limit.
- Since the cards were really quite decent for their time, they are still in the field and in use.
This shows up in X-Plane as a pile of special cases...X-Plane 9 productizes 2.5 renderers:
- A no shader renderer for old GPUs and buggy drivers.
- A shader-based renderer for modern hardware.
- A special-case on the shader based GPU to meet the limits of the R300.
Saturday, November 14, 2009
(In the case of 940 - there is a big fat bug - see the end of the post.)
Here's a little bit more about what's going on under the hood.
When Austin creates a new revision of the ACF format (which happens in virtually every major patch), he handles backward compatibility with old aircraft in one of two ways:
- He sets the default value of a new setting to match the "unused" value in the old ACF file, choosing that default to reproduce the legacy behavior. This naturally initializes all newly introduced functionality to its "backward compatible default" for old airplanes.
- Where this is not possible, he writes conversion code that maps old ACF values to new ones, re-creating the old systems functionality as closely as possible. This conversion runs in two places:
- When you open the airplane in Plane-Maker.
- When you open the airplane in X-Plane.
Now about that bug...it turns out that 940 incorrectly updates 930 airplanes - the generator amperage is not correctly initialized. This is why 930 planes will run their batteries down in 940. (This bug was fixed in 941 beta 2, btw.)
What was strange was that, because of the way Plane-Maker's code was structured, this code was failing in X-Plane but succeeding in Plane-Maker. This doesn't happen very often (usually the code fails everywhere) but the result was authors noticing that their planes would start working if resaved in PM.
And that brings me back to the beginning of the post. If Plane-Maker can update the airplane but X-Plane cannot, that's a bug! Please report it as such.
I want to make sure people realize that auto-update should work, and that resaving in Plane-Maker should not be necessary. Otherwise authors will start silently resaving their airplane instead of reporting the bugs, and we'll never find them.
(Systems bugs sometimes only show up with a particular combination of systems settings. So while I do hope that we can catch all such bugs in beta, it is always possible that one peculiar model will induce a bug once the sim is released.)
Well, no NDA is necessary for that. The simple answer is: yes! (A more complete answer is: if you use the current file formats and not legacy formats from 9 years ago, then yes.) Here's a quick review of how long the various scenery and modeling file formats have been supported:
- DSF: 5 years
- OBJ8: 5 years
- OBJ7: 7 years
- ENV: 9 years
- OBJ2: 9 years
So if you model a building with an OBJ or model a terrain area with a DSF, I expect that it will work unmodified with the next major version.
Modeling Formats in Detail
X-Plane 9 supports 3 revisions of the OBJ file format (X-Plane's modeling format):
- Version 800, which is the current version. Introduced with version 8, the OBJ800 format has been extended heavily, but never incompatibly changed, so original version 800 objects still load correctly.
- Version 700, which was used with version 7.
- Version 2, which was used for most of the X-Plane 6 run.
Do not make new objects in the version 700 format! This format is obsolete, supported only for legacy purposes, and is an inferior format.
We may drop support for version 2 objects - I haven't seen user content with a version 2 object in a very long time. Version 2 objects date back to a time when every polygon was expensive, so content authored in version 2 is likely to look, well, ten years old.
If you do have version 2 content, you can use ObjConverter to convert it to version 800 format. (ObjConverter will also convert version 700 OBJs to version 800.)
DSF has been our scenery format for five years now, and will continue to be so. DSF has not had any format revisions - new features are supported by allowing DSF data to be tied to new art asset formats.
I do not know if we will support ENV in the next major version. Supporting ENV is relatively trivial in the code, but whenever there is a bug, we have to fix it in the legacy ENV code as well. ENV supports a 500m terrain mesh, which is completely obsolete by today's standard.
Do not make new content in the ENV format! Like OBJ 2 and 700, it is a legacy format for backward compatibility.
Wednesday, November 11, 2009
OpenGL and DirectX
OpenGL and DirectX (well, technically the Direct3D part of DirectX) are both:
- Specifications for how an application requests that graphics be drawn.
- Specifications for what will be drawn by a library or video card when those requests are made.
But DirectX is also something else: it specifies what the hardware must do. That's different from OpenGL. You can have an OpenGL compliant renderer that uses the CPU for everything difficult. It will be slow, but correct.
Extensions and Versions
Both DirectX and OpenGL have version increases, and each new version specifies new functionality. But OpenGL also has extensions. While normally new features come in a new spec version, OpenGL's extensions allow implementations to pick up new functionality "a la carte": OpenGL implementers can pick and choose what they add.
When Features Show Up
The trend in game GPUs has been for new features to show up in the DirectX spec first, then become available in OpenGL via an extension, and then make it into a future core OpenGL version. For example, most DirectX 10 features were available once DX10 hardware was released, in the form of extensions.
Sometimes the features flow in the opposite direction. For example, ATI's tessellation technology may make it into the DirectX 11 spec, but is already available as an OpenGL extension.
How This Affects X-Plane
To be blunt, X-Plane is not an early adopter of GPU tech. We are a small company so we have to prioritize our feature work carefully, and there's strong motivation to prioritize a feature that helps all users over a feature that helps only users with certain hardware.
So by the time we code a hardware dependent feature, the feature is usually available in OpenGL...I can't think of any cases where not using DirectX has held us up.
For performance, DirectX vs. OpenGL doesn't really matter - both provide access to the "fast path" of the hardware, the code path where the GPU runs its fastest. At that point, it's a question of hardware, not OpenGL vs. DirectX.
So X-Plane uses OpenGL, but both are fine for rendering engine development (unless you need to be cross-platform), and both provide reasonable access to video card features for X-Plane.
Tuesday, November 10, 2009
If you live in the US, you'll definitely appreciate it...the lists are funny and yet have a seed of painful truth in them.
So I decided to try to create my own lists.
I am only tangentially involved in tech support - Randy takes on most of the work with some help from Jack. Sometimes very weird reports get escalated to me. (And most of the "let the report sit for a week" comes from me not having time to dig in.)
Anyway, please take these with a grain of salt - they're meant to be funny and exaggerated. Most of our users are very, very helpful in tech support calls, despite the fact that, if you are talking to tech support, X-Plane is already hosed. And Randy puts forth some amazing acts of patience in the face of some of the requests he gets. My hope here is only to show that there are two sides to the frustration in a tech support incident, and we'll all be happier if we can see the whole picture.
Five Things You Can Do To Annoy Tech Support
1. Be As Angry as Possible
Threaten to switch to Microsoft Flight Simulator. Drop the F word a few times. KEEP CAPS LOCK DOWN FOR THE ENTIRE EMAIL. Tech support definitely responds better to users who are angrier - you don't want to get sub-standard service because you were too nice, right?
2. Omit Information
If you have a second graphics card made in Kazakhstan, over-clocked and running hacked drivers you got off of the pirate bay, don't tell us. If your computer regularly catches on fire, be sure not to mention that. Did you recompile the Linux Kernel yourself after letting your pet monkey edit the thread scheduler? It's best we not know.
Extra credit: report a truly bizarre problem, provide no details on your customized configuration, wait a week and tell us how you fixed it by removing a third party program that "enhances" sound or graphics. Priceless!
3. Don't Include Past Emails In a Thread
Be sure to delete any past information from your email. Change the subject of the email so we can't tell what the original issue was. If you have more than one email, send replies from different addresses. A perfect reply would be "That didn't work" sent from an email address that you haven't used before, without your name included.
4. Email the Last Person You Talked To.
If you just finished up sorting out a shipping problem with the shipping guy, ask him how to create a plugin. If you just got info from the developers about UDP, ask them why your credit card was charged the amount it was charged.
5. Bring Up New Issues In the Middle of Old Ones.
To do this just right, wait until the thread between you and tech support is pretty deep into the meat of a complex issue. Then throw in another paragraph about something else that's gone wrong. To perfect this technique, try to pick a new problem that the person who you are emailing with isn't equipped to handle (see point 4) and keep the report vague (see point 2). You can repeat this technique to stretch out a tech support incident indefinitely.
Five Ways Tech Support Can Annoy You
1. Make the User Reinstall the OS
Reinstalling the operating system fixes approximately 0% of user problems, but it takes a really long time, and is almost guaranteed to screw something else up, usually something that wasn't broken and isn't related to X-Plane. If a user is a little bit annoyed, this is a great way to pour gasoline on the flames.
This is really a special case of the general strategy "ask the user to do something time consuming, annoying, and unlikely to help."
2. Forward the User a Huge FAQ, None of Which is Relevant to the Problem
Everyone likes form letters and impersonal service. The FAQ should be badly written, badly formatted, confusing to read, and preferably not accidentally contain the real solution to the problem. If the solution to the problem is in the FAQ, don't tell the user where in the FAQ to look.
3. Wait a Long Time Between Replies
Tech support incidents are like fine wines - they get better with age. To allow the user's annoyance to bloom into a finely honed rage, be sure to let each email 'sit' for a week before replying. This works especially well if your response is just to ask another question, setting the user up for another week's delay.
4. Blame Some Other Component
The modern PC is built by approximately 600 different vendors. Blame one of them. The beauty of this strategy is that it is one that can be used by every vendor who provided software or hardware for the PC. Also, because quite often the problem really is with another component, you can claim this with a straight face.
Tip: blame the graphics card maker - ATI and NVidia do not have the resources to pursue every complaint that an over-clocked graphics card running the latest patch to some simulator written by two guys in their bedrooms crashed with the drivers visible somewhere in the callstack. They can't refute you, no matter how bogus your claim.
5. Forward the User's Issue Around the Company Until It Gets Lost and Dropped
Everyone in the company has to be in on this strategy for it to work - if one of your idiot coworkers actually solves the user's problem, well that defeats the purpose. This strategy can be combined with (3) and is sort of a riff on (4) - once the user complains that they got dropped, blame everyone else in the company for the mis-communication.
Monday, November 09, 2009
So as I am about to blog about all of the new and cool things coming in XSquawkBox 1.3, let me first make this painfully clear:
XSquawkBox is a piece of code that Wade and I write in our spare time. After finishing our regular job of coding, rather than sit back and read a book or take a walk in the park, we go back to our computers and write yet more code. We don't make any money off of XSB, and no one pays any money for XSB.

Okay - that was a little bit over the top. We want people to enjoy XSB - otherwise we wouldn't have published it. And 99.999% of XSB users are very understanding and appreciative of the development situation.
This type of freeware development has certain constraints: there is no timetable, and XSB will be on the back burner when real work or real life comes up. There is no support. There are no guarantees of features or functionality. If you don't like those terms, I will issue you a full refund and you can use a different piece of freeware.
With that in mind, I can't resist posting some of what is coming in XSB 1.3 - I think it's going to be a really strong release. 1.3 will have the bug fixes people are waiting for, including a fix for the dreaded noise on push-to-talk on Windows. Also coming:
- Wade programmed the server list to auto-populate off the net. How did we live without that?
- COM2 can tune into voice channels too. COM2 has its own hardware selection for those with multiple sound cards or USB headsets.
- You can turn on "labels" for traffic. I actually hate this feature, but I programmed it anyway - it's optional. I'll post why I did it some other time.
- XSB can connect to itself for users with multiple X-Plane visuals. Simply run XSB on every computer, log in from one (the "master") and then connect to the master from each visual (the "slave"). You'll have traffic visible across monitors.
When will 1.3 go beta? I don't know...ask Wade. Wait - don't ask Wade. He'll get to it when he is able!
Friday, November 06, 2009
Ignorance Is Bliss
We have tried the other approach: ignore missing art assets. ENV based scenery in version 7 did not require custom objects to actually be available - missing objects were ignored.
When I was working on the ENV reader for version 8 (the ENV code needed to be retrofit into the new rendering engine) I found to my surprise that virtually every ENV-based custom scenery pack I looked at was missing at least a few of the OBJs that the ENV referenced! I don't know how this happened - it seems that in the process of working on scenery, authors started to "lose" objects and simply never noticed.
When we developed DSF we had a chance at a clean slate: there were no DSFs in existence so we could set the rules for art assets any way we wanted. So we picked the harshest rule possible: any missing art asset was illegal and would cause X-Plane to refuse to load the scenery package, with no way to ignore the error. Why be this rude?
- Missing artwork failures are 100% reproducible - you don't have to try your package more than once to see the problem. If you are missing an art asset, you will have the failure every single time you run.
- The error is found on load - you don't have to fly over the art asset to discover that it is missing.
- Therefore if an author tests a scenery package even once, even in the most trivial way, he or she will discover the missing art asset.
- Once the error is fixed, it is fixed forever, so a scenery pack that passes this quality control measure in development will be just fine "in the wild".
- This rule has been in place since 8.0 beta 1 for DSFs, so there are no legacy DSF files that would have this problem.
There is one special case worth mentioning: a scenery pack might reference an art asset in another scenery pack, and that other scenery pack might not be installed. This is why the library file format allows for "export_backup". (Read more here and here.) Export_backup is your scenery pack's way of saying "only use this art asset if you can't find it somewhere else." It lets you provide emergency artwork in the event the other library is not installed.
What should you use as an emergency backup art asset? It could be anything - a big floating question mark, an empty object, a poor approximation of the desired art asset. But my main point is that responsibility for location of art assets lies with the author of a pack - so if you make a scenery pack, be sure to provide backups for any libraries you use.
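For illustration, a library.txt that provides a backup might look something like this - the virtual and file paths here are hypothetical, but the EXPORT and EXPORT_BACKUP directives are the real library directives:

```
A
800
LIBRARY

# Normal export: publish our own hangar under a virtual path.
EXPORT          my_company/objects/hangar.obj       objects/hangar.obj

# Backup export: use this stand-in only if no other installed
# library exports this virtual path.
EXPORT_BACKUP   other_library/objects/hangar.obj    objects/placeholder.obj
```

With this in place, X-Plane resolves the virtual path from the real library when it is installed, and quietly falls back to your placeholder when it is not.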
(If you use OpenSceneryX, the library comes with a "developer pack" - read more here. Basically they already built a "backup" library that you can put in your scenery pack to avoid nasty messages from X-Plane when OpenSceneryX isn't installed.)
Wednesday, November 04, 2009
I was able to fix a few MeshTool bugs, but I have more problem reports, so I might be able to do a MeshTool beta 3 in a few days if things go smoothly.
940 is final - there might be a 941 maintenance release - we'll know in a few weeks.
Friday, October 16, 2009
This is the Cirrus jet in X-Plane 9.22.
Pretty, eh? But...look through the right two windows - look at the passenger door on the other side. Note that through the middle window the passenger door is visible. Through the right window, the entire passenger door is gone!
This is the same shot from X-Plane 940, where the problem has been corrected.
Thursday, October 15, 2009
So first, what's so special about this? Well, if you've ever worked with a lot of translucency in X-Plane, you know that it doesn't work very well - you invariably get some surfaces that disappear at some camera angles.
The problem is that current GL rendering requires translucent surfaces to be drawn from farthest to nearest, and which surface is far and which is near changes as the camera moves. There are lots of tricks for trying to get the draw order mostly right, but in the end it's somewhere between a huge pain in the ass and impossible.
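To see concretely why order matters, here is a tiny Python sketch (not X-Plane code) of the standard alpha "over" compositing operator: blending the same two translucent surfaces in the two possible orders gives two different final colors, which is exactly why the renderer has to sort.

```python
def over(src_rgb, src_a, dst_rgb):
    # Standard "over" operator: composite a translucent src on top of dst.
    return tuple(s * src_a + d * (1.0 - src_a) for s, d in zip(src_rgb, dst_rgb))

background = (0.0, 0.0, 0.0)    # black background
red   = ((1.0, 0.0, 0.0), 0.5)  # 50% translucent red surface (farther)
green = ((0.0, 1.0, 0.0), 0.5)  # 50% translucent green surface (nearer)

# Correct back-to-front order: red (far) first, then green (near) on top.
back_to_front = over(green[0], green[1], over(red[0], red[1], background))

# Wrong order: green first, then red on top.
front_to_back = over(red[0], red[1], over(green[0], green[1], background))

print(back_to_front)  # (0.25, 0.5, 0.0)
print(front_to_back)  # (0.5, 0.25, 0.0) - a visibly different color
```

Because the operator is not commutative, every translucent triangle has to be submitted in sorted order per camera position - unless the hardware sorts per pixel, which is what OIT does.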
What's cool about the robot demo is that the graphics card draws the transparency correctly even when it is not submitted from back to front, which means the app can just shovel piles of translucent triangles into the card and let the hardware sort it out (literally).
X-Plane is currently riddled with transparency-order bugs, and the only thing we can do is burn a pile of CPU and add a ton of complexity to solve some of them partly. That proposition doesn't make me happy.
So I am keeping an eye on hardware-accelerated OIT - it's a case where a GPU feature would make it easier for modelers to create great looking content.
Sunday, October 11, 2009
This will hopefully be a lot easier than the current, convoluted WED 1.0 technique of selecting the two surrounding vertices, picking "split", and then repositioning the vertex.
Hackers: this feature is not yet checked into the tree, so ... building from source won't help you. It'll be available some time in the next week.
Saturday, October 10, 2009
With that out of the way: I have released a WorldEditor 1.1 developer preview. So I wanted to explain in a little bit more detail what the difference is between a developer preview and a beta. Here is an approximation of the standard definitions of "milestones" - they are what I use for WED.
- Development: not all features are coded, no guarantees about bugs.
- Alpha: all features are coded, no bug is so severe that you can't at least try a feature. (For example, if WED crashed on startup, it would not be alpha, because you could not test saving files.)
- Beta: all requirements of alpha, also no bugs that cause program crash or data loss.
- Release: no open bugs.
WED 1.1 is still in phase (1) - development, and the build I posted is a developer preview - a cut of whatever code I had lying around. So: I can't promise it isn't going to trash your data or crash! Be even more cautious with the developer preview than you would with a normal beta. You don't want to run a five-hour session without saving your work, and you want to be backing up your work often - the "save" command might trash your entire project.
Why do a developer preview if it's still so buggy? Some users who know how to compile WED from its source code are already using WED 1.1 and they seem to be enjoying it. So far it appears not to be lethally broken. Given that and the fact that most of the uncoded WED 1.1 features are usability and edge-cases, it seemed like the developer preview could be useful for getting earlier feedback.
One last note: the manual is not updated at all, nor is there any documentation on the new features. Let me be clear: no tech support or help is provided whatsoever. Do not email me or X-Plane tech support with "how do I use WED 1.1" questions. If you cannot figure out how to use WED 1.1 on your own, don't use the developer preview.
Wednesday, October 07, 2009
Monday, October 05, 2009
Sometimes a user will offer to work around a bug by changing an add-on, or just dropping the add-on. But...I have learned the hard way: never ignore a bug you don't understand.
First, the bug might run much deeper than the reported use. Perhaps the bug is actually affecting dozens of other add-ons.
If we don't understand the bug, how can we say "this is so unimportant that it can be ignored"?
Now some bugs, once diagnosed, may prove to be not worth fixing. But...until the bug is fully understood, we have to take the time to dig in. We can't just give up because the bug seems unimportant.
Tuesday, September 29, 2009
Freecycle is basically a series of locally based mailing lists for the free exchange of things you would have thrown out. My wife and I use it every time we move - we get packing materials from other people who have just moved, and then give them back to our new local group when we are done. We also freecycle all of "that old junk" that we realize we shouldn't move with us to the next location.
If you are ever in a situation where you need to throw out items that are potentially useful but just not worth the cost of carrying anymore, please consider Freecycle - it strikes me as a reasonably reliable way to keep items out of landfills. (Especially hard-to-recycle mixed-material items like electronics.)
One note of caution: freecycle is just a group of people exchanging "stuff" in their spare time - typically it could be 2-3 days before someone can pick up your donated item. So...if you have a lot of stuff to give away, start early. I made this mistake in DC and wasn't able to give away all of the items because people couldn't pick them up soon enough.
Sunday, September 27, 2009
This has put me a little bit behind schedule with, well, everything. I have four or so interesting example plugins almost ready to be published, WED is ready for some kind of developer preview, and I have some X-Plane 940 bug reports to go through. I hope to get the back-log cleared out this week. But first priority: unpacking the computers (and figuring out how to get an internet drop into the new home office).
Tuesday, September 22, 2009
The same thing goes for normal maps - you can't mirror a normal map without getting bogus results. Think of your normal map as a real 3-d (slightly extruded) piece of metal. If you are seeing the text go the wrong way, the metal must be facing with the exterior side pointing to the interior of the airplane.
Having your normal map flipped does more than just "inset" what should be "extruded" - it completely hoses the lighting calculations. In some cases this will be obvious (no shiny where there should be shiny) but in others you will see shine when the sun is in a slightly wrong position.
The moral of the story is: you can't recycle your textures by flipping if you want to use normal maps.
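As a minimal illustration (plain Python, with normals as (x, y, z) floats in tangent space rather than the usual 0-255 RGB encoding), here is why a plain texture flip breaks a normal map: reversing the pixel order mirrors the geometry, but each normal's X component has to be negated too, and a texture flip doesn't do that.

```python
def mirror_row(normals):
    """Mirror one row of a tangent-space normal map horizontally.

    Just reversing pixel order (what a texture flip does) leaves each
    normal's X component leaning the wrong way; a correct mirror must
    also negate X. Normals are (x, y, z) floats in [-1, 1] tangent space.
    """
    naive = [(x, y, z) for (x, y, z) in reversed(normals)]   # broken flip
    fixed = [(-x, y, z) for (x, y, z) in reversed(normals)]  # true mirror
    return naive, fixed

# A surface sloping on the left, flat on the right.
row = [(0.6, 0.0, 0.8), (0.0, 0.0, 1.0)]
naive, fixed = mirror_row(row)
print(naive)  # slope moved right, but still leans the original way - wrong lighting
print(fixed)  # slope moved right AND leans the mirrored way - correct
```

The broken version is exactly what you get by reusing a texture flipped across the aircraft centerline, which is why the shine ends up in the wrong place.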
Wednesday, September 16, 2009
First, photo-realistic scenery links the use of photographs in scenery to realism in its very name, and I don't buy it.
Yes, some photo-based scenery packages are realistic looking, by today's standards of flight simulation. Some are not. Just look at any old photo-realistic package to see what I mean...realistic is a relative term, defined by how much fidelity we expect, and that expectation has steadily gone up. Even with a modern package, a photo-based scenery pack might not be realistic if the photos are not used well.
(For example, is a package that uses orthophotos on the mesh but provides no 3-d in a city still considered realistic now? What kind of review would such a package get?)
Nor do photos have a monopoly on realism. They can look nice when well used, but I would put Sergio's custom panel work up against any photo-based panel. (Sergio does not manipulate photos for his panel; he constructs them from scratch. He has thousands of photos for reference, but the pixels you see are not originally from any photo.)
Second, the term photo-realistic (in the scenery world) is most commonly applied to scenery that applies orthophotos to the terrain mesh in a non-repeating way. But orthophoto base meshes don't have a monopoly on the use of photographs, which can be used to form land-class textures or to texture objects.
Okay, so "hate" is a strong word. But I feel some frustration whenever I see scenery discussed in terms of "photorealism".
Tuesday, September 15, 2009
In particular, one of those half-finished features not only needs to ship, but also affects the way just about every other rendering effect works. So it's better to get these features finished first, and build new effects within the context of these "new rules".
Here are some of the ideas that I've heard kicked around:
- Environment maps on the plane - I like it, it's not that hard to do, and the framerate hit could be ramped up and down with detail. See above about finishing other features first.
- Next-gen texturing on runways (wet runways, environment maps, bump mapping) - I like all of it. The runways really need to be addressed comprehensively, not piece-wise, in order to find a rendering configuration that meets our scalability and efficiency needs. (In other words, we can't just burn tons of VRAM on the runways, and we need a way to render them that works on low and high end computers with one set of art assets.)
- Normal mapping on the ground. I like it, but I wonder if this isn't part of a bigger idea: procedural texturing on the ground. E.g. if we want to add detail on the ground, can we add it with multiple layers at different resolutions, with a shader adding yet more detail?
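The "multiple layers at different resolutions" idea in the last bullet is essentially classic octave layering. Here is a hypothetical sketch in plain Python, with a sine wave standing in for a real texture lookup or noise function - a shader would do the same sum per pixel:

```python
import math

def layered_detail(x, octaves=3):
    """Sum the same periodic 'texture' at doubling frequency and halving
    amplitude. Each octave adds finer detail at lower strength - the
    multi-resolution layering idea, with sin() standing in for a texture
    fetch or noise function."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * math.sin(frequency * x)
        amplitude *= 0.5   # each layer contributes half as much...
        frequency *= 2.0   # ...at twice the spatial resolution
    return total

print(layered_detail(1.0, octaves=1))  # base layer only
print(layered_detail(1.0, octaves=3))  # base plus two finer detail layers
```

The appeal for ground textures is that the low-frequency layers can come from low-resolution art while the fine layers are cheap, repeating detail - so resolution scales without VRAM scaling with it.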
Going Crazy With Choices
Here's a straw man of why polygons might be different: draped polygons can't have specular shininess right now, and I don't think anyone is complaining. So it's a bit of a waste to use the alpha channel of a normal map for shininess. Perhaps it could be used for something else like an environment mapping parameter.
Hrm...but we would like to have environment mapping on airplane objects too someday. Well, we could go two ways with that. We could just use the alpha channel for shininess and environment mapping...not totally unreasonable but it wouldn't let us have a glossy non-reflective material, e.g. aluminum vs. shiny white paint.
Math nerds will realize that the blue channel in normal maps can be "reconstituted" from the red and green channels by the GPU (at the cost of a tiny bit of GPU power). That would give us two channels to have fun with - blue and alpha. Perhaps one could be shininess and one could be an environment mapping material.
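For the curious, the reconstruction works because tangent-space normals are unit length and point out of the surface (z >= 0), so z is fully determined by x and y. A sketch in Python (a real implementation would be one line of shader code):

```python
import math

def reconstruct_z(x, y):
    """Rebuild the blue (Z) channel of a tangent-space normal from the
    red (X) and green (Y) channels. Since the normal is unit length and
    faces outward (z >= 0), z = sqrt(1 - x^2 - y^2); the max() guards
    against tiny negative values from filtering or rounding."""
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

print(reconstruct_z(0.6, 0.0))  # ~0.8, matching the normal (0.6, 0.0, 0.8)
print(reconstruct_z(0.0, 0.0))  # 1.0, a flat straight-up normal
```

This is what frees the stored blue channel (and, with alpha, two whole channels) for other per-texel parameters.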
Well, shininess is still no good on the ground. But...perhaps that would be a good place to store dynamic snow accumulation? Hrm...
My point of this stream-of-consciousness rant is that the design of any one rendering engine feature is heavily influenced by its neighbors. We'll get all of these effects someday. If there are features that are really easy, we can get them into the sim quickly, but the only obvious one I see now is using bump maps on other OBJ-like entities (which at this point would mean facades).