Talk:Texture mapping
Texture mapping vs parametrization
The article needs a serious rewrite. There is a deep confusion between the parametrization problem and the texturing one. The code is by far unnecessary and probably makes everything less clear. I will probably remove it and rewrite some stuff in the next few days. ALoopingIcon 09:15, 7 February 2006 (UTC)
Different types of texture mapping
I think some mention should be made of popular types of texturing, such as environment reflection mapping, normal/bump mapping, etc., as well as the use of transparency in texturing.
Incomplete sentence
At the moment, this incomplete sentence is in the article and I can't figure out what it was supposed to say: "Before Descent and Duke Nukem 3D, successfully used portal rendering and arbitrary orientation of the walls." Can anyone fix it? Elf | Talk 05:02, 26 April 2006 (UTC)
What???
I removed this:
Between 1990 and 2000, various hybrid methods existed that mixed floating point and fractions and additionally fixed-point numbers, and mixed affine and perspective-correct texture-mapping. The mix basically uses perspective-correct texture-mapping on a large scale, but divides every polygon in 2D image-space into either quadrants (Terminal Velocity:8x8), small spans (Descent:4x1, Quake:16x1) or lines of constant z (Duke Nukem 3D, System Shock and Flight Unlimited). The constant z approach is known from Pseudo-3D. Pseudo-3D does not allow rotation of the camera, while Doom and Wacky Wheels restrict it to only one axis. Before Descent and Duke Nukem 3D, successfully used portal rendering and arbitrary orientation of the walls. 2D raytracing of a grid was added to the mix and called ray-casting. This was used in Wolfenstein 3D and Ultima Underworld. Demos often used static screen-to-texture look-up tables generated by a ray tracer to render and rotate simple symmetrical objects such as spheres and cylindrical tunnels.
After 2000, perspective-correct texture mapping became widely used via floating point numbers. Perspective-correct texture mapping adds complexity, which can easily be parallelized and pipelined at the cost of only silicon, and it adds one divide per pixel. In this respect, a graphics card has two advantages over a CPU. First, it can trade high throughput for low latency. Second, it often has a similar z and 1/z from a former calculation. Floating point numbers have the advantage that some of the bits belong to the exponent and only need to be added. The improvement from using long floating point numbers is immense, as rounding error causes several problems during rendering. For instance (this is not a collection of examples, but a complete list for the basic texture mapper), in the transformation stage, polygons do not stay convex and have to be split into trapezoids afterwards. In the edge interpolation, the polygons do not stay flat and back-face culling has to be repeated for every span, otherwise the renderer may crash (with long variables, this bug may take hours to show up, or even years). Also, because of rounding in the span interpolation, the texture coordinates may overflow, so a guard band and/or tiling is used.
Ray tracers are able to run in real time or at high resolution. They use barycentric coordinates, which produce holes at the vertices. But due to the high precision used in ray tracing, it is unlikely that any ray will pass through these holes.
It makes no sense to me. →AzaToth 03:23, 28 April 2006 (UTC)
I support your decision. This article is still being worked on; those things were just out of place.
- The first paragraph was de facto useless historical information. It won't apply anymore even to mobile devices.
- This whole FP blah is definitely out of place (I also agree it's quite senseless considering the topic).
- Ray-tracing considerations should not be here just because they use FP barycentric coords.
MaxDZ8 talk 06:46, 28 April 2006 (UTC)
- I added {{cleanup-rewrite}} because I think that's the reality. →AzaToth 16:04, 28 April 2006 (UTC)
- Is anyone actively working on this article? It really is one of the most important computer graphics related topics, yet it currently is in a pretty pitiful state. I'd like to pitch in, but there's so much to do, I'm not too sure where to start. How about we brainstorm an article overview, and hand out the sections? In any case, I'll see what I can do. Nezbie 04:16, 2 May 2006 (UTC)
I am not. The most evident problem is that this is now assumed to be just there, thus becoming Deep Magic. I am already having trouble working on shaders and level of detail (programming), so I'm sure I cannot handle this. This feature, however, traces back to the early ages of 3D graphics, so maybe you can find something at http://accad.osu.edu/~waynec/history/PDFs/. MaxDZ8 talk 15:47, 2 May 2006 (UTC)
Comments about the Code
The code is not really related to what texture mapping is. Texture mapping is a rasterization problem. In the code given, texture coordinates are simply set. Also, setting texture coordinates is usually NOT done in hardware, unless you're using a vertex shader. The reason I mention all of this is that it has nothing to do with being perspective-correct! Perspective-correct texturing involves correcting for the non-linear perspective transform. To correct for perspective, u and v are divided by w in the rasterizer; otherwise you are dealing with affine texturing.
I hope other people agree. I don't have time to edit the original, but I will contribute if anyone removes it and adds stubs.
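To illustrate the difference, here is a minimal sketch (hypothetical names, not the code that was in the article) of sampling one screen-space span both ways; the perspective-correct path interpolates u/w, v/w and 1/w linearly and divides once per pixel:

#include <stdio.h>

/* Hypothetical endpoint of a screen-space span: texture coordinates (u, v)
   and the homogeneous w produced by the perspective projection. */
typedef struct { float u, v, w; } SpanEnd;

static void sample(const SpanEnd *a, const SpanEnd *b, float t)
{
    /* Affine: interpolate u and v directly in screen space (cheap, distorted). */
    float u_aff = a->u + t * (b->u - a->u);
    float v_aff = a->v + t * (b->v - a->v);

    /* Perspective-correct: interpolate u/w, v/w and 1/w linearly,
       then divide once per pixel to recover u and v. */
    float uw = a->u / a->w + t * (b->u / b->w - a->u / a->w);
    float vw = a->v / a->w + t * (b->v / b->w - a->v / a->w);
    float iw = 1.0f / a->w + t * (1.0f / b->w - 1.0f / a->w);

    printf("t=%.2f affine=(%.3f, %.3f) perspective=(%.3f, %.3f)\n",
           t, u_aff, v_aff, uw / iw, vw / iw);
}

int main(void)
{
    SpanEnd a = { 0.0f, 0.0f, 1.0f };  /* near endpoint */
    SpanEnd b = { 1.0f, 1.0f, 4.0f };  /* far endpoint  */
    for (int i = 0; i <= 4; i++)
        sample(&a, &b, i / 4.0f);      /* midpoints diverge visibly */
    return 0;
}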
- When I first stumbled on this article, I was very tempted to remove the code at once. I felt this was too drastic with respect to the rest of the community, so I just stated that it merely sets texture coordinates. I support your proposal to remove the code completely.
- The actual way texture coordinates are generated is not meaningful; to the rasterizer, they're just there.
- For other readers, and to better support your proposal, I'll say that what you wrote is definitely correct.
- If someone else agrees (or no one objects for a while), I'll remove the code.
MaxDZ8 talk 07:27, 12 June 2006 (UTC)
- The description of the code is incorrect, and the code itself is incorrect as well. That perspective correction is done in the rasterizer has already been mentioned, but beyond that, this will generate a horrible seam. When you are on one side of the seam, your vertices may have a u coordinate near 1.0, but on the other side, they will have a coordinate near 0.0. Using only (x,y,z) to determine the texture will not allow you to pick either 1.0 or 0.0 for points directly on the seam so that they correspond to the other points in the current triangle, and what you will get is the entire texture appearing backwards across that single triangle. Additional information needs to be known in order to compensate for this. - Rainwarrior 19:55, 1 July 2006 (UTC)
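For readers, a tiny sketch of that ambiguity (a hypothetical helper deriving u from position alone, by angle around the y axis):

#include <math.h>

/* A vertex exactly on the seam must get u = 0.0 for triangles on one side
   and u = 1.0 for triangles on the other, but a function of position alone
   can only ever return one of the two values, so one of the triangles gets
   the whole texture wrapped backwards. */
float u_from_position(float x, float z)
{
    const float PI = 3.14159265f;
    return (atan2f(z, x) + PI) / (2.0f * PI);  /* maps angle to [0, 1] */
}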
- That's for sure. It's a well-known issue with naïve sphere mapping. It doesn't matter anyway, since the whole thing was out of place. I'm glad someone dropped the code.
- MaxDZ8 talk 09:02, 2 July 2006 (UTC)
Some rewriting July 1, 2006
I've rewritten the lead, trying to preserve the same information... though it still feels disorganized to me. I'm not even sure what to do with the "history" section... probably remove it. It appears to be trying (badly) to describe the Bresenham algorithm, but also at the same time the difference between affine and perspective-correct texture mapping (with some strange confusion between the two... why keep mentioning "fractions"?). I'm thinking the section should be removed and just replaced with more description of those two things. (I'll think about it, maybe do that edit in a few minutes.) I've also noticed that the texture filtering, bilinear interpolation, and nearest neighbor interpolation articles seem to be pretty bad as well. - Rainwarrior 20:18, 1 July 2006 (UTC)
- Okay, I've finished rewriting the information that was there. I think it needs some organization, but at least it's more accurate now, I hope. There are some details I don't have: for example, I don't know exactly when perspective-correct cards hit the market (I said "recently"; I hate being so vague). Also, there's actually no article on affine or perspective-correct texture mapping; that would probably be a good addition to this article (I wouldn't suggest making a new article for it), perhaps as its own section. - Rainwarrior 20:36, 1 July 2006 (UTC)
As far as I remember, my old Permedia 2 on a Pentium 133 was already perspective-correct. I find it hard to believe cards without this feature were ever mass-marketed anyway, so I would say this feature has been commonplace for at least 10 years. Texture projection, however (division of the texture coordinates by the .w coordinate), may be newer; I guess it was only supported from DX6 or DX7 (it was always supported by OpenGL), so that came a few years later.
Looks anything but "recent" to me.
MaxDZ8 talk 09:02, 2 July 2006 (UTC)
Just to add one thing about the division of texture coordinates by 'w': this is done in the rasterizer to perform perspective-correct texturing, and it deals with the fact that the projection is non-linear.
When performing projective texture mapping, we use homogeneous texture coordinates, or coordinates in projective space.
When performing non-projective texture mapping, we use real texture coordinates, or coordinates in real space.
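As a small illustration (a sketch with hypothetical names, not tied to any particular API): with homogeneous texture coordinates (s, t, q), the projective lookup divides by q at sampling time, while the non-projective case amounts to q == 1:

/* Homogeneous texture coordinate in projective space. */
typedef struct { float s, t, q; } TexCoord3;

/* Resolve to real (u, v) in texture space; the divide by q is the
   projective step. With q == 1 this reduces to the non-projective case. */
void resolve_texcoord(TexCoord3 tc, float *u, float *v)
{
    *u = tc.s / tc.q;
    *v = tc.t / tc.q;
}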
Software renderer
I have not looked at your (plural) ages, but you seem not to have lived in the times when home computers needed software for texture mapping. I guess everybody is happy that those times are gone and even Java on a mobile phone uses OpenGL (you tell me if it does). The reason for this section is that Wikipedia is full of articles about games from that time. Those authors have an even weaker grasp than you of how linear and perspective-correct texture mapping were intermixed. I understand that the paragraph may be too hard to understand without pictures, but who cares about Doom, Quake, or Descent anyway? Arnero 17:17, 14 April 2007 (UTC)
- I don't understand what you're talking about with regard to this article. Are you saying you'd like to see references to Doom and Quake in there? (There's one reference to Michael Abrash's discussion of perspective-correct textures in Quake.) I'm not sure exactly when perspective-correct texturing appeared in games, but it was at least as early as that. The practice is much older than its appearance in games, of course. I don't think OpenGL is really directly related to perspective-correct textures; it's just an interface to the hardware (or software), which may or may not have that kind of texturing. Yes, a picture would be good to explain perspective-correct textures. - Rainwarrior 17:42, 14 April 2007 (UTC)
Affine mapping in Doom?
According to the article, Doom has to have walls perfectly vertical and floors perfectly flat because it uses affine texture mapping. The way I understand it, Doom uses a raycasting algorithm that has nothing to do with the scanline / polygon methods referred to in the article. Also, I've played Doom and seen screenshots and it doesn't seem to have any of the artifacts of affine texture mapping. So is this caption correct? 136.176.8.18 15:05, 4 September 2007 (UTC)
- The caption is correct, and hopefully it explains why Doom doesn't have affine artifacts (maybe it's inadequate); it's only half affine. For a horizontal surface, the perspective correction is done only vertically, and then each horizontal span is rendered in an affine way; because a screen-horizontal span of pixels on a world-horizontal surface all has the same depth, there is no difference between affine and perspective-correct for that single span (so affine is used). Vice versa for the vertical surfaces, which are rendered in vertical spans (and note that wall textures are stored by column to facilitate this). Secondly, while Doom does do a little raycasting, most of the work is done by a BSP tree (unlike in Wolfenstein). Yes, there are other reasons why the walls/floors had to be axis-aligned, but affine texture mapping is the major one (a big performance boost, because it's the inner-inner loop of the rendering process). - Rainwarrior 16:31, 4 September 2007 (UTC)
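To spell out the constant-depth argument with the standard formula: along a span, the perspective-correct coordinate is

$$u(t) \;=\; \frac{(1-t)\,u_0/w_0 \;+\; t\,u_1/w_1}{(1-t)/w_0 \;+\; t/w_1},$$

and if the depth is constant along the span ($w_0 = w_1 = w$) this collapses to

$$u(t) \;=\; \frac{\bigl((1-t)\,u_0 + t\,u_1\bigr)/w}{1/w} \;=\; (1-t)\,u_0 + t\,u_1,$$

i.e. exactly the affine interpolation, so the per-pixel divide is unnecessary on such spans.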
- It's not correct to say Doom had exactly horizontal walls because of affine texture mapping. Doom used raycasting for rendering, and this technique uses a bitmap format for levels; that's why you just have horizontal walls and no ramps. BTW, that's the reason the doors open by sliding. —Preceding unsigned comment added by 84.177.10.8 (talk) 02:00, 15 September 2007 (UTC)
- As I said, there are multiple reasons for aligning walls and floors to axes, but the ability to use affine texturing to render the vertical or horizontal spans is definitely something that benefits from this choice. Raycasting isn't something that is technically limited to axis-aligned objects (and the BSP tree really helps speed it up), and it doesn't comprise the bulk of the computation in the Doom engine. The bitmap format used for the walls and floors was specifically chosen to facilitate the rendering of vertical and horizontal spans; axis-alignment is the reason for the format, not the other way around. - Rainwarrior 02:59, 15 September 2007 (UTC)
- BSP trees are not a rendering method, raycasting is the method it uses, and because of the assumptions it makes and the way maps are stored, the walls are all vertical. It doesn't use polygons the way modern games do, so I didn't think that affine mapping as described by the article and the picture made any sense when applied to Doom, a much older game with a completely different rendering algorithm. 136.176.19.42 03:27, 3 October 2007 (UTC)
- Whether you're rendering a triangle or a span, the same problems with perspective apply to textures. Doom avoids the expense of perspective correction by using axis-aligned spans. Doom was right in the middle of the period when the difference between affine and perspective-correct texture mapping really mattered (i.e. there wasn't hardware ready to do it for you), and I think it is a very appropriate example. I didn't write the caption on the image, though; maybe the word "scanline" is misleading, since it never really renders an entire scanline. I'll change it to use the word "span", which is more in line with existing literature on Doom, and the source itself. - Rainwarrior 04:12, 3 October 2007 (UTC)
- Doom doesn't use raycasting for rendering. Wolfenstein 3D used raycasting, but Doom didn't. Doom renders walls in order from front to back from the BSP. Leem02 (talk) 08:53, 8 September 2018 (UTC)
The Doom screenshot has the description "[d]oom renders vertical spans (walls) with perspective-correct texture mapping", which is not correct. Doom uses linear (affine) interpolation to map textures to vertical lines. The reason it looks perspective-correct is that all walls are vertical and there is no depth change when mapping the texture. — Preceding unsigned comment added by Neurosys (talk • contribs) 00:03, 26 March 2017 (UTC)
Link to the same page!
The article currently links to itself (first sentence, "surface texture"). The link should either be removed or a real page only about textures should be created. --78.56.57.236 (talk) 19:10, 22 February 2008 (UTC)
It's probably the result of a poorly implemented merge. Removed.
MaxDZ8 talk 09:16, 23 February 2008 (UTC)
Perspective correct math simplification
My math skills are rusty, but doesn't the complicated equation simplify to interpolate(u)/interpolate(z)? Surely there is a less convoluted way to express this in the text than the given equation? --Henke37 (talk) 19:12, 20 July 2014 (UTC)
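For reference, a sketch of the standard perspective-correct form (assuming the article's equation is the usual one, with $z$ the depth before projection): at screen-space parameter $\alpha$,

$$u_\alpha \;=\; \frac{(1-\alpha)\,u_0/z_0 \;+\; \alpha\,u_1/z_1}{(1-\alpha)/z_0 \;+\; \alpha/z_1},$$

so it simplifies to interpolate(u/z) / interpolate(1/z), not interpolate(u)/interpolate(z); the reciprocals are what make the per-pixel divide unavoidable.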
Maybe this article should be split up into smaller pieces? Texture mapping is more abstract than realtime rendering:
'Texture map', in turn linking to S3TC, procedural texture, and bitmap images. This could also link to 3D sculpting/painting software, and mention the use of texture maps for other surface properties (light maps, specular maps), as well as mipmaps, swizzling, tiling, clip maps, and VRAM.
'Texture map rendering' - describing the technicalities of realtime rasterisation of texture maps: the evolution from 'affine texture mappers' (e.g. the PS1) to 'perspective correction', and the various approximation approaches possible.
The main article should describe the overall concept, link to 3D modelling packages & concepts and to the above, and cover methods of UV editing, cylindrical mapping unwraps, etc.
However, as I suggest this, I'm reminded of how I run into 'notability' issues. I personally prefer the idea of smaller articles because they link concepts more accurately, aiding discovery (and future use as an AI resource). Fmadd (talk) 12:11, 4 June 2016 (UTC)
split the article?
Perhaps the article would be better split into three? (I've tried to re-arrange it into sections that would 'map' onto them.)
- Texture map: the resource used in texture mapping; details: formats, memory ordering, mipmaps, how they are used as resources in APIs. Also link to 'tile maps'/character-map graphics, use in GPGPU to approximate 1D, 2D, 3D functions through lookup, and 'used in multiple layers by materials'. The article Texture atlas and possibly texture compression (both stub class) could be merged here, plus render-to-texture.
- Texture mapping: the overall idea, used in 3D modelling & rendering, including assignment ('mappings', e.g. cylindrical, projection, UV unwrap, tiling, mirroring). Mention control by vertices, the relation to a Material, and the history. Also rendermapping/baking and UV painting. The existing article UV mapping is quite close to what this should be; I would suggest merging this split with that.
- Texture mapped rendering: the history & technicalities of software & hardware texture mappers. History: flight sims, CAD, SGI, games, modern GPUs. Affine, perspective-correct, forward (Saturn)/backward; Doom-like limited axes; surface caching. Overview of the modern shader pipeline.
As I make this suggestion, I'm reminded how I often run into 'notability guidelines'. However, smaller articles are IMO more manageable (the wording and ordering of a large article can become incoherent) and they flow better through more precise links - improving discoverability and increasing Wikipedia's value as an AI resource. Fmadd (talk) 12:26, 4 June 2016 (UTC)
OK, I found a better solution: adding a computer graphics glossary, which might allow streamlining this article with simple links to definitions instead of needing so many sections here. Fmadd (talk) 13:11, 5 June 2016 (UTC)
Information related to hardware implementations of texture mapping and rasterization
Posting here at Fmadd's suggestion. Prefacing this with an "I'm new, please forgive my mistakes and provide some guidance" in case I do this wrong.
The following patent, https://www.google.com/patents/US6002738, describes a method and apparatus to perform both tomographic reconstructions and volume rendering using texture mapping within the same device. Essentially, the invention interprets tomographic data from something like a CT scan and processes and displays the resulting information using texture mapping techniques and hardware. Quote: "The mathematical and algorithmic similarity of volume rendering and backprojection, when reformulated in terms of texture mapping and accumulation, is significant. It means that a single high performance computer graphics and imaging computer can be used to both render and reconstruct volumes at rates of 100 to 1000 times faster than CPU based techniques. Additionally, a high performance computer graphics and imaging computer is an order of magnitude less expensive than a conventional CT system."
I also have access to one of the authors of the following document: http://dl.acm.org/citation.cfm?id=134071, which describes shadows and lighting effects using texture mapping.
I believe I can also obtain physical copies of the SIGGRAPH proceedings from 1991 through 1999 at least, which I can use to add citations to multiple computer graphics pages. Ignus3 (talk) 19:31, 13 June 2016 (UTC)
- Interesting. In the 1995-2000 era, prior to shaders, various techniques appeared trying to leverage texture-mapping hardware 'for other purposes'. I suppose what you're talking about here would easily justify a new section, "applications of texture mapping hardware"/"variations"... not sure exactly how. Maybe what you're talking about even deserves its own article (which would link to tomography, texture mapping, and volume rendering), or a section in another article like volume rendering. IMO, add the info and see how it goes. (When I read your references, I was also hoping you'd have good suggestions for adding citations to a load of *uncited* information that I've brain-dumped here while restructuring the article.) Fmadd (talk) 20:32, 13 June 2016 (UTC)
- Hi Fmadd, the above actually was my attempt to do just that. I'm pretty unschooled in the technology of 3D graphics, so I've been mostly going off word association. I had also been staring at the screen so long my vision was going loopy, so I posted the info here first :) As to your suggestion regarding tomographic reconstruction using texture mapping, I think that info might fit best in a discussion of CT or CAT scan machines (or their technology), as it references a device that uses a single processor for what was, up until that time, two processes. I also did a ton of copy editing of this page, as that's something I feel more confident with than computer graphics tech. Ignus3 (talk) 22:49, 14 June 2016 (UTC)
Baking
The subsection does not define what exactly baking is. Maybe somebody can help with that. — Preceding unsigned comment added by 148.225.71.160 (talk) 19:41, 29 March 2017 (UTC)
- Texture baking is described in this section. Jarble (talk) 19:41, 29 December 2020 (UTC)
Doom caption misleading
FTFA: Doom engine renders vertical and horizontal spans with affine texture mapping, and is therefore unable to draw ramped floors or slanted walls.
Unless you know precisely how the engine works, this is very misleading. The texture mapping in Doom is only affine along directions orthogonal to the viewing direction, which are also the only directions where it makes no difference.
Also, while this explains why the use of affine mapping in those directions doesn't affect the result, it doesn't explain why the engine cannot render slanted floors and walls; that is due to other engine limitations. Tomb Raider, for example, could render slanted floors, although both its floors (even flat ones) and walls were affected by the display problems caused by affine texture mapping. — Preceding unsigned comment added by 77.61.180.106 (talk) 00:13, 12 December 2021 (UTC)