Talk:CPU cache/Archive 1
This is an archive of past discussions about CPU cache. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Trace cache history
A discussion of "first proposed" should take into account Alex Peleg and Uri Weiser, "Dynamic flow instruction cache memory organized around trace segments independent of virtual address line," US Patent 5,381,533 (filed in 1994 as a continuation of an application filed in 1992; granted to Intel in 1995).--M.smotherman 13:42, 23 June 2006 (UTC)
Clarification for increasing of associativity vs increasing cache size
This sentence is not clear: "The rule of thumb is that doubling the associativity has about the same effect on hit rate as doubling the cache size, from 1-way (direct mapped) to 4-way." Is the associativity doubling from 1-way to 4-way? Isn't that quadrupling? Does the same apply when doubling from 4-way to 8-way? Besides clarification, I think this deserves further explanation, perhaps with examples, e.g. cache sizes and associativities for Athlon vs P6, P4 and Core, etc.
- Attempted a fix. Please let me know if it's better/understandable now. Iain McClatchie 05:32, 4 July 2006 (UTC)
does address translation really belong here?
It seems to me that much of this section should be moved to the virtual memory article (or removed, if it is redundant) --CTho 01:34, 23 December 2005 (UTC)
- Perhaps the design section of this article is not filled out enough. Address translation fundamentally affects cache design: virtual vs physical tagging/indexing, virtual hints, and virtual aliasing can only be explained in the context of address translation.
- As a separate issue, address translation is performed by TLBs. Many common implementations of TLBs are, in a broad but useful sense, caches of the page tables in memory. I think this is a useful similarity to present. Iain McClatchie 05:36, 4 July 2006 (UTC)
Incomprehensible
This part is incomprehensible (to me):
- Implementation
- Because cache reads are the most common operation that take more than a single cycle, the recurrence from a load instruction to an instruction dependent on that load tends to be the most critical path in well-designed processors, so that data on this path wastes the least amount of time waiting for the clock. As a result, the level-1 cache is the most latency sensitive block on the chip.
--145.97.222.38 14:54, 7 Jan 2005 (UTC)
- I think the point being made is that most ALU operations complete in one clock cycle and are probably the most common instructions. Beyond ALU operations and JMPs, reading/writing memory is probably the next most common/useful operation. When doing a bunch of memory reads, the CPU will probably fill the cache with what's in RAM at those locations, so most of your memory reads are going to be from cache. So the cache reads are extremely common, but take multiple clock cycles because they may need to transparently read from RAM and fill into cache, and from there it may still take multiple clock cycles just to read data from cache. The more clocks it takes to read from cache, the slower you are going to operate on your data. Since a majority of memory reads are actually from cache, the time required to read from cache has more impact than how quickly cache can fill from RAM. Also, if it takes an extra clock cycle to read from cache, any typical operation on data will require an additional clock cycle. Rmcii 02:37, 5 May 2006 (UTC)
- I wrote this paragraph, and I've just cut out most of it. I was attempting to convey too much insight, and the troubles were many: data caches are often NOT the critical path, due to all sorts of practical difficulties; understanding this paragraph required some understanding of synchronous systems; and finally, it was only really necessary to motivate the following description of the implementation. So I just said that folks try hard to make caches go fast, and left it at that. Iain McClatchie 06:06, 4 July 2006 (UTC)
This section also confuses me:
- The diagram to the right shows two memories. Each location in each memory has a datum (a cache line), which in different designs ranges in size from 8 to 512 bytes. The size of the cache line is usually larger than the size of the usual access, which ranges from 1 to 16 bytes.
"Each location" has a "datum" == "cache line" == "between 8 and 512 bytes in size"? And between the CPU, the cache, the main memory, and all the kinds of things they contain, what exactly does a "usual access" mean? --Piet Delport 11:22, 11 April 2006 (UTC)
- I think the point being made is that most caches allow storage of more than the typical datum size. In x86, memory is accessed 1 byte at a time, and there's support for up to 8-byte (qword) registers (expanded by MMX/SSE), so you're not likely to find a cache with a line size on the order of 8 bytes. I think DDR/DDR2 supports streaming an entire row to the memory controller. If you're going to stream a row, you need a cache large enough to hold it or else there's no benefit from its use. Rmcii 02:37, 5 May 2006 (UTC)
- The "usual access" is the usual access from a CPU instruction. On a 32-bit CPU, this is usually a 32-bit access, but sometimes it's 64 or 128 bits. It is very unusual for a cache request to be larger than that. CPUs may have various bus widths throughout the design which have little relationship to the size of these accesses. I've updated the article, please let me know if it's more understandable. Iain McClatchie 06:06, 4 July 2006 (UTC)
By the way, I'd just like to point out to anyone frustrated by the article that these two feedback comments were quite valuable to me. Once resolved, I think they will have helped improve the clarity of the article, and I appreciate that. Iain McClatchie 06:09, 4 July 2006 (UTC)
Working sets
The phrase "working set" doesn't appear in this article at all, which I think is a fairly major omission. I must sleep now, or else I'd add it right now. A quick Google search shows that most people consider a "working set" to refer to memory pages, but my understanding is that the concept also applies to cache lines. --Doradus 05:26, Jan 7, 2005 (UTC)
I'd like to stay away from adding "working sets" into this article.
Working sets are generally attributed to the use of main memory by processes in a multiprocessing virtual memory system. The set size matters because the operating system can allocate more memory to one process and less to another. There is some similarity to the hit rates versus size that characterize caches, but folks have found hit rate rather than working set to be a more useful concept for hardware caches with fixed sizes. Iain McClatchie 06:13, 4 July 2006 (UTC)
K8 Caching diagram
The diagram of the K8 cache hierarchy is misleading. While the TLBs are caches, they are not filled from "normal" memory as the icache and dcache are, but are filled by the OS from page tables. Dyl 07:40, Dec 24, 2004 (UTC)
I was under the impression the P5, P6, P4, K7, and K8 all have hardware page table walkers. Is this not correct?
Also, can you be more specific about how the diagrams are misleading? The icache and dcache cache main memory, and the TLBs cache the page tables (which are in main memory). If the TLBs are not filled by a hardware table walker, then I agree there should be some distinction made between the hardware and software fill paths on the diagram. Iain McClatchie 09:09, 25 Dec 2004 (UTC)
I believe all x86 implementations have hardware page table walkers. It's generally a per-instruction-set type of thing rather than per-implementation, since it has software consequences. --CTho 01:24, 23 December 2005 (UTC)
- Can you elaborate? I don't understand. --Patrik Hägglund 07:33, 21 September 2006 (UTC)
- If you don't provide hardware table-walk, then you need some kind of special instructions in your instruction set that enable management of the TLB. However, the two are not orthogonal: the Power Architecture (and I'm sure many others) provides TLB management instructions for those that use software tablewalk (SWTW), even though many implementations have supported hardware tablewalk (HWTW) also. Some OS people like HWTW because it's fast, but others don't like it because it constrains how they manage virtual memory -- they can't provide their own preferred TLB replacement algorithms, or have page table entries with extra information, because those details are constrained by the HWTW algorithm. BGrayson 15:49, 27 April 2007 (UTC)
I think that the K8 diagram and its text were a very informative example. Thanks! However, I want to know more about, for example, how the load-store unit is connected to the caches. In AMD's "BIOS and Kernel Developer's Guide", section 10.2.1.2, Miss Address Buffers (MABs) and the Page Directory Cache (PDC) are mentioned. How do they fit into the picture?
How are the L1 and L2 caches indexed and tagged? Reading the text about address translation, I assume that the L1 caches use virtual indexing and physical tagging with vhints, and that the L2 cache uses physical indexing and physical tagging. Is that correct? --Patrik Hägglund 07:33, 21 September 2006 (UTC)
- In general, not all L1 and L2 caches are virtually-indexed -- this article seems too slanted in that direction. For example, Freescale and IBM PowerPC chips have always used physically indexed and physically tagged L1 caches, without sacrificing L1 cache latency. BGrayson
Better organization?
Just an opinion, but this article might benefit from a reorganization along these lines:
- Intro
- Why cache is necessary/important
- How cache works
- Design - I think this is the most important reorganization, since the concept of cache design is somewhat convoluted in this article: it should clearly state 1. why a design choice/option exists, 2. what problem the design choice addresses, and 3. how the choice solves that problem
- How researchers analyze cache performance
- Areas where current research is focused on, in regards to cache design
- Implementation (how design concepts are implemented on CPUs) - stuff like address translation should go here, imo
- History (discusses how cache design has evolved along with CPU development)--Confuzion 02:02, 7 Jan 2005 (UTC)
Organize this AND dumb the wording down please! Not everyone knows all the terminology of processors.
It strikes me that this page is a victim of its own success. Not all of this is specific to the CPU cache and much text duplicates the Cache page. I understand the need for completeness and narrative, but this page could benevolently improve the cache page yet retain narrative and be more specific to what it is. Stuff like address translation would then work well here, as that is quite specific to CPU caches. Notwithstanding of course that address translation has a disambiguation page that does not reference address translation in the context of CPU caches. 64.157.7.133 22:16, 7 May 2007 (UTC)
Pronunciation
Could we have a guide to pronunciation? I have heard it pronounced "cash", "catch" and "cashay" before; which one is right? —Preceding unsigned comment added by 81.179.78.222 (talk) 20:35, 8 September 2007 (UTC)
- The main article (Cache) contains a transcription of this word, so I don't think it's necessary to include it in this article. If you are really interested, you can read and listen to the pronunciation of this word on en.wiktionary.org, i.e. here. Dan Kruchinin 03:13, 21 October 2007 (UTC)
Recent edit
This edit http://en.wikipedia.org/w/index.php?title=CPU_cache&curid=849181&diff=169828857&oldid=169769446 seems generally good, but I don't like "the more economically viable solution has been found: ", because von Neumann's original paper proposed a hierarchy of memories. The solution was "found" before the first machine was even built. --CTho 13:26, 7 November 2007 (UTC)
- Yes, my bad, please WP:SOFIXIT next time. --Kubanczyk 15:05, 7 November 2007 (UTC)
History. 1970 vs 1980
In the history section I pointed out that the performance gap between processor and memory has been growing since 1980. But in this edit that year was changed to 1970. When I wrote about it I used "Computer Architecture: A Quantitative Approach" (ISBN 1-558-60596-7) by John L. Hennessy as a source of information. On page 289 he says a bit about cache history. There he writes that 1980 was the starting point of the growing processor-memory performance gap. The same information can also be found here in "The Processor-Memory performance gap" section. Dan Kruchinin 03:45, 8 November 2007 (UTC)
- Yes, my bad, please WP:SOFIXIT and provide those refs in the normal way :)) --Kubanczyk 08:09, 8 November 2007 (UTC)
Request for references
Hi, I am working to encourage implementation of the goals of the Wikipedia:Verifiability policy. Part of that is to make sure articles cite their sources. This is particularly important for featured articles, since they are a prominent part of Wikipedia. The Fact and Reference Check Project has more information. Thank you, and please leave me a message when a few references have been added to the article. - Taxman 19:31, Apr 22, 2005 (UTC)
Is it ok to use references like http://portal.acm.org/citation.cfm?id=224437 which require accounts to access them? --CTho 01:29, 23 December 2005 (UTC)
- Yes, it *is* OK to use inline references that require paid accounts to access. The WP:EL#Sites_requiring_registration guideline clearly states "A site that requires registration or a subscription should not be linked unless the web site itself is the topic of the article or is being used as an inline reference."
- If you got information from it, WP:SAYWHEREYOUGOTIT -- it doesn't matter if other people can get it for free or if it requires a paid account to access it. --68.0.124.33 (talk) 02:05, 26 April 2009 (UTC)
- I found the following articles which may benefit the authors of this article:
- Whetham, Benjamin. (5/9/00), "Theories about modern cpu cache". Overclockers.com Retrieved: 31st May 2007 From: http://www.overclockers.com/articles139/
- The Computer Language Co. Inc., (1999), "Cache". Techweb.com Retrieved: 31st May 2007 From: http://www.techweb.com/encyclopedia/imageFriendly.jhtml?term=cache
- Alan Jay Smith. (August, 1987). "Design of cpu cache memories". Retrieved: 31st May 2007 From: http://digitalassets.lib.berkeley.edu/techreports/ucb/text/CSD-87-357.pdf
- Jupitermedia. (16/09/04). "Cache". HardwareCentral. Retrieved: 31st May 2007 From: http://systems.webopedia.com/TERM/c/cache.html
- PantherProducts. (2006). "Central processing unit cache memory". Retrieved: 31st May 2007 From: http://www.pantherproducts.co.uk/Articles/CPU/CPU%20Cache.shtml
Victim cache section seems wrong
There were some questionable claims in the victim cache section. I've fixed some of them, but it needs some additional work.
I'll try to read some papers and add some information, but that will likely take some time. In the meantime, if there are some experts who know this well enough, please contribute.
Pramod 10:28, 27 December 2008 (UTC) —Preceding unsigned comment added by Pramod.s (talk • contribs)
Note: it has been observed that a faulty L2 cache will prevent Windows XP systems from booting unless the cache is manually disabled from the BIOS. Doing so, however, will severely reduce overall system performance.--89.147.67.118 (talk) 14:19, 12 July 2009 (UTC)
ways and sets
In the example describing ways and sets, the number of ways and sets is the same. This might lead one to believe that ways and sets are the same things, which I think is wrong. Something needs to be done to clarify the difference between a way and a set. —Preceding unsigned comment added by Skysong263 (talk • contribs) 02:34, 2 January 2010 (UTC)
Inclusion property
We need two expressions for inclusive cache hierarchies because implementations do not necessarily enforce the inclusion property. IIRC x86 implementations generally do not. When the contents of L1 are not guaranteed to be backed by L2, L2 snoop misses do not imply L1 misses, even though the hierarchy is generally labeled as inclusive. Guaranteeing inclusion, however, may have adverse effects on associativity: backing two n-way L1 caches by a direct-mapped L2 cache (Alpha EV6?) significantly restricts L1 associativity.
Why is that? i.e. Why does it significantly restrict L1 associativity? Isn't that only if the L2 is small? —Preceding unsigned comment added by 71.198.7.54 (talk) 07:18, 27 February 2010 (UTC)
A.kaiser 09:31, 25 Sep 2004 (UTC)
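- A hedged worked example (hypothetical sizes): suppose an inclusive, direct-mapped 256 KiB L2 backs an 8 KiB 2-way L1. Two physical lines whose addresses collide in the direct-mapped L2 (same L2 index) can never both be resident in L2, and inclusion then forbids them from both being resident in L1 either, even though the L1's two ways could otherwise hold both. For such colliding addresses the L1 effectively behaves as direct-mapped. A larger L2 makes collisions rarer but cannot eliminate them; the restriction comes from the L2's associativity, not just its size.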
The K7 and K8 L2/L1 designs obviously are not inclusive, but rather exclusive. My current understanding is that the P3 and P4 designs are inclusive, so that bus snoops check only the L2 tag. Can you point to any evidence to the contrary?
The adverse effect on associativity from the inclusion guarantee is an excellent point and should be added to the page somewhere.
Iain McClatchie 07:01, 26 Sep 2004 (UTC)
Intel Optimization guide on P-M and both P4s: "Levels in the cache hierarchy are not inclusive. The fact that a line is in level i does not imply that it is also in level i+1."
Since the P-M shares much of its microarchitecture with the P3, I expect the P3 to be similar.
A.kaiser 12:34, 26 Sep 2004 (UTC)
That's good evidence. I'll go think about what that means and how to talk about it. Unless you'd like to hack the article, in which case, please go ahead. I might get to it in a week or so if you don't.
It does seem like the right taxonomy isn't just exclusive vs. inclusive, with inclusive broken into "really inclusive" and "not actually inclusive". I think I'm seeing three completely different categories: inclusive, exclusive, and "serial". I'm making up that last name, because I don't know what it's formally called in the literature.
Iain McClatchie 19:51, 27 Sep 2004 (UTC)
It looks like the article still doesn't explain this well. In my experience, you have at least three different kinds of inclusivity possibilities:
- inclusive: when a higher-level cache (L1) allocates, you allocate also. When you evict, you also back-invalidate the higher-level cache. No special requirements for when the higher level evicts, or when you allocate of your own volition.
- exclusive: when a higher-level cache allocates, you evict (possibly allocating in its spot what the higher level evicted). When you allocate, you back-invalidate the higher-level cache. No special requirements on when the higher level evicts (although you could choose to allocate), or when you evict of your own volition.
- pseudo-inclusive (I made that up a few years ago, but don't think it's gotten very widespread use anywhere): this is what is done on the L2 caches that I am most familiar with (Freescale/Motorola): when a higher-level cache allocates, you allocate as well. All other actions do not have strict requirements (L1 evict, L2 alloc, L2 evict). In particular, when you evict, you do not back-invalidate (this is the main difference from true inclusive). You start out as inclusive, but don't maintain inclusivity via back-invalidates. This allows you to make better use of your L2 cache (especially if it has a smaller associativity than your L1s, or has very different set size), at the expense of requiring all snoops to go to the L2 and the L1. It also means that on a non-dirty L1 eviction, you don't need to explicitly cast out to the L2 -- it likely still has a copy from when you allocated.
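- A minimal C sketch of those three policies as described above, from the L2's point of view (all names and helpers are hypothetical; real controllers do this in hardware):

```c
#include <stdio.h>

typedef enum { INCLUSIVE, EXCLUSIVE, PSEUDO_INCLUSIVE } policy_t;

/* Hypothetical stand-ins for the hardware actions. */
static void l2_allocate(unsigned long line)        { printf("L2 alloc %#lx\n", line); }
static void l2_invalidate(unsigned long line)      { printf("L2 inval %#lx\n", line); }
static void l1_back_invalidate(unsigned long line) { printf("L1 back-inval %#lx\n", line); }

/* What the L2 does when the L1 above it allocates a line. */
void on_l1_allocate(policy_t p, unsigned long line) {
    if (p == INCLUSIVE || p == PSEUDO_INCLUSIVE)
        l2_allocate(line);    /* keep a copy alongside the L1 */
    else /* EXCLUSIVE */
        l2_invalidate(line);  /* ensure at most one level holds the line */
}

/* What the L2 does when it evicts one of its own lines. */
void on_l2_evict(policy_t p, unsigned long line) {
    if (p == INCLUSIVE)
        l1_back_invalidate(line);  /* re-establish strict inclusion */
    /* EXCLUSIVE, PSEUDO_INCLUSIVE: nothing. Pseudo-inclusive therefore
     * lets L1/L2 contents drift apart, so snoops must probe both levels. */
}
```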
There are likely many possibilities between full inclusive and full exclusive, but most of Freescale's L2 caches have been what I'm calling pseudo-inclusive.
Actually, now that I think about it, the MPC7400 was a true victim L2: when the L1 evicts, allocate. That's it. The MPC7450/e600 and e500 are pseudo-inclusive: when the L1 allocates, you allocate. That's it.
Now I'm wondering if we could merge the concept of victim caches with inclusion, and show that victim caches and inclusion properties are special cases of the more general allocation/eviction policy as it concerns two or more levels of caching. That would be more of a sweeping change....
I can try to write up something about this a little more carefully than the above, if y'all think it's a Good Idea. I'd definitely want feedback before I drastically alter the page.
BGrayson 14:50, 27 April 2007 (UTC)
AMD Centric Article
I came here looking into why the Intel Core 2 Duo chips, with 64k L1 cache per core and 4 MB shared L2 cache, work so well compared to an AMD K8 with 1 MB L1 cache. However, there is very little Intel-based information here. Any chance the knowledgeable can make this article less AMD-centric? --Mgillespie 10:43, 7 August 2006 (UTC)
- The AMD K8 has 128 kB L1 cache and 1 MB L2 cache (maximum). -- Darklock (talk) 02:36, 16 March 2008 (UTC)
- There is a reference to the Intel Pro; this article is not AMD-centric, and it is not desirable to make it more Intel-centric. Other pages are dedicated to specific architecture implementations.
- However the page needs more architecture references to illustrate the concepts. It would also need to better deal with real-time and embedded constraints. Market1G (talk) 17:39, 6 April 2010 (UTC)
Technical sections
I am having a hard time understanding the Structure and Associativity sections.
Structure is overly detailed. I'm skeptical of its general application to cache architectures. I'm tempted to delete the section.
Associativity launches into an explanation of how associativity works without first explaining what associativity is. I'm not familiar enough with the concept to write an introductory paragraph myself. --Kvng (talk) 19:25, 29 March 2010 (UTC)
- You are right that this needs a better explanation and introduction, but please do not delete it.
- Also, the link for reference [2] is broken, as is the last paragraph of the Associativity section (missing text).
Market1G (talk) 19:50, 6 April 2010 (UTC)
- The section on Associativity was mangled in this edit. — Aluvus t/c 00:28, 7 April 2010 (UTC)
low power cache
As far as I know, the people designing caches generally ignored the amount of energy consumed by the cache until fairly recently. And so it is understandable that, until recently, this article has said nothing about low power cache.
I think this article should say something about current research in CPU caches. In particular, I think this article should say something about research on low power caches.
I attempted to add a couple of sentences about research on low power caches, but they were deleted a minute later. --68.0.124.33 (talk) 06:21, 13 December 2009 (UTC)
I'm reverting that delete. I hope this doesn't ignite a huge edit war. Feel free to replace my text with a better description of current research in CPU caches. --68.0.124.33 (talk) 03:06, 30 December 2009 (UTC)
- In fact, it would be worth having a special page on optimizing CPU power consumption! On the one hand fast caches consume a lot of power, but on the other hand the memory hierarchy and cache efficiency drastically reduce power for a given CPU throughput. Market1G (talk) 20:03, 6 April 2010 (UTC)
- Yes, I would like an article dedicated to techniques for designing CPUs with improved (reduced) CPU power consumption.
- Since Google pointed out the importance of power consumption in their servers, I think that article should not be limited to CPUs for laptops.
- There are already a few articles that briefly mention in passing part of the information such an article would have: CPU design, low-power electronics, performance per watt, power management, CPU power dissipation, and this CPU cache article.
- Is it possible to piece together a first rough draft entirely from the information in those articles? --68.0.124.33 (talk) 05:07, 7 September 2010 (UTC)
Details of operation to be clarified
- >> If data are written to the cache, they must at some point be written to main memory as well.
- >> The timing of this write is controlled by what is known as the write policy.
- >> In a write-through cache, every write to the cache causes a write to main memory.
- >> Alternatively, in a write-back or copy-back cache, writes are not immediately mirrored to the main memory.
- >> Instead, the cache tracks which locations have been written over (these locations are marked dirty).
- >> The data in these locations are written back to the main memory when that data is evicted from the cache.
- >> For this reason, a miss in a write-back cache may sometimes require two memory accesses to service:
- >> one to first write the dirty location to memory and then another to read the new location from memory.
- 1. The last bit about 2 memory accesses, a write followed by a read, is hard to understand. Neither clear reasons nor implications are given. Should this be clarified or deleted? If clarified, it should rather be moved to a specific section dealing with details such as prefetch, bypass, and write buffers...
- 2. To me this section sounds more like an overview than 'Details of operation', can the title be changed?
Market1G (talk) 19:07, 6 April 2010 (UTC)
- I thought I understood what it was saying, although as always the wording could be improved.
- Every cache is either a write-through cache, or a write-back cache.
- In a write-through cache, no dirty data is ever stored in the cache -- so a miss on a read requires (in the worst case) 1 memory access: data read from RAM into cache.
- In a write-back cache, a miss on a read may require (in the worst case) 2 memory accesses:
- In the worst case, the cache location where the about-to-be-read data will be placed may be marked dirty (and the write buffer, if any, may already be full). So the dirty data in the cache must be written to RAM (or some data in the write buffer must be written to RAM, and the dirty data in the cache then pushed out to the write buffer).
- Only after there is a place to put the desired data can that data be read from RAM into the cache.
- If the data were read from RAM before there was a place for it, then it would overwrite some piece of dirty data and that dirty data would be incorrectly lost.
- Is there a better way for the article to explain this? --68.0.124.33 (talk) 05:24, 7 September 2010 (UTC)
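- A minimal C sketch of that two-access worst case (the types and RAM helpers are hypothetical, not any particular hardware's interface):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     valid, dirty;
    uint64_t tag;
    uint8_t  data[64];      /* one 64-byte cache line */
} line_t;

extern void ram_write(uint64_t addr, const uint8_t *data); /* memory access #1 */
extern void ram_read(uint64_t addr, uint8_t *data);        /* memory access #2 */

/* Service a read miss that lands on a potentially dirty victim line. */
void read_miss(line_t *victim, uint64_t miss_addr, uint64_t victim_addr) {
    if (victim->valid && victim->dirty)
        ram_write(victim_addr, victim->data); /* write the dirty line back first */
    ram_read(miss_addr, victim->data);        /* then fill with the new line */
    victim->tag   = miss_addr >> 6;           /* 64-byte lines: drop offset bits */
    victim->valid = true;
    victim->dirty = false;
}
```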
L1
What is the L1 cache? This article is a bit ambiguous. A typical CPU is directly connected to one instruction cache and one data cache (similar to a Harvard architecture). The main memory and all other cache levels, if any, between main memory and the instruction cache are all unified and contain both instruction and data (similar to a Princeton architecture). Is the first unified cache "under" the instruction cache the L1 cache? Or is the instruction cache a L1 cache, and the first unified cache "under" the instruction cache the L2 cache? --68.0.124.33 (talk) 04:41, 20 September 2010 (UTC)
Ahistorical historical note
The very first virtual memory machine, the Ferranti Atlas, was not very slow; in fact, it was one of the fastest computers of its day. Nor did it have a page table (held in main memory); it had an associative (content addressable) memory with one entry for every 512 word block. Shmuel (Seymour J.) Metz Username:Chatul (talk) 23:27, 22 November 2010 (UTC)
Latency
- >> Latency: The virtual address is available from the MMU some time, perhaps
- >> a few cycles, after the physical address is available from the address generator.
Isn't this a mistake? The MMU translates into *physical* addresses. Therefore the *physical* address is available from the MMU some time, perhaps a few cycles, after the *virtual* address is presented to it.
-- agl
- That was a mistake, and it's been fixed a while ago. 67.164.0.182 05:22, 4 July 2006 (UTC)
- >> Historically, the first hardware cache used in a computer
- >> system did not cache the contents of main memory but rather
- >> translations between virtual memory addresses and physical
- >> addresses. This cache is known by the awkward acronym
- >> Translation Lookaside Buffer (TLB).
This needs some clarification, as early computer systems did not have virtual memory, though they had instruction caches.
--Stephan Leclercq 08:50, 22 Jul 2004 (UTC)
- The Burroughs B5000 and the Ferranti Atlas were the first computers with a virtual memory; neither had an instruction cache. Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:49, 23 November 2010 (UTC)
Yep. There's a whole history to write here, of which I only know a little. I know that early Crays had essentially a one-line cache. I have read that there were two IBM 360 projects developed simultaneously. One was the famous "Stretch", and the other was a simpler machine which had a cache. The simple one was somehow better.
- Stretch was designed in the 1950's, the IBM System/360 was designed in the 1960's. IBM had stopped taking new orders for Stretch well before they announced the S/360. The machines aren't remotely similar, although IBM did cannibalize a lot of technology from Stretch for use in the 7000 series. Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:49, 23 November 2010 (UTC)
I have read that TLBs predated data caches, but I have not yet tracked down an authoritative source. Perhaps I should remove that comment until I do.
Iain McClatchie 20:54, 22 Jul 2004 (UTC)
I know that the CDC Cyber (designed by Seymour Cray) had an 8-word instruction cache that contained the last 8 words executed, and was cleared at every jump instruction that did not land within the cache. Looks like nothing, but the cache sped up tight loops by a factor of 6-10...
Hope it helps ... --Stephan Leclercq 22:41, 22 Jul 2004 (UTC)
I would enjoy a history section. One tidbit I enjoyed is that the MC68010 CPU (which I believe found its widest use in the original LaserWriter printer) had an instruction cache big enough for exactly 2 instructions, which was just enough for a big memory-move loop of one MOVE and one DBRA instruction. Tempshill 04:39, 7 Jan 2005 (UTC)
I can say with a fair amount of confidence that TLBs predated caches (unless you call a TLB a cache, of course, which I don't). The first commercial computer with a cache (more or less as we think of it today) was the System/360 Model 85, announced in 1968 and delivered the following year. The 360 Model 67 had no cache, but did have an 8-entry TLB; it was delivered in May of 1966. I believe that at least 2 earlier machines also had TLBs: the Multics hardware, and the Atlas.
The CDC 6600 (1964, predating all the CDC Cyber machines) had an 8-word instruction stack, which could be used to contain a 7 word loop, which might have as many as 27 instructions. The words had to be from consecutive memory locations. The 7600 (1968) had a 12 word stack whose contents did not have to be from consecutive locations.
Lastly, the Stretch vs. System/360 story recounted above doesn't ring true, at least as told. Stretch was delivered to customers before System/360 was much more than a gleam in anyone's eye. Capek 07:20, 10 Jan 2005 (UTC)
- In fact it would be worth adding read/prefetch and write-back buffers, which are a kind of 1-entry cache most often associated with proper caches —Preceding unsigned comment added by Market1G (talk • contribs) 17:43, 6 April 2010 (UTC)
- Stretch was designed in the 1950's and the IBM System/360 was designed in the 1960's. However, the first delivery of Stretch was only a few years before IBM announced the S/360 and overlapped the writing of the SPREAD report.
- Evans, Bob O. (December 4, 2007). "Introduction to the SPREAD Report". Annals of the History of Computing. 5 (1): 4–5. doi:10.1109/MAHC.1983.10011.
- Haanstra, J.W.; Evans, B.O.; Aron, J.D.; Brooks, Jr., F.P.; Fairclough, J.W.; Heising, W.P.; Hellerman, H.; Johnson, W.H.; Kelly, M.J.; Newton, D.V.; Oldfield, B.G.; Rosen, S.A.; Svigals, J. (December 4, 2007). "Processor Products-Final Report of the SPREAD Task Group, December 28, 1961". Annals of the History of Computing. 5 (1): 6–26. doi:10.1109/MAHC.1983.10007.
- Chuck Boyer. "The 360 revolution" (PDF). pp. 27–29.
- Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:49, 23 November 2010 (UTC)
Dispute sequence of events for paging
If instruction prefetch buffers such as those in the IBM 7030 (Stretch), CDC 6600 and S/360 Model 91[1] are considered to be caches, then the TLB was not the first use of a cache. Note that a loop within the instruction stack did not refetch the instructions from main memory.
There was no semiconductor memory on the early computers, other than registers. Main memory used a variety of technologies, including delay lines, drums and, most often, core. Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:53, 1 December 2010 (UTC)
References
- ^ "System/360 Model 91". IBM archives. IBM.
Is ECS on CDC 6x00 in scope
The basic CDC 6x00 and 6416 are limited to a maximum of 256 Ki words of 60 bit core storage with no cache. However, the optional Extended Core Storage (ECS)[1][2] has an 8-word buffer for each bank, which serves as a 1-way data cache. Accesses to ECS words in the buffer are actually faster than accesses to words in Central Memory. ECS is logically distinct from CM; there are special instructions for accessing it, and the address of an ECS location is taken from X0 rather than from an A register. If the buffers for ECS qualify as CPU caches then they may well be the first data cache. The question is whether ECS is in scope for this article.
References
- ^ CDC (2-21-69). Control Data 6400/6500/6600 Computer Systems Reference Manual. Revision H. 60100000.
- ^ CDC (2-16-68). Control Data 6400/6500/6600 Extended Core Storage Systems Reference Manual. Revision A. 60225100.
Shmuel (Seymour J.) Metz Username:Chatul (talk) 22:32, 9 December 2010 (UTC)
Is way prediction same as pseudo associativity?
Is way prediction same as pseudo associativity? --132.68.40.87 (talk) 09:16, 5 February 2010 (UTC)
While a direct-mapped cache and a normal (parallel) n-way set-associative cache always respond in a fixed amount of time on a hit, caches that use "way prediction" or (serial) "pseudo-associativity" respond in two different amounts of time (the fast hit time and the slow hit time).
I was under the impression that they were two slightly different techniques: My understanding was that:
- Starting from a direct-mapped cache, switching to n-way pseudo-associativity keeps the same number of tag comparators (1), but reduces the number of cache misses to about the same as an n-way set-associative cache. Instead of using that single comparator once and giving up if it doesn't match, the comparator is re-used over several cycles to check the other candidate cache lines. The "fast hit time" (a match on the first compare) is about the same as in the original direct-mapped cache, and the "slow hit time" is much slower; however, those "slow hits" *would* have been misses in the original direct-mapped cache, so overall it's faster.
- Starting from some (parallel) n-way set associative cache, adding "way prediction" keeps the same number of tag comparators (n) and the same number of cache misses. Instead of waiting for one comparator (or possibly no comparator, on a miss) to announce a hit, and then choosing the data line associated with that comparator to feed into the CPU, we pick the data associated with some predicted comparator and preemptively feed that into the CPU, speculatively executing based on that data. If we are lucky and guessed right, the "fast hit time" is a few cycles shorter. Hopefully we guess intelligently enough to save cycles on most reads, but no matter how badly we guess, the "slow hit time" is no slower than the original n-way set associative cache without way prediction.
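- A C-like sketch contrasting the two lookup flows for a 2-way cache, under my understanding above (all names are hypothetical; real hardware does the tag compares in parallel, the sketch is sequential only for clarity):

```c
#include <stdbool.h>
#include <stdint.h>

extern bool     tag_match(int set, int way, uint64_t tag); /* one comparator use */
extern uint64_t read_data(int set, int way);
extern int      predict_way(int set);

/* Pseudo-associative: one comparator, re-used serially over the ways. */
bool pseudo_assoc_lookup(int set, uint64_t tag, uint64_t *out) {
    for (int way = 0; way < 2; way++) {  /* way 0 = fast hit, way 1 = slow hit */
        if (tag_match(set, way, tag)) { *out = read_data(set, way); return true; }
    }
    return false;                        /* genuine miss */
}

/* Way-predicted: data from the predicted way is forwarded speculatively
 * before the tag compare resolves. */
bool way_predicted_lookup(int set, uint64_t tag, uint64_t *out) {
    int guess = predict_way(set);
    *out = read_data(set, guess);        /* speculative forward: fast hit path */
    if (tag_match(set, guess, tag)) return true;
    int other = 1 - guess;
    if (tag_match(set, other, tag)) {    /* mispredict: slow hit, re-read data */
        *out = read_data(set, other);
        return true;
    }
    return false;                        /* miss */
}
```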
Alas, a quick search to refresh my memory gave me a reference ([1]) that implies that they are basically the same.
Could someone update the article to add some information on way prediction? --68.0.124.33 (talk) 21:56, 19 January 2011 (UTC)
tone of section
Parts of this are in the first person ("we can see", "note that", etc.), and overall the section sounds like a lecture. I'd have tagged it, but the "lecture/lesson" template has been removed or something since I last used it. Also, is this Mark Hill notable enough to be mentioned? (There's no page on wiki; that's an external link in the article.) P.S. Regardless, could someone add a disambig hatnote to Mark Hill? We have several people on the disambig page and no link to it from the default, i.e. (this article is about... for others see...). — Preceding unsigned comment added by 109.151.54.48 (talk) 23:55, 20 August 2011 (UTC)
Are references and rephrasing adequate?
I rephrased the beginning of the first subsection and added references (and deleted the 'citation needed' notes and the comments about the need for rephrasing, to avoid having to present an exhaustive survey of cache line sizes and cache access sizes). However, someone might complain that the claim that the largest common reference size is typically equal to the register size itself requires a reference (even though it is effectively common knowledge, and proof would require an exhaustive survey).
The rephrasing does seem to interfere with the flow, but I was annoyed by the 'citation needed' notes and so provided a quick and dirty fix. (I may attempt a broader reworking of the article at some point.) Paul A. Clayton (talk) 08:39, 23 September 2011 (UTC)
indexing vs tagging
Virtually indexed and/or tagged caches. What is the difference between indexing and tagging? 145.97.222.38 14:32, 7 Jan 2005 (UTC)
The index bits are *not* stored in the cache. The index bits are (typically) the "middle bits" of the effective address. The index bits select a particular row of the cache.
The tag bits *are* stored in the cache. The tag bits are (typically) the "high bits" of the effective address. After a particular row of the cache is selected, the cache memory sends out all the bits on that row (including the tag bits). If the tag bits that come out exactly match the "high bits" of the address we are trying to look up, we have a hit.
The block offset is the "low bits" of the effective address. When we have a hit, we use the block offset to select a particular word of the block of data from that row of the cache.
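A minimal C sketch of that address split, assuming a hypothetical direct-mapped cache with 64-byte blocks and 512 rows (32 KiB total):

```c
#include <stdint.h>

#define OFFSET_BITS 6   /* log2(64-byte block)  */
#define INDEX_BITS  9   /* log2(512 cache rows) */

/* Low bits: which word/byte within the block. */
static inline uint64_t block_offset(uint64_t addr) { return addr & ((1u << OFFSET_BITS) - 1); }
/* Middle bits: which cache row to select; NOT stored in the cache. */
static inline uint64_t cache_index(uint64_t addr)  { return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1); }
/* High bits: stored alongside the row and compared on each lookup. */
static inline uint64_t cache_tag(uint64_t addr)    { return addr >> (OFFSET_BITS + INDEX_BITS); }

/* A hit for address a is then:
 *   row[cache_index(a)].valid && row[cache_index(a)].tag == cache_tag(a) */
```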
Both indexing and tagging have something to do with "address bits" -- how can we write this article to avoid confusing them? How can we make it more clear in this article? --DavidCary (talk) 15:41, 8 December 2011 (UTC)
Cache entry structure
This section seems to imply that index and displacement fields are stored in the cache, as opposed to being used to address entries within the cache:
Cache row entries usually have the following structure:
Data blocks | Tag | Index | Displacement | Valid bit |
Unless this has changed since years ago when I thought I understood how caches work, then it seems this should be:
Cache row entries usually have the following content:
Data blocks | Tag | Valid bit |
Cache row entries are usually addressed by:
Index |
Data blocks within each cache row entry are addressed by:
Displacement |
The data blocks (cache line) contain the actual data fetched from the main memory. The memory address (physical or virtual) is split (MSB to LSB) into a tag, an index and a displacement (offset), while the valid bit denotes that this particular entry has valid data. The index length is log2(number of cache rows) bits and describes which row the data has been put in. The displacement length is log2(bytes per data block) bits and specifies which block of the ones we have stored we need. The tag length is address_length − index_length − displacement_length bits and contains the most significant bits of the address. The tag from the address is compared to the tag stored in the row(s) addressed by index to see if a row contains valid data for that address (a match and valid bit set) or if the row does not contain valid data for that address (a non-match or valid bit not set).
For a 2-way cache, there are two sets of cache row entries (requiring 2 tag comparators); for a 4-way cache, there are 4 sets of cache row entries (requiring 4 tag comparators). For a fully associative cache, the index is not used; instead, the tag includes all address bits except the displacement, and the equivalent of a content-addressable memory (requiring n tag comparators, where n is the number of cache row entries) is used.
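To make the field widths concrete, a worked example with hypothetical parameters: a 4-way, 32 KiB cache with 64-byte lines and 32-bit physical addresses has 32768 / 64 / 4 = 128 sets, so the index is log2(128) = 7 bits, the displacement is log2(64) = 6 bits, and the tag is the remaining 32 − 7 − 6 = 19 bits.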
Jeffareid (talk) 17:07, 22 April 2010 (UTC)
- I agree -- this looks like a misguided combination of all the parts of a virtual address plus all the parts of a cache line.
- I tried to fix it -- did I get those two ideas properly separated? --68.0.124.33 (talk) 05:12, 20 September 2010 (UTC)
- I'm just learning about this stuff myself, but the textbook I'm using (Computer Architecture: A Quantitative Approach, 4th Ed., by Hennessy and Patterson) calls the displacement field the "block offset," and says the offset field specifies the minimum addressable unit within a block (not which block). Several quick searches on the internet turn up the same information. Could someone confirm this and correct the article? I would, but I'm not completely sure I'm right. — Preceding unsigned comment added by 173.174.238.208 (talk) 09:19, 9 October 2011 (UTC)
I would rephrase to:
The index describes which row the data has been put in. The index length is log2(number of cache rows) bits.
The displacement (offset) specifies which block of the stored data blocks from the cache line is needed. The displacement length is log2(bytes per data block) bits.
BigEndian77 (talk) 17:03, 30 October 2011 (UTC)
- Dear BigEndian77, I agree entirely. I would have added your improved phrasing to the article, but I see you were WP:BOLD and already went ahead -- good job. --DavidCary (talk) 14:44, 20 December 2011 (UTC)
- Dear 173.174.238.208, I agree entirely, so I changed every mention of "displacement" to "block offset", with the appropriate H&P reference, and added a brief footnote on the "displacement" terminology. --DavidCary (talk) 14:44, 20 December 2011 (UTC)
The line?
Can anyone confirm whether the cache was initially referred to as "the line", at least for x86 CPUs? This term is used in the book Understanding the Linux Kernel. Specifically, I think it was referring to the first cache "units", which were off-CPU SRAM. If so, should I add this to the x86 subsection of the history section? --178.208.209.155 (talk) 01:42, 9 September 2011 (UTC)
- My understanding is that the terms "cache line", "cache row", "cache entry" are all synonymous and refer to one of the many blocks of data in the cache. Each cache line is associated with its own tag bits and other flag bits.
- People who mention "the cache line" are referring to some specific block of data in the cache, not the entire cache.
- As far as I can tell, that book[2] uses "cache line" the same way.
- Is there some way we can improve this article to point out and clear up this common misconception? --DavidCary (talk) 15:47, 20 December 2011 (UTC)
Recentism
Some parts of the article, e.g., CPU cache#Two-way set associative cache, describe concepts as being tied to microprocessors when they were actually used on other types of machines. Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:31, 5 March 2014 (UTC)
- Well, the article is called "CPU cache", right? :) — Dsimic (talk | contribs) 04:12, 6 March 2014 (UTC)
- I changed the two-way set associative cache section to say "they require fewer transistors, take less space on the processor circuit board or on the microprocessor chip, and can be read and compared faster", which is hopefully a bit less recentistic but still discusses current processors. Further improvements are welcome. Guy Harris (talk) 20:03, 13 June 2014 (UTC)
Proposed merge with Tag RAM
Extra small stub Christian75 (talk) 10:43, 21 July 2013 (UTC)
The Tag RAM is actually part of the CPU's caches and registers; therefore it is appropriate that such a small article (it could be considered a stub) be merged with CPU cache, with a redirect to the proper section of CPU cache provided for the URL http://en.wikipedia.org/wiki/Tag_RAM.
69.131.180.86 (talk) 01:59, 7 March 2014 (UTC)
- Why do you think it would be appropriate to merge it with CPU cache instead of Processor register? I'd like to understand your logic here. — {{U|Technical 13}} (e • t • c) 18:49, 13 June 2014 (UTC)
- Each tag RAM is a part of some CPU cache. So if there's so little to say about "tag RAM" that it's always going to be a stub, then I agree with Christian75 and 69.131.180.86 that "tag RAM" should be redirected and merged into the more general article "CPU cache" that talks about all the parts of the cache, including the tag RAM.
- Many CPUs, such as the Clipper architecture, have one chip that contains the processor registers, and (a) completely separate chip(s) that contains the CPU cache; that cache in turn includes the tag RAM. Even on current microprocessors that put them all on one chip, the processor registers are often visible in a completely separate region from the cache(s). So I don't understand why anyone would even consider merging tag RAM into processor register instead of CPU cache. Does that answer your question, Technical 13? --DavidCary (talk) 20:56, 27 January 2015 (UTC)
- Wow. I had forgotten all about this. If this is still a desired merge, I suggest tagging it with the appropriate templates and getting a formal discussion started or being BOLD and doing it. :) — {{U|Technical 13}} (e • t • c) 21:02, 27 January 2015 (UTC)
- The tags have been there since, it appears, July 2013, if Tag RAM and CPU cache are to be believed. Both tags point here for discussion. So I guess this section is the formal discussion in question. Guy Harris (talk) 21:37, 27 January 2015 (UTC)
Rule of thumb for how many cycles it takes the CPU to get data from different types of memory
I'm new to wiki, but I suppose this should be considered for addition to the CPU cache article in some form, if the smart people around here agree on it. I also hope that someone has a more reliable source on this subject:
http://www.tomshardware.co.uk/forum/342067-28-what-cache (best solution by gamerk316, 24 July 2012 20:07:13): Cache is basically just high speed RAM built directly on the CPU. An old rule of thumb:
- If data is in the L1 cache, the CPU can get to it in about 1-2 clock cycles
- If data is in the L2 cache, the CPU can get to it in about 10-20 clock cycles
- If data is in the L3 cache, the CPU can get to it in about 50-80 clock cycles
- If data is in RAM, the CPU can get to it in about 80-100 clock cycles
- If data is on the HDD, the CPU can get to it in about 100,000 clock cycles [see why more RAM helps performance?]
Numbers vary a bit by processor architecture, but each successive cache level is larger and also takes slightly longer to access. As you can see, on a system with enough RAM to avoid a page fault [needing to go to the HDD to load data into RAM], the L3 cache has very limited performance benefits. Hence why some argue that the space the L3 cache occupies on the CPU die would be better used for some other purpose. — Preceding unsigned comment added by 84.206.46.11 (talk) 12:44, 7 April 2014 (UTC)
- Those numbers look too liberal (conservative?), and are likely outdated given the widening CPU/memory gap. I've got PS3 literature that cites 400 cycles for RAM (see "Pitfalls of Object-Oriented Programming"), and every other example for PC I've seen shows 200 cycles or more.
- Even the L1 cache is slower than that now (see the L0 cache section below): according to i7 documentation, L1 is listed as ~4 cycles, L2 as ~10 cycles, and L3 as ~40-300 (depending on whether another core has the line). And a 100 ns local DRAM access at 2.3 GHz is ~230 cycles, per this Stack Overflow page discussing the Intel forum post.
- Novous (talk) 20:24, 3 June 2015 (UTC)
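- Per-level numbers like these are usually combined with miss rates into an average memory access time: AMAT = L1_hit + L1_miss_rate × (L2_hit + L2_miss_rate × (L3_hit + L3_miss_rate × DRAM_latency)). A worked example with made-up round numbers of the kind quoted above (not measurements from any specific CPU): with 4, 12, 40 and 230 cycles and per-level local miss rates of 10%, 50% and 50%, AMAT = 4 + 0.10 × (12 + 0.50 × (40 + 0.50 × 230)) ≈ 13 cycles, which also illustrates why the L3's contribution to the average can look small.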
No mention of L0 cache?
How come this article has no mention of an L0 cache? Various CPUs, including the Cyrix 686, mention an L0 "scratch-pad" cache; Qualcomm processors also mention an L0 cache, as do numerous research papers. I do not know enough to write an authoritative piece on it, but someone with more knowledge should definitely consider writing it.
Novous (talk) 20:13, 3 June 2015 (UTC)
- Hello! Using this or this as a reference, it seems that L0 caches are pretty much marketing gimmicks. Of course, I could be wrong there, but I'd like to have a look at references describing L0 caches as a completely different category. — Dsimic (talk | contribs) 18:58, 27 June 2015 (UTC)
Sentence not making sense
Is it me, or is this a non-sequitur? "If each location in main memory can be cached in either of two locations in the cache, one logical question is: which one of the two? The simplest and most commonly used scheme, shown in the right-hand diagram above, is to use the least significant bits of the memory location's index as the index for the cache memory, and to have two entries for each index." 24.7.113.102 (talk) 08:04, 31 March 2016 (UTC)
- "If data at a location..." would be better (instead of "each"), I think. --Frederico1234 (talk) 10:09, 31 March 2016 (UTC)
The *OS* selects a replacement policy for the CPU cache?
Which CPUs allow software to control the hardware's cache replacement algorithm, and which OSes do so rather than leaving the default policy in effect? Guy Harris (talk) 00:35, 25 April 2016 (UTC)
- Note that this page is about the hardware cache in the CPU that caches data from main memory in higher-speed cache memory; it's not about the page cache in the operating system that caches data from secondary storage in main memory. Guy Harris (talk) 00:45, 25 April 2016 (UTC)
Generalise: concepts common to GPU, DSP and other coprocessor caches?
Should there be a more general concept of 'hardware cache' or 'processor cache'? CPU and GPU caches both do a similar job: caching off-chip DRAM. Processors other than CPUs (video processors, GPUs, some DSPs, e.g. Hexagon, TI) also have CPU-style caches. EDIT: also consider HSA; other examples can cache shared memory, and remember that compute shaders are far more general than the graphics pipeline; a GPU is really a 'vector coprocessor'. Some SOCs can even share caches between CPU, DSP and GPU. (E.g. I've come here from mentioning cache lines / texture mapping / z-order curves, which are there for texture-cache coherency.) Fmadd (talk) 06:58, 9 May 2016 (UTC)
- Hello! Well, a GPU certainly doesn't cache system RAM; some content may be (and is) copied from the system RAM into the GPU's RAM/VRAM, but that isn't true caching. — Dsimic (talk | contribs) 20:26, 9 June 2016 (UTC)
- Some systems have 'physical' unified memory: the CPU and GPU using system RAM (or 'main memory') directly, e.g. AMD APUs, many game consoles (PS4, Xbox One & Xbox 360), and many SOCs (e.g. things like the RPi; you may not think of them as 'real computers', but a lot of computing happens on them). In an APU, the CPU & GPU literally share an L3 cache to exchange data at fine grain (they support the CPU and GPU literally running the same kernel efficiently in OpenCL). Also with 'UMA', as Nvidia calls it (which is only 'logical' unified memory really; there's still a physical split between 'system' and 'device' memory), the CPU and GPU can increasingly share the same address space (they ensure the GPU uses the same MMU control format). The long-term trend is toward closer co-operation and even a physical merge between CPU and GPU, with the GPU becoming a general-purpose vector coprocessor, a bit like 'COMA/NUMA'. Many of the principles discussed for CPU caches are directly relevant to GPUs, e.g. L1/L2 and instruction caches. There are also differences, e.g. sometimes texture caches aren't coherent, but with GPGPU they're becoming increasingly close. So a lot of the same ideas are relevant: coherency, writeback/write-through policy, L1/L2, I-caches (for shader code) vs D-caches. Fmadd (talk) 21:24, 9 June 2016 (UTC)
- That's true for integrated GPUs, which is why Intel introduced L4 CPU caches to improve the integrated GPU performance, for example. Although, that's still technically a CPU cache, not a separate GPU cache, because the GPU simply acts as another "consumer" of the cache. Though, "unified" memory spaces do introduce greater challenges. — Dsimic (talk | contribs) 21:44, 9 June 2016 (UTC)
- In the case of an AMD HSA system you have CPU L1/L2 caches and GPU L1/L2 caches going to the same address space and physical memory. The point I was really trying to make is that the article could perhaps be generalised to 'processor cache', describing concepts relevant both to CPUs and GPUs (and some DSPs). Over the years GPU caches have become more complex to deal with both code and data: compute shaders which can read/modify/write data, synchronisation between threads (i.e. logic for atomics done in the L2 to keep multiple GPU L1s in sync), and the same MMU model as the CPU. It's possible it's too much for one article, but there would be a lot of repetition if we started a whole new article to describe the behaviour of GPU L1/L2 and I$/D$. The 'unified' memory case is already extremely common, in most people's pockets and living rooms. I'll have a flick through again and think some more about this. Fmadd (talk) 22:41, 9 June 2016 (UTC)
- Those are all valid points, but even renaming the article to Processor cache wouldn't be without its own set of troubles. For example, you'd rarely call a GPU a processor, although it surely is some kind of a processor because it processes data. :) — Dsimic (talk | contribs) 23:33, 9 June 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified one external link on CPU cache. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20120907012034/https://www.stanford.edu:80/class/ee282/08_handouts/L03-Cache.pdf to http://www.stanford.edu/class/ee282/08_handouts/L03-Cache.pdf
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).
An editor has reviewed this edit and fixed any errors that were found.
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 22:45, 8 September 2016 (UTC)
- Works, but I changed it to use {{cite web}}. Guy Harris (talk) 23:06, 8 September 2016 (UTC)
Stalls, rewording
The article does not mention stalls, which are what happens to program execution when a cache miss occurs. That is where the penalty is ultimately felt, because the program executes more slowly.
- Fixed
Also, the article needs rewording. I'm a software developer, and I still had a great deal of difficulty trying to follow along with the article. I wouldn't think it would be very useful to a lay person in this state. It contains lots of good information; the sentences are just hard to follow. — Preceding unsigned comment added by Dan East (talk • contribs) 17:43, 6 January 2005 (UTC)
- Very disappointing to hear. If you can say anything about where you were having trouble following along, it might help me fix the article. Iain McClatchie 01:28, 7 Jan 2005 (UTC)
Edited the Cache performance sub-heading to Policies and rearranged text to better reflect what replacement and write policies are. Ysrajwad (talk) 15:15, 17 October 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified one external link on CPU cache. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20070515052223/http://www.sandpile.org/impl/k8.htm to http://www.sandpile.org/impl/k8.htm
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
An editor has reviewed this edit and fixed any errors that were found.
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 01:11, 20 May 2017 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified one external link on CPU cache. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20110718154522/http://www.zipcores.com/skin1/zipdocs/datasheets/cache_8way_set.pdf to http://www.zipcores.com/skin1/zipdocs/datasheets/cache_8way_set.pdf
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
An editor has reviewed this edit and fixed any errors that were found.
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 18:08, 28 July 2017 (UTC)
KiB vs KB
@Comp.arch: Unfortunately there is a MOS requirement NOT to use binary prefixes, last debated in 2014. In essence, NIST, SI, and ISO say never to use decimal prefixes in place of binary ones; Wiki says always do! See WP:COMPUNITS, Wikipedia talk:Manual of Style/Dates and numbers/Archive 148#Microsoft_is_more_important_than_IBM_and_Toshiba and the BIPM brochure. I'd be tempted to say go ahead, make the change regardless, but there are a lot of people who will jump on you if you try. :-( Martin of Sheffield (talk) 14:59, 20 September 2017 (UTC)
- I can go either way (just not kB here; that wasn't used). I just noticed both being used. Note that KB is ambiguous, and in some cases (e.g. for files) is meant to be 1000 bytes, but never for CPU caches. Sometimes I link to it like this: KB. comp.arch (talk) 15:05, 20 September 2017 (UTC)
- I work in the Linux field where sizes are normally documented and reported using IEC binary prefixes (or sometimes both), so tend to always use them where appropriate. For instance from the virsh(1) man page: "for historical reasons, some commands default to bytes, while other commands default to kibibytes". Unfortunately M$ and non-specialists rule so we are stuck with the abuse. Funnily enough, KB is the only one that shouldn't be ambiguous: "K" is a non-SI prefix so should not be used for 10^3. Martin of Sheffield (talk) 15:57, 20 September 2017 (UTC)
- And then there was the obscene usage "octal K" for 512. We hates it precious! Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:14, 20 September 2017 (UTC)
VIPT problems with page coloring
On VIPT caches, under certain circumstances (MMU pages smaller than cache ways, and multiple virtual pages mapping to the same physical memory area), the same physical memory block may end up in several cache lines at once, resulting in coherency problems. The article claims that "In practice this is not an issue because, in order to avoid coherency problems, VIPT caches are designed to have no such index bits; this limits the size of VIPT caches to the page size times the number of sets.". However, some ARM Cortex-A processors do have this problem, see the citations below. The article also (correctly) claims that PIPT solves these problems; if it were not an issue in practice, there would be no need for PIPT caches. Therefore, I suggest removing the above sentence and perhaps replacing it with:
This issue arises on some processors, such as some ARM Cortex-A cores[1][2]. Possible solutions are: use larger MMU pages, flush the cache on context switches to completely eliminate aliasing (slow), or make sure that the affected bits in the virtual address of aliased pages are always identical. Erlkoenig90 (talk) 07:10, 7 September 2018 (UTC)
- ^ Jacob Bramley (September 11, 2013). "Page Colouring on ARMv6 (and a bit on ARMv7)".
- ^ "ARM Cortex-A Series Programmer's Guide, Version: 4.0". January 22, 2014. p. 8-12. (registration required)