Moore's Law slowing down?

Discussion in 'Hardware' started by Bolimomo, Jul 13, 2013.

  1. Moore's Law: per wiki -

    Moore's law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The period often quoted as "18 months" is due to Intel executive David House, who predicted that period for a doubling in chip performance (being a combination of the effect of more transistors and their being faster).

    Roughly, if the law holds, one would expect CPU performance to double in about 18 months to 2 years.

    I have been following Intel's CPU development, and I am looking into a hardware upgrade, so I am now doing some research. Currently Intel is on its fourth-generation Core processors, with models such as the i7-4770K.

    I pulled some Passmark scores for the 1st-, 2nd-, 3rd- and 4th-generation i7 chips and made some comparisons. There has been roughly one generation each year since 2010. It's been 3 years since 2010, but chip performance does not seem to have doubled yet.

    i7-4770K at 10136, versus i7-950 at 5682.

    Is Moore's Law finally slowing down?
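    A quick back-of-the-envelope check of those two scores. This is a minimal sketch; the ~4-year gap is my assumption based on the i7-950's Q2 2009 launch and the i7-4770K's mid-2013 launch:

```python
import math

# Passmark scores quoted in this thread
score_old = 5682   # i7-950
score_new = 10136  # i7-4770K
years = 4.0        # assumed elapsed time between the two launches

ratio = score_new / score_old
# Solve ratio = 2 ** (years / doubling_time) for the implied doubling time
doubling_time = years * math.log(2) / math.log(ratio)
print(f"speedup: {ratio:.2f}x, implied doubling time: {doubling_time:.1f} years")
```

    On these numbers the implied doubling time comes out near 5 years, well behind the 18-24 months the law would predict.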
  2. [image]
  3. The i7-950 launched in Q2 2009 on the Bloomfield core. Bloomfield launched in Q4 2008 with the i7-920, 940 and 965. Moore's law seems dead as a measure of performance, eh? It doesn't help that AMD is lagging so far behind.
  4. The slowing down has been happening since the early 2000s when Intel and AMD introduced their multi-core products.

    Single-core hit a wall at that time.

    Multi-core is the only way they can give a semblance of improvement.

    Now, projects needing serious performance are off-loading the compute-intensive portions to the GPUs.

    In the next 5-20 years, you'll start seeing more and more powerful quantum computers from that company in Canada. They'll be like GPUs: good at completing certain simple operations really fast, but they won't be replacing Intel, AMD, or ARM general-purpose CPUs any time soon.
  5. I remember dad buying a new laptop every 6-8 months, with performance roughly doubling each time. I don't know exactly, but he must have gone through 20 or 25 laptops.

    I am now using the base Mac mini, a 2011 and a 2012, one running Windows 7 and the other Windows 8. The CPU load is around 7-8%, a far cry from when he was hitting 90% and could not put up more charts.

    The performance difference between the 2011 and 2012 models is around 10% or so. Big deal; it's not as if I will rush out to get the latest and greatest just so I can run at 6-7% CPU load instead of 7-8%.

    It is all in the software and how efficiently and effectively it is processing the information.

    Anyway, rambling aside, the technology is starting to hit another barrier: they can hardly make transistors any smaller. I believe that even 14 nm is pushing the boundaries already; they reckon the big issues start below that. And we are already at 19 nm with SSDs.
  6. Occam


    General computation speed has really hit the wall within the last 3 years or so. The Intel Extreme Edition line is all but static at this point. Even the number of cores has stopped going up (aside from niche chips such as Intel's Xeon Phi or Nvidia's Tesla). The latest 1-socket Xeons are in a similar situation (the Haswell ones are no faster than their Ivy Bridge predecessors).

    About the only thing left is power reduction, which neatly coincides with the shift to all things mobile, so that buys the industry a little more time before it potentially turns into a race to the bottom on manufacturing efficiency (but hopefully not).

    I'd say that the US lost out on tremendous revenue by failing to prevent the export of semiconductor manufacturing technology and/or protecting it via much longer patents.
  7. Moore's law is shot to hell. There are two problems:

    1) Even if you had more transistors available, that doesn't really translate into faster computation for most problems. Only problems that parallelize easily, and where someone (or some compiler) has done the work to parallelize them, benefit. Most computing workloads meet neither requirement.
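    Point 1 is essentially Amdahl's law: the serial fraction of a program caps the speedup no matter how many cores you throw at it. A minimal sketch (the function name is my own):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Upper bound on speedup when only part of a program parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

for p in (0.50, 0.90, 0.99):
    print(f"{p:.0%} parallel: {amdahl_speedup(p, 8):.2f}x on 8 cores, "
          f"limit {1 / (1 - p):.0f}x with infinite cores")
```

    Even a program that is 90% parallel tops out at 10x, which is why extra transistors spent on extra cores buy so little for typical workloads.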

    2) The new, smaller lithography process nodes that would give you more transistors produce transistors that don't work very well. Too few atoms per transistor means unreliable transistors.

    Realize also that Passmark is wildly optimistic compared to real-world computing. The actual growth rate is much less favorable than that benchmark would lead you to believe.
  8. Pekelo


    I think the reason for multi-core was heat, not reaching speed limitations. A single core simply overheats, thus the move to multi-core....