 +====== The cost of TEPS ======
 +
 +// Published 21 March, 2015; last updated 10 December, 2020 //
 +
 +<HTML>
 +<p>A billion <a href="http://en.wikipedia.org/wiki/Traversed_edges_per_second">Traversed Edges Per Second</a> (a <a href="http://en.wikipedia.org/wiki/Giga-">G</a>TEPS) can be bought for around $0.26/hour via a powerful supercomputer, including hardware and energy costs only. We do not know if GTEPS can be bought more cheaply elsewhere.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>We estimate that available TEPS/$ grows by a factor of ten every four years, based on the relationship between TEPS and FLOPS. TEPS has not been measured for long enough to see long-term trends directly.</p>
 +</HTML>
 +
 +
 +
 +===== Background =====
 +
 +
 +<HTML>
 +<p>Traversed edges per second (<a href="http://en.wikipedia.org/wiki/Traversed_edges_per_second">TEPS</a>) is a measure of computer performance, similar to <a href="http://en.wikipedia.org/wiki/FLOPS">FLOPS</a> or <a href="http://en.wikipedia.org/wiki/Instructions_per_second#Millions_of_instructions_per_second">MIPS</a>.  Relative to these other metrics, TEPS emphasizes the communication capabilities of machines: the ability to move data around inside the computer. Communication is especially important in very large machines, such as supercomputers, so TEPS is particularly useful in evaluating these machines.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>The <a href="http://www.graph500.org/results_nov_2013">Graph 500</a> is a list of computers which have been evaluated according to this metric. It is <a href="http://www.graph500.org/">intended</a> to complement the <a href="http://www.top500.org/lists/2014/11/">Top 500</a>, which is a list of the 500 most powerful computers, measured in FLOPS. The Graph 500 began in 2010, and so far has measured 183 machines, though many of these are not supercomputers, and would presumably not rank among the best 500 TEPS scores if more supercomputers were measured.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>The TEPS benchmark is defined as the number of graph edges traversed per second during a breadth-first search of a very large graph. The scale of the graph is tuned to grow with the size of the hardware. See the <a href="http://www.graph500.org/specifications">Graph500 benchmarks page</a> for further details.</p>
 +</HTML>
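The metric can be illustrated with a minimal sketch (this is not the official Graph500 reference code, just an illustration of the definition): run a breadth-first search and divide the number of edges examined by the elapsed time.

```python
# Minimal illustration of the TEPS metric: traversed edges / elapsed seconds
# during a breadth-first search. Not the official Graph500 implementation.
import time
from collections import deque

def bfs_teps(adj, source):
    """Return (edges_traversed, teps) for a BFS from `source`.

    `adj` maps each vertex to a list of neighbours; every edge examined
    during the search counts as one traversal, as in the Graph500 metric.
    """
    visited = {source}
    queue = deque([source])
    edges = 0
    start = time.perf_counter()
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            edges += 1            # each examined edge is one traversal
            if w not in visited:
                visited.add(w)
                queue.append(w)
    elapsed = time.perf_counter() - start
    return edges, edges / elapsed if elapsed > 0 else float("inf")

# Tiny example graph: 4 vertices in a cycle (8 directed edges).
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
edges, teps = bfs_teps(graph, 0)
```

The real benchmark differs mainly in scale: the graph is synthetic and tuned to fill the machine's memory, and the score is aggregated over many BFS runs from random sources.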
 +
 +
 +==== The brain in TEPS ====
 +
 +
 +<HTML>
 +<p>We are interested in TEPS in part because we would like to estimate the brain’s capacity in terms of TEPS, as an input to forecasting AI timelines. One virtue of this is that it will be a relatively independent measure of how much hardware the human brain is equivalent to, which we can then compare to other estimates. It is also easier to measure information transfer in the brain than computation, making this a more accurate estimate. We also expect that at the scale of the brain, communication is a significant bottleneck (much as it is for a supercomputer), making TEPS a particularly relevant benchmark. The brain’s contents support this theory: much of its mass and energy appears to be used on moving information around.</p>
 +</HTML>
 +
 +
 +===== Current TEPS available per dollar =====
 +
 +
 +<HTML>
 +<p>We estimate that a GTEPS can currently be produced for around $0.26 per hour in a supercomputer.</p>
 +</HTML>
 +
 +
 +==== Our estimate ====
 +
 +
 +<HTML>
 +<p>Table 1 shows our calculation, and sources for price figures.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>We recorded the TEPS scores for the top eight computers in the <a href="http://www.graph500.org/results_nov_2014">Graph 500</a> (i.e. the best TEPS-producing computers known). We searched for price estimates for these computers, and found five of them. We assume these prices are for hardware alone, though this was not generally specified. The prices are generally from second-hand sources, and so we doubt they are particularly reliable.</p>
 +</HTML>
 +
 +
 +=== Energy costs ===
 +
 +
 +<HTML>
 +<p>We took energy use figures for the five remaining computers from the <a href="http://www.top500.org/list/2014/11/">Top 500</a> list. Energy use on the Graph 500 and Top 500 benchmarks is probably somewhat different, especially because computers are often scaled down for the Graph 500 benchmark. See ‘Bias from scaling down’ below for discussion of this problem. There is a Green Graph 500 list, which gives energy figures for some of the supercomputers doing similar problems to those in the Graph 500, but the computers are run at different scales there than in the Graph 500 (presumably to get better energy ratings), so the energy figures given there are also not directly applicable.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>The cost of electricity varies by location. We are interested in how cheaply one can produce TEPS, so we suppose computation is located somewhere where power is cheap, charged at industrial rates. Prevailing energy prices in the US <a href="http://www.eia.gov/electricity/monthly/epm_table_grapher.cfm?t=epmt_5_6_a">are around</a> $0.20 / kilowatt hour, but in some parts of Canada <a href="http://en.wikipedia.org/wiki/Electricity_sector_in_Canada#Rates">it seems</a> industrial users pay less than $0.05 / kilowatt hour. This is also low relative to <a href="http://www.statista.com/statistics/263262/industrial-sector-electricity-prices-in-selected-european-countries/">industrial energy prices</a> in various European nations (though these nations too may have small localities with cheaper power). Thus we take $0.05 to be a cheap but feasible price for energy.</p>
 +</HTML>
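The energy term is simply power draw multiplied by the electricity price. A sketch, using Sequoia's figures from Table 1 below:

```python
def hourly_energy_cost(power_kw, price_per_kwh):
    """Hourly energy cost in dollars for a machine drawing `power_kw`."""
    return power_kw * price_per_kwh

# Sequoia draws about 7,890 kW; at the cheap $0.05/kWh rate assumed here:
cost = hourly_energy_cost(7890, 0.05)   # about $394.50/hour
```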
 +
 +
 +=== Bias from scaling down ===
 +
 +
 +<HTML>
 +<p>Note that our method likely overestimates necessary hardware and energy costs, as many computers <a href="http://spectrum.ieee.org/computing/hardware/better-benchmarking-for-supercomputers">do not use all of their cores</a> in the Graph 500 benchmark (this can be verified by comparing to cores used in the Top 500 list compiled at the same time). This means that one could get better TEPS/$ prices by just not building parts of existing computers. It also means that the energy used in the Graph 500 benchmarking (not listed) was probably less than that used in the Top 500 benchmarking.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>We correct for this by scaling down prices according to cores used. This is probably not a perfect adjustment: the costs of building and running a supercomputer are unlikely to be linear in the number of cores it has. However this seems a reasonable approximation, and better than making no adjustment.</p>
 +</HTML>
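Under this linearity assumption, the adjustment is a single multiplication. For example, Tianhe-2 used roughly 6% of its cores in the Graph 500 run:

```python
def scaled_cost(cost_per_gtepshour, fraction_of_cores_used):
    """Naively scale a $/GTEPShour figure by the fraction of cores used,
    assuming (as in the text) that cost is roughly linear in cores."""
    return cost_per_gtepshour * fraction_of_cores_used

# Tianhe-2's apparent $4.75/GTEPShour scales down to roughly $0.30
# when multiplied by the ~6% of cores it used (cf. Table 1).
adjusted = scaled_cost(4.75, 0.06)
```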
 +
 +
 +<HTML>
 +<p>This change makes the data more consistent. The apparently more expensive sources of TEPS were those using smaller fractions of their cores (under the naive assumption that every computer used all of its cores in the Graph 500), and the very expensive Tianhe-2 was using only 6% of its cores. Scaled according to the fraction of cores used in the Graph 500, Tianhe-2 produces TEPShours at a similar price to Sequoia. The two apparently cheapest sources of TEPShours (Sequoia and Mira) appear to have been using all of their cores. Figure 1 shows the costs of TEPShours on the different supercomputers, next to the costs when scaled down according to the fraction of cores that were used in the Graph 500 benchmark.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<figure aria-describedby="caption-attachment-468" class="wp-caption alignnone" id="attachment_468" style="width: 600px">
 +<a href="http://aiimpacts.org/wp-content/uploads/2015/03/image-7.png"><img alt="" class="wp-image-468 size-full" height="371" sizes="(max-width: 600px) 100vw, 600px" src="https://aiimpacts.org/wp-content/uploads/2015/03/image-7.png" srcset="https://aiimpacts.org/wp-content/uploads/2015/03/image-7.png 600w, https://aiimpacts.org/wp-content/uploads/2015/03/image-7-300x186.png 300w" width="600"/></a>
 +<figcaption class="wp-caption-text" id="caption-attachment-468">
 +<strong>Figure 1</strong>: Cost of TEPShours using five supercomputers, and cost naively adjusted for fraction of cores used in the benchmark test.
 +                </figcaption>
 +</figure>
 +</HTML>
 +
 +
 +=== Other costs ===
 +
 +
 +<HTML>
 +<p>Supercomputers have many costs besides hardware and energy, such as property, staff and software. Figures for these are hard to find. <a href="http://www.efiscal.eu/files/presentations/amsterdam/Snell_IS360_TCO_presentation.pdf">This presentation</a> suggests the total cost of a large supercomputer over several years can be more than five times the upfront hardware cost. However these figures seem surprisingly high, and we suspect they are not applicable to the problem we are interested in: running AI. High property costs are probably because supercomputers tend to be built on college campuses. Strong AI software is presumably more expensive than what is presently bought, but we do not want to price this into the estimate. Because the figures in the presentation are the only ones we have found, and appear to be inaccurate, we will not further investigate the more inclusive costs of producing TEPShours here, and will focus on upfront hardware costs and ongoing energy costs.</p>
 +</HTML>
 +
 +
 +=== Supercomputer lifespans ===
 +
 +
 +<HTML>
 +<p>We assume a supercomputer lasts for five years. This was the age of <a href="http://en.wikipedia.org/wiki/IBM_Roadrunner">Roadrunner</a> when decommissioned in 2013, and is consistent with the ages of the computers whose prices we are calculating here — they were all built between 2011 and 2013. <a href="http://en.wikipedia.org/wiki/ASCI_Red">ASCI Red</a> lasted for nine years, but was apparently considered ‘<a href="http://www.upi.com/Science_News/2006/06/29/Worlds-first-supercomputer-decommissioned/UPI-60321151628137/">supercomputing’s high-water mark in longevity</a>‘. We did not find other examples of large decommissioned supercomputers with known lifespans.</p>
 +</HTML>
 +
 +
 +=== Calculation ===
 +
 +
 +<HTML>
 +<p>From all of this, we calculate the price of a GTEPShour in each of these systems, as shown in Table 1.</p>
 +</HTML>
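The per-row arithmetic behind Table 1 can be reproduced with a short script, a sketch of the spreadsheet formulas: amortize the hardware price over a five-year life, add hourly energy at $0.05/kWh, and divide by the GTEPS score.

```python
HOURS_PER_YEAR = 24 * 365.25

def gtepshour_cost(price_usd, power_kw, gteps,
                   price_per_kwh=0.05, lifetime_years=5):
    """Reproduce Table 1: amortized hardware plus energy, per GTEPShour."""
    hardware_per_hour = price_usd / (HOURS_PER_YEAR * lifetime_years)
    energy_per_hour = power_kw * price_per_kwh
    return (hardware_per_hour + energy_per_hour) / gteps

# Sequoia: $250M, 7,890 kW, 23,751 GTEPS -> about $0.26/GTEPShour
sequoia = gtepshour_cost(250e6, 7890, 23751)
```

Running the same function on Mira's figures ($50M, 3,945 kW, 14,982 GTEPS) gives the $0.09/GTEPShour shown in the table.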
 +
 +
 +<HTML>
 +<table border="1" cellpadding="0" cellspacing="0" dir="ltr">
 +<colgroup>
 +<col width="158"/>
 +<col width="100"/>
 +<col width="100"/>
 +<col width="100"/>
 +<col width="100"/>
 +<col width="100"/>
 +<col width="100"/>
 +<col width="100"/>
 +<col width="120"/>
 +<col width="732"/>
 +</colgroup>
 +<tbody>
 +<tr>
 +<td data-sheets-value='[null,2,"Name"]'>Name</td>
 +<td data-sheets-value='[null,2,"GTeps"]'>GTeps</td>
 +<td data-sheets-value='[null,2,"Estimated Price "]'>Estimated Price (million)</td>
 +<td data-sheets-value='[null,2,"Price/hour (5 year life)"]'>Hardware cost/hour (5 year life)</td>
 +<td data-sheets-value='[null,2,"Energy (kW)"]'>Energy (kW)</td>
 +<td data-sheets-value='[null,2,"Hourly energy cost (5c/kWh)"]'>Hourly energy cost (at 5c/kWh)</td>
 +<td data-sheets-value='[null,2,"Total (hardware + energy)"]'>Total $/hour<br/>
 +                    (including hardware and energy)</td>
 +<td data-sheets-value='[null,2,"GTEPS/totalhourly$"]'>$/GTEPShours<br/>
 +                    (including hardware and energy)</td>
 +<td data-sheets-value='[null,2,"Other (Price)"]'>$/GTEPShours scaled by cores used</td>
 +<td data-sheets-value='[null,2,"Notes and Link"]'>Cost sources</td>
 +</tr>
 +<tr>
 +<td data-sheets-value='[null,2,"DOE/NNSA/LLNL Sequoia (IBM - BlueGene/Q, Power BQC 16C 1.60 GHz)"]'>DOE/NNSA/LLNL Sequoia (IBM – BlueGene/Q, Power BQC 16C 1.60 GHz)</td>
 +<td data-sheets-value="[null,3,null,23751]">23751</td>
 +<td data-sheets-numberformat='[null,4,"\"$\"#,##0",1]' data-sheets-value="[null,3,null,250000000]">$250</td>
 +<td data-sheets-formula="=R[0]C[-1]/(24*365.25*5)" data-sheets-numberformat='[null,4,"\"$\"#,##0",1]' data-sheets-value="[null,3,null,5703.855806525211]">$5,704</td>
 +<td data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,7890]">7,890.00</td>
 +<td data-sheets-formula="=R[0]C[-1]*0.05" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,394.5]">$394.50</td>
 +<td data-sheets-formula="=R[0]C[-3]+R[0]C[-1]" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,6098.355806525211]">6,098.36</td>
 +<td data-sheets-formula="=R[0]C[-6]/R[0]C[-1]" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,3.894656322706941]">$0.26</td>
 +<td> $0.26</td>
 +<td data-sheets-value='[null,2,"http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/"]'><span class="easy-footnote-margin-adjust" id="easy-footnote-1-457"></span><span class="easy-footnote"><a href="#easy-footnote-bottom-1-457" title='&amp;#8220;Livermore told us it spent roughly $250 million on Sequoia.&amp;#8221; &lt;a class="in-cell-link" href="http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/" target="_blank" rel="noopener noreferrer"&gt;http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/&lt;/a&gt;'><sup>1</sup></a></span></td>
 +</tr>
 +<tr>
 +<td data-sheets-value='[null,2,"K computer (Fujitsu - Custom supercomputer)"]'>K computer (Fujitsu – Custom supercomputer)</td>
 +<td data-sheets-value="[null,3,null,19585.2]">19585.2</td>
 +<td data-sheets-numberformat='[null,4,"\"$\"#,##0",1]' data-sheets-value="[null,3,null,1000000000]">$1,000</td>
 +<td data-sheets-formula="=R[0]C[-1]/(24*365.25*5)" data-sheets-numberformat='[null,4,"\"$\"#,##0",1]' data-sheets-value="[null,3,null,22815.423226100844]">$22,815</td>
 +<td data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,12659.89]">12,659.89</td>
 +<td data-sheets-formula="=R[0]C[-1]*0.05" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,632.9945]">$632.99</td>
 +<td data-sheets-formula="=R[0]C[-3]+R[0]C[-1]" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,23448.417726100844]">23,448.42</td>
 +<td data-sheets-formula="=R[0]C[-6]/R[0]C[-1]" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,0.8352461231616226]">$1.20</td>
 +<td data-sheets-value='[null,2,"running costs are $10M/year"]'> $1.13</td>
 +<td data-sheets-value='[null,2,"http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/"]'><span class="easy-footnote-margin-adjust" id="easy-footnote-2-457"></span><span class="easy-footnote"><a href="#easy-footnote-bottom-2-457" title='&amp;#8220;The K Computer in Japan, for example, cost more than $1 billion to build and $10 million to operate each year.&amp;#8221; &lt;a class="in-cell-link" href="http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/" target="_blank" rel="noopener noreferrer"&gt;http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/&lt;/a&gt; (note that our estimated energy expenses come to around $5M, which seems consistent with this).'><sup>2</sup></a></span></td>
 +</tr>
 +<tr>
 +<td data-sheets-value='[null,2,"DOE/SC/Argonne National Laboratory Mira (IBM - BlueGene/Q, Power BQC 16C 1.60 GHz)"]'>DOE/SC/Argonne National Laboratory Mira (IBM – BlueGene/Q, Power BQC 16C 1.60 GHz)</td>
 +<td data-sheets-value="[null,3,null,14982]">14982</td>
 +<td data-sheets-numberformat='[null,4,"\"$\"#,##0",1]' data-sheets-value="[null,3,null,50000000]">$50</td>
 +<td data-sheets-formula="=R[0]C[-1]/(24*365.25*5)" data-sheets-numberformat='[null,4,"\"$\"#,##0",1]' data-sheets-value="[null,3,null,1140.7711613050421]">$1,141</td>
 +<td data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,3945]">3,945.00</td>
 +<td data-sheets-formula="=R[0]C[-1]*0.05" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,197.25]">$197.25</td>
 +<td data-sheets-formula="=R[0]C[-3]+R[0]C[-1]" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,1338.0211613050421]">1,338.02</td>
 +<td data-sheets-formula="=R[0]C[-6]/R[0]C[-1]" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,11.197132327404502]">$0.09</td>
 +<td data-sheets-value='[null,2,"bought using part of $180 Million grant"]'>$0.09</td>
 +<td data-sheets-value='[null,2,"http://www.pcworld.com/article/218951/us_commissions_beefy_ibm_supercomputer.html"]'><span class="easy-footnote-margin-adjust" id="easy-footnote-3-457"></span><span class="easy-footnote"><a href="#easy-footnote-bottom-3-457" title='&amp;#8220;Mira is expected to cost roughly $50 million, according to reports.&amp;#8221; https://www.alcf.anl.gov/articles/mira-worlds-fastest-supercomputer&amp;#8221;IBM did not reveal the price for Mira, though it did say Argonne had purchased it with funds from a US$180 million grant.&amp;#8221; &lt;a class="in-cell-link" href="http://www.pcworld.com/article/218951/us_commissions_beefy_ibm_supercomputer.html" target="_blank" rel="noopener noreferrer"&gt;http://www.pcworld.com/article/218951/us_commissions_beefy_ibm_supercomputer.html&lt;/a&gt;,'><sup>3</sup></a></span></td>
 +</tr>
 +<tr>
 +<td data-sheets-value='[null,2,"Tianhe-2 (MilkyWay-2) (National University of Defense Technology - MPP)"]'>Tianhe-2 (MilkyWay-2) (National University of Defense Technology – MPP)</td>
 +<td data-sheets-value="[null,3,null,2061.48]">2061.48</td>
 +<td data-sheets-numberformat='[null,4,"\"$\"#,##0",1]' data-sheets-value="[null,3,null,390000000]">$390</td>
 +<td data-sheets-formula="=R[0]C[-1]/(24*365.25*5)" data-sheets-numberformat='[null,4,"\"$\"#,##0",1]' data-sheets-value="[null,3,null,8898.01505817933]">$8,898</td>
 +<td data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,17808]">17,808.00</td>
 +<td data-sheets-formula="=R[0]C[-1]*0.05" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,890.4000000000001]">$890.40</td>
 +<td data-sheets-formula="=R[0]C[-3]+R[0]C[-1]" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,9788.41505817933]">9,788.42</td>
 +<td data-sheets-formula="=R[0]C[-6]/R[0]C[-1]" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,0.21060406488151523]">$4.75</td>
 +<td>$0.30</td>
 +<td data-sheets-value='[null,2,"http://www.crizmo.com/worlds-top-10-supercomputers-with-their-cost-speed-and-usage.html"]'><span class="easy-footnote-margin-adjust" id="easy-footnote-4-457"></span><span class="easy-footnote"><a href="#easy-footnote-bottom-4-457" title='&amp;#8220;&lt;b&gt;Cost: &lt;/b&gt;2.4 billion Yuan or 3 billion Hong Kong dollars (390 million US Dollars)&amp;#8221; &lt;a class="in-cell-link" href="http://www.crizmo.com/worlds-top-10-supercomputers-with-their-cost-speed-and-usage.html" target="_blank" rel="noopener noreferrer"&gt;http://www.crizmo.com/worlds-top-10-supercomputers-with-their-cost-speed-and-usage.html&lt;/a&gt; '><sup>4</sup></a></span></td>
 +</tr>
 +<tr>
 +<td data-sheets-value='[null,2,"Blue Joule (IBM - BlueGene/Q, Power BQC 16C 1.60 GHz)"]'>Blue Joule (IBM – BlueGene/Q, Power BQC 16C 1.60 GHz)</td>
 +<td data-sheets-value="[null,3,null,1427]">1427</td>
 +<td data-sheets-numberformat='[null,4,"\"$\"#,##0",1]' data-sheets-value="[null,3,null,55300000]">$55.3</td>
 +<td data-sheets-formula="=R[0]C[-1]/(24*365.25*5)" data-sheets-numberformat='[null,4,"\"$\"#,##0",1]' data-sheets-value="[null,3,null,1261.6929044033766]">$1,262</td>
 +<td data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,657]">657.00</td>
 +<td data-sheets-formula="=R[0]C[-1]*0.05" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,32.85]">$32.85</td>
 +<td data-sheets-formula="=R[0]C[-3]+R[0]C[-1]" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,1294.5429044033765]">1,294.54</td>
 +<td data-sheets-formula="=R[0]C[-6]/R[0]C[-1]" data-sheets-numberformat='[null,2,"#,##0.00",1]' data-sheets-value="[null,3,null,1.1023195872041567]">$0.91</td>
 +<td data-sheets-value='[null,2,"\u00a337.5 million"]'> $0.46</td>
 +<td data-sheets-value='[null,2,"http://hexus.net/business/news/enterprise/41937-uks-powerful-gpu-supercomputer-booted/"]'><span class="easy-footnote-margin-adjust" id="easy-footnote-5-457"></span><span class="easy-footnote"><a href="#easy-footnote-bottom-5-457" title='&amp;#8220;Blue Joule&amp;#8230;The cost of this system appears to be 10 times (£37.5 million) the above mentioned grant to develop the Emerald GPU supercomputer.&amp;#8221; &lt;a class="in-cell-link" href="http://hexus.net/business/news/enterprise/41937-uks-powerful-gpu-supercomputer-booted/" target="_blank" rel="noopener noreferrer"&gt;http://hexus.net/business/news/enterprise/41937-uks-powerful-gpu-supercomputer-booted/&lt;/a&gt; Note that £37.5M = $55.3M '><sup>5</sup></a></span></td>
 +</tr>
 +</tbody>
 +</table>
 +</HTML>
 +
 +
 +<HTML>
 +<p><em><strong>Table 1</strong>: Calculation of the cost of a GTEPS over one hour in five supercomputers.</em></p>
 +</HTML>
 +
 +
 +=== Sequoia as representative of cheap TEPShours ===
 +
 +
 +<HTML>
 +<p>Mira and then Sequoia produce the cheapest TEPShours of the supercomputers investigated here, and are also the only ones which used all of their cores in the benchmark, making their costs less ambiguous. Mira’s costs are ambiguous nonetheless, because the $50M price estimate we have was projected by an unknown source, ahead of time. Mira is also known to have been bought using some part of a $180M grant. If Mira cost most of that, it would be more expensive than Sequoia. Sequoia’s price was given by the laboratory that bought it, after the fact, so is more likely to be reliable.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>Thus while Sequoia does not appear to be the cheapest source of TEPS, it does appear to be the second cheapest, and its estimate seems substantially more reliable. Sequoia is also a likely candidate to be especially cheap, since it is ranked first in the Graph 500, and is the largest of the IBM <a href="http://en.wikipedia.org/wiki/Blue_Gene">Blue Gene/Q</a>s, which dominate the top of the Graph 500 list. This somewhat supports the validity of its apparent good price performance here.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>Sequoia is also not much cheaper than the more expensive supercomputers in our list, once they are scaled down according to the number of cores they used on the benchmark (see Table 1), further supporting this price estimate.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>Thus we estimate that GTEPShours can be produced for around $0.26 on current supercomputers. This corresponds to around $11,000/GTEPS to buy the hardware alone.</p>
 +</HTML>
 +
 +
 +=== Price of TEPShours in lower performance computing ===
 +
 +
 +<HTML>
 +<p>We have only looked at the price of TEPS in top supercomputers. While these produce the most TEPS, they might not be the part of the range which produces TEPS most cheaply. However because we are interested in the application to AI, and thus to systems roughly as large as the brain, price performance near the top of the range is particularly relevant to us. Even if a laptop could produce a TEPS more cheaply than Sequoia, it produces too few of them to run a brain efficiently. Nonetheless, we plan to investigate TEPS/$ in lower performing computers in future.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>For now, we checked the efficiency of an iPad 3, since one was listed near the bottom of the Graph 500. These are sold for <a href="http://www.amazon.com/Apple-MC705LL-Wi-Fi-Black-Generation/dp/B00746LVOM/ref=sr_1_1?ie=UTF8&amp;qid=1426895358&amp;sr=8-1&amp;keywords=3rd+generation+ipad">$349.99</a>, and apparently produce 0.0304 GTEPS. Over five years, this comes out at exactly the same price as the Sequoia: $0.26/GTEPShour. This suggests both that cheaper computers may be more efficient than large supercomputers (the iPad is not known for its cheap computing power) and that the differences in price are probably not large across the performance spectrum.</p>
 +</HTML>
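The iPad figure follows from the same amortization, with the energy term dropped (a tablet's power draw is negligible next to its purchase price):

```python
def hardware_gtepshour_cost(price_usd, gteps, lifetime_years=5):
    """Hardware-only $/GTEPShour over an assumed five-year lifetime."""
    hours = 24 * 365.25 * lifetime_years
    return price_usd / hours / gteps

# iPad 3: $349.99 and 0.0304 GTEPS -> about $0.26/GTEPShour
ipad = hardware_gtepshour_cost(349.99, 0.0304)
```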
 +
 +
 +===== Trends in TEPS available per dollar =====
 +
 +
 +<HTML>
 +<p>The long-term trend of TEPS is not well known, as the benchmark is new. This makes it hard to calculate a TEPS/$ trend. Figure 2 is from a powerpoint <em><a href="http://www.graph500.org/sites/default/files/files/bof/Graph500-BoF-SC14-v1.pdf">Announcing the 9th Graph500 List!</a></em> from the <a href="http://www.graph500.org/bof">Graph 500 website</a>. One thing it shows is top performance in the Graph 500 list since the list began in 2010. Top performance grew very fast (3.5 orders of magnitude in two years), before completely flattening, then growing slowly. The powerpoint attributes this pattern to ‘maturation of the benchmark’, suggesting that the steep slope was probably not reflective of real progress.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>One reason to expect this pattern is that during the period of fast growth, pre-existing high performance computers were being tested for the first time. This appears to account for some of it. However we note that in June 2012, Sequoia (which tops the list at present) and Mira (#3) had both already been tested, and merely had lower performance than they do now, suggesting at least one other factor is at play. One possibility is that in the early years of using the benchmark, people develop good software for the problem, or in other ways adjust how they use particular computers on the benchmark.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<figure aria-describedby="caption-attachment-458" class="wp-caption alignnone" id="attachment_458" style="width: 500px">
 +<a href="http://aiimpacts.org/wp-content/uploads/2015/03/teps-trend-top-500-copy.png"><img alt="teps trend top 500 copy" class="wp-image-458" height="377" loading="lazy" sizes="(max-width: 500px) 100vw, 500px" src="https://aiimpacts.org/wp-content/uploads/2015/03/teps-trend-top-500-copy-1024x772.png" srcset="https://aiimpacts.org/wp-content/uploads/2015/03/teps-trend-top-500-copy-1024x772.png 1024w, https://aiimpacts.org/wp-content/uploads/2015/03/teps-trend-top-500-copy-300x226.png 300w, https://aiimpacts.org/wp-content/uploads/2015/03/teps-trend-top-500-copy.png 1440w" width="500"/></a>
 +<figcaption class="wp-caption-text" id="caption-attachment-458">
 +<strong>Figure 2</strong>: Performance of the top supercomputer on Graph 500 each year since it has existed (along with the 8th best, and an unspecified sum).
 +                </figcaption>
 +</figure>
 +</HTML>
 +
 +
 +
 +
 +==== Relationship between TEPS and FLOPS ====
 +
 +
 +<HTML>
 +<p>The top eight computers in the Graph 500 are also in the <a href="http://en.wikipedia.org/wiki/TOP500">Top 500</a>, so we can compare their TEPS and FLOPS ratings. Because many computers did not use all of their cores in the Graph 500, we scale down the FLOPS measured in the Top 500 by the fraction of cores used in the Graph 500 relative to the Top 500 (this is discussed further in ‘Bias from scaling down’ above). We have not checked thoroughly whether FLOPS scales linearly with cores, but this appears to be a reasonable approximation, based on the first page of the Top 500 list.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p>The supercomputers measured here consistently achieve around 1-2 GTEPS per scaled TFLOPS (see Figure 3). The median ratio is 1.9 GTEPS/TFLOPS, the mean is 1.7 GTEPS/TFLOPS, and the variance is 0.14 GTEPS/TFLOPS. Figure 4 shows GTEPS and TFLOPS plotted against one another.</p>
 +</HTML>
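As one worked example of the ratio, consider Sequoia, which used all of its cores in both benchmarks (so no core scaling is needed). Its Linpack score was roughly 17,000 TFLOPS; this figure is an approximation taken from the Top 500 list, not stated in the text:

```python
def gteps_per_tflops(gteps, tflops, core_fraction=1.0):
    """GTEPS per scaled TFLOPS, scaling FLOPS down by the fraction of
    cores used in the Graph 500 run relative to the Top 500 run."""
    return gteps / (tflops * core_fraction)

# Sequoia: 23,751 GTEPS against roughly 17,000 TFLOPS (approximate),
# landing within the 1-2 GTEPS/TFLOPS range described above.
ratio = gteps_per_tflops(23751, 17000)
```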
 +
 +
 +<HTML>
 +<p>The ratio of GTEPS to TFLOPS may vary across the range of computing power. Our figures may also be slightly biased by selecting machines from the top of the Graph 500 to check against the Top 500. However the current comparison gives us a rough sense, and the figures are consistent.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<p><a href="http://on-demand.gputechconf.com/gtc/2013/presentations/S3089-Breadth-First-Search-Multiple-GPUs.pdf">This presentation</a> (slide 23) reports that a Kepler GPU produces 10<sup>9</sup> TEPS, as compared to 10<sup>12</sup> FLOPS reported <a href="http://en.community.dell.com/techcenter/high-performance-computing/b/weblog/archive/2013/11/25/accelerating-high-performance-linpack-hpl-with-kepler-k20x-gpus.aspx">here</a> (assuming that both are top end models), suggesting a similar ratio holds for less powerful computers.</p>
 +</HTML>
 +
 +
 +<HTML>
 +<figure aria-describedby="caption-attachment-473" class="wp-caption alignnone" id="attachment_473" style="width: 600px">
 +<a href="http://aiimpacts.org/wp-content/uploads/2015/03/image-10.png"><img alt="Figure xxx: GTEPS/scaled TFLOPS, based on Graph 500 and Top 500." class="size-full wp-image-473" height="371" loading="lazy" sizes="(max-width: 600px) 100vw, 600px" src="https://aiimpacts.org/wp-content/uploads/2015/03/image-10.png" srcset="https://aiimpacts.org/wp-content/uploads/2015/03/image-10.png 600w, https://aiimpacts.org/wp-content/uploads/2015/03/image-10-300x186.png 300w" width="600"/></a>
 +<figcaption class="wp-caption-text" id="caption-attachment-473">
 +                  Figure 3: GTEPS/scaled TFLOPS, based on Graph 500 and Top 500.
 +                </figcaption>
 +</figure>
 +</HTML>
 +
 +
 +<HTML>
 +<figure aria-describedby="caption-attachment-472" class="wp-caption alignnone" id="attachment_472" style="width: 600px">
 +<a href="http://aiimpacts.org/wp-content/uploads/2015/03/image-9.png"><img alt="Figure xxx: GTEPS and scaled TFLOPS achieved by the top 8 machines on Graph 500. See text for scaling description. " class="size-full wp-image-472" height="371" loading="lazy" sizes="(max-width: 600px) 100vw, 600px" src="https://aiimpacts.org/wp-content/uploads/2015/03/image-9.png" srcset="https://aiimpacts.org/wp-content/uploads/2015/03/image-9.png 600w, https://aiimpacts.org/wp-content/uploads/2015/03/image-9-300x186.png 300w" width="600"/></a>
 +<figcaption class="wp-caption-text" id="caption-attachment-472">
 +                  Figure 4: GTEPS and scaled TFLOPS achieved by the top 8 machines on Graph 500. See text for scaling description.
 +                </figcaption>
 +</figure>
 +</HTML>
 +
 +
 +=== Projecting TEPS based on FLOPS ===
 +
 +
 +<HTML>
 +<p>Since the conversion rate between FLOPS and TEPS is approximately consistent, we can project growth in TEPS/$ based on the better understood growth of FLOPS/$. In the last quarter of a century, FLOPS/$ <a href="/doku.php?id=ai_timelines:trends_in_the_cost_of_computing" title="Trends in the cost of computing">has grown</a> by a factor of ten roughly every four years. This suggests that TEPS/$ also grows by a factor of ten every four years.</p>
 +</HTML>
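A tenfold improvement every four years compounds straightforwardly; a sketch of the projection, starting from the $0.26/GTEPShour estimate above:

```python
def projected_gteps_hour_price(years_ahead, base_cost=0.26):
    """Project the $/GTEPShour price `years_ahead` years from the $0.26
    baseline, assuming TEPS/$ grows tenfold every four years."""
    return base_cost / 10 ** (years_ahead / 4)

# After four years the price per GTEPShour would fall tenfold, to ~$0.026.
future = projected_gteps_hour_price(4)
```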
 +
 +
 +
 +
 +
 +
 +
 +
 +<HTML>
 +<ol class="easy-footnotes-wrapper">
 +<li><div class="li">
 +<span class="easy-footnote-margin-adjust" id="easy-footnote-bottom-1-457"></span>“Livermore told us it spent roughly $250 million on Sequoia.” <a class="in-cell-link" href="http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/" rel="noopener noreferrer" target="_blank">http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/</a><a class="easy-footnote-to-top" href="#easy-footnote-1-457"></a>
 +</div></li>
 +<li><div class="li">
 +<span class="easy-footnote-margin-adjust" id="easy-footnote-bottom-2-457"></span>“The K Computer in Japan, for example, cost more than $1 billion to build and $10 million to operate each year.” <a class="in-cell-link" href="http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/" rel="noopener noreferrer" target="_blank">http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/</a> (note that our estimated energy expenses come to around $5M, which seems consistent with this).<a class="easy-footnote-to-top" href="#easy-footnote-2-457"></a>
 +</div></li>
 +<li><div class="li">
 +<span class="easy-footnote-margin-adjust" id="easy-footnote-bottom-3-457"></span>“Mira is expected to cost roughly $50 million, according to reports.” https://www.alcf.anl.gov/articles/mira-worlds-fastest-supercomputer”IBM did not reveal the price for Mira, though it did say Argonne had purchased it with funds from a US$180 million grant.” <a class="in-cell-link" href="http://www.pcworld.com/article/218951/us_commissions_beefy_ibm_supercomputer.html" rel="noopener noreferrer" target="_blank">http://www.pcworld.com/article/218951/us_commissions_beefy_ibm_supercomputer.html</a>,<a class="easy-footnote-to-top" href="#easy-footnote-3-457"></a>
 +</div></li>
 +<li><div class="li">
 +<span class="easy-footnote-margin-adjust" id="easy-footnote-bottom-4-457"></span>“<b>Cost:</b> 2.4 billion Yuan or 3 billion Hong Kong dollars (390 million US Dollars)” <a class="in-cell-link" href="http://www.crizmo.com/worlds-top-10-supercomputers-with-their-cost-speed-and-usage.html" rel="noopener noreferrer" target="_blank">http://www.crizmo.com/worlds-top-10-supercomputers-with-their-cost-speed-and-usage.html</a> <a class="easy-footnote-to-top" href="#easy-footnote-4-457"></a>
 +</div></li>
 +<li><div class="li">
 +<span class="easy-footnote-margin-adjust" id="easy-footnote-bottom-5-457"></span>“Blue Joule…The cost of this system appears to be 10 times (£37.5 million) the above mentioned grant to develop the Emerald GPU supercomputer.” <a class="in-cell-link" href="http://hexus.net/business/news/enterprise/41937-uks-powerful-gpu-supercomputer-booted/" rel="noopener noreferrer" target="_blank">http://hexus.net/business/news/enterprise/41937-uks-powerful-gpu-supercomputer-booted/</a> Note that £37.5M = $55.3M <a class="easy-footnote-to-top" href="#easy-footnote-5-457"></a>
 +</div></li>
 +</ol>
 +</HTML>
 +
 +
  
ai_timelines/the_cost_of_teps.txt · Last modified: 2022/09/21 07:37 (external edit)