Performance Tuning, Session 1 - Computer Architecture: A Quantitative Approach
The storage system project I am working on has recently entered its performance-tuning phase, and the system's current performance is still well short of the project's target. I have always believed in practice guided by theory: especially in the early stages of tuning, the key is to identify the dominant bottleneck and get the largest gain from the smallest investment of effort. So how do we find that dominant bottleneck and attack it first?

This post draws on the classic book Computer Architecture: A Quantitative Approach. It introduces the basic theory of system dependability and performance evaluation, together with Amdahl's Law and the processor performance equation, as the theoretical foundation for performance tuning and system reliability assessment.


Background and Introduction

  • SLA (Service Level Agreement)

    • Service Accomplishment, where the service is delivered as specified
    • Service Interruption, where the delivered service is different from the SLA
  • Module Reliability

    • Mean time to failure (MTTF)
    • Mean time to repair (MTTR)
    • Mean time between failures (MTBF) = MTTF + MTTR
    • Failure in time (FIT): failures per billion hours
  • Module Availability

    • Module availability = MTTF / (MTTF + MTTR)

Example1

Assume a disk subsystem with the following components and MTTF:

  • 10 disks, each rated at 1,000,000-hour MTTF
  • 1 ATA controller, 500,000-hour MTTF
  • 1 power supply, 200,000-hour MTTF
  • 1 fan, 200,000-hour MTTF
  • 1 ATA cable, 1,000,000-hour MTTF

Using the simplifying assumptions that the lifetimes are exponentially distributed and that failures are independent, compute the MTTF of the system as a whole.

Answer1

The sum of the failure rates is

\[Failure\ rate_{system}=10\times\frac{1}{1,000,000}+\frac{1}{500,000}+\frac{1}{200,000}+\frac{1}{200,000}+\frac{1}{1,000,000}=\frac{10+2+5+5+1}{1,000,000}=\frac{23}{1,000,000}=\frac{23,000}{1,000,000,000}
\]

or 23,000 FIT.

The MTTF for the system is just the inverse of the failure rate

\[MTTF_{system}=\frac{1}{Failure\ rate}=\frac{1,000,000}{23}\approx43,500 \ hours
\]

or just under 5 years.
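For a quick sanity check, the same arithmetic can be scripted. Below is a minimal Python sketch (the component names and dictionary layout are mine, chosen just for illustration):

```python
# Minimal sketch: system MTTF for independent components with exponentially
# distributed lifetimes. Failure rates of independent components simply add up.
components = {
    "disk": (10, 1_000_000),         # (count, MTTF in hours)
    "ATA controller": (1, 500_000),
    "power supply": (1, 200_000),
    "fan": (1, 200_000),
    "ATA cable": (1, 1_000_000),
}

failure_rate = sum(count / mttf for count, mttf in components.values())

fit = failure_rate * 1_000_000_000        # failures per billion hours
mttf_system = 1 / failure_rate            # hours

print(f"FIT = {fit:,.0f}")                                    # 23,000
print(f"System MTTF = {mttf_system:,.0f} hours "
      f"(~{mttf_system / 8760:.1f} years)")                   # ~43,478 hours, ~5 years
```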

Example2

Disk subsystems often have redundant power supplies to improve dependability. Using the preceding components and MTTFs, calculate the reliability of redundant power supplies. Assume that one power supply is sufficient to run the disk subsystem and that we are adding one redundant power supply.

Assumptions:

  • power supply failures are independent of each other
  • $MTTR_{power\ supply}$ is much less than $MTTF_{power\ supply}$, so the approximation below holds
  • it takes, on average, 24 hours for a human operator to notice a failed supply and replace it

Answer2

The mean time until one of the two supplies fails is $MTTF_{power\ supply}/2$.

A good approximation of the probability that the second supply fails while the first is being repaired is MTTR divided by the mean time until the other power supply fails, so

\[MTTF_{power\ supply\ pair}=\frac{MTTF_{power\ supply}/2}{\frac{MTTR_{power\ supply}}{MTTF_{power\ supply}}}=\frac{MTTF^2_{power\ supply}}{2 \times MTTR_{power\ supply}}
\]

Assuming it takes 24 hours, on average, for a human operator to notice the failure and replace the failed supply, the reliability of the fault-tolerant pair of power supplies is

\[MTTF_{power\ supply\ pair} = \frac{200,000^2}{2 \times 24} \approx 830,000,000\ hours
\]

making the pair about 4150 times more reliable than a single supply.
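The pair approximation is just as easy to check numerically; below is a minimal sketch under the same assumptions (the function name is mine). Note that the script prints the unrounded values, while the text rounds to 830,000,000 hours and 4150x:

```python
# Minimal sketch of the redundant-pair approximation MTTF_pair = MTTF^2 / (2 * MTTR).
# Assumes independent failures, MTTR << MTTF, and that one supply can carry the load.
def mttf_redundant_pair(mttf: float, mttr: float) -> float:
    """MTTF of two redundant modules, where one working module keeps the system up."""
    return mttf ** 2 / (2 * mttr)

mttf_supply = 200_000   # hours
mttr_supply = 24        # hours for an operator to notice and replace a failed supply

pair = mttf_redundant_pair(mttf_supply, mttr_supply)
print(f"MTTF of the pair: {pair:,.0f} hours")                        # ~833,333,333
print(f"Improvement over one supply: {pair / mttf_supply:,.0f}x")    # ~4,167x
```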

Annual Failure Rate

Fallacy

The rated mean time to failure of disks is 1,200,000 hours, or almost 140 years, so disks practically never fail.

The number 1,200,000 far exceeds the lifetime of a disk, which is commonly assumed to be 5 years or 43,800 hours.

For this large MTTF to make any sense, you would have to keep replacing the disk every 5 years (its planned lifetime). On average, a disk would be replaced about 27 times before a failure occurred, which puts the failure in the next century, roughly 140 years from now.

Therefore, a more useful measure is the percentage of disks that fail per year, called the annual failure rate (AFR).

Example

Assume 1000 disks, each with a 1,000,000-hour MTTF, used 24 hours a day. If each failed disk is replaced with a new one having the same reliability characteristics, the number of disks that fail in a year (8760 hours) is

\[Failed\ disks = \frac{Number\ of\ disks \times Time\ period}{MTTF}=\frac{1000\ disks\times8760\ hours/disk}{1,000,000\ hours}\approx9
\]

That is, about 0.9% of disks fail per year, or roughly 4.4% over a 5-year lifetime.

In real environments, field studies report that 3%-7% of drives fail per year, which corresponds to an MTTF of roughly 125,000-300,000 hours.

In other words, real-world MTTF is about 2-10 times worse than the manufacturer's rated MTTF.
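The AFR arithmetic is easy to reproduce; below is a minimal sketch (the helper name is mine) using the numbers from the example above:

```python
# Minimal sketch: expected failures per year and AFR for a pool of disks.
# Assumes 24x7 operation and that each failed disk is replaced immediately.
def failed_disks_per_year(num_disks: int, mttf_hours: float, hours_per_year: float = 8760) -> float:
    return num_disks * hours_per_year / mttf_hours

n, mttf = 1000, 1_000_000
failures = failed_disks_per_year(n, mttf)
afr = failures / n

print(f"Expected failures per year: {failures:.1f}")               # ~8.8
print(f"AFR: {afr:.2%}; over a 5-year lifetime: ~{5 * afr:.1%}")    # ~0.88% per year, ~4.4%
```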

Measuring Performance

  • Typical performance metrics

    • response time
    • throughput
  • Execution time

    • Wall clock time: include all system overheads
    • CPU time: only computation time
  • Speedup of X relative to Y

    Saying that "X is $n$ times faster than Y" means

    \[n=\frac{Execution\ time_Y}{Execution\ time_X}=\frac{1/Performance_Y}{1/Performance_X}=\frac{Performance_X}{Performance_Y}
    \]

  • Benchmarks

    • Kernels(e.g. matrix multiply)

    • Toy program (e.g. quick sort)

      The two benchmark types above cannot reflect the real performance of application execution.

    • Synthetic benchmarks (e.g. Dhrystone)

    • Benchmark suites (e.g. SPEC CPU2006 FP, TPC-C)

Quantitative Principles of Computer Design

  • Take advantage of parallelism

    e.g. multiple processors, disks, memory banks, pipelining, multiple function units

  • Principle of locality

    • reuse of data and instructions
    • Temporal locality and spatial locality
  • Focus on the common case

    • favor the frequent case over the infrequent case
    • Amdahl's Law
    • processor performance equation

Amdahl's Law

Amdahl's Law gives us a quick way to find the speedup obtainable from some enhancement, which depends on two factors:

  • the fraction of the computation time in the original computer that can be converted to take advantage of the enhancement.
  • the improvement gained by the enhanced execution mode, that is, how much faster the task would run if the enhanced mode were used for the entire program.

\[Execution\ time_{new}=Execution\ time_{old}\times((1-Fraction_{enhanced})+\frac{Fraction_{enhanced}}{Speedup_{enhanced}})
\]

The overall speedup is the ratio of the execution times:

\[Speedup_{overall}=\frac{Execution\ time_{old}}{Execution\ time_{new}}=\frac{1}{(1-Fraction_{enhanced})+\frac{Fraction_{enhanced}}{Speedup_{enhanced}}}
\]
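The formula is easy to wrap in a small helper for the examples that follow; here is a minimal sketch (the function name is mine, not from the book):

```python
# Minimal sketch of Amdahl's Law.
# fraction_enhanced: share of ORIGINAL execution time that can use the enhancement.
# speedup_enhanced: how much faster that part runs in the enhanced mode.
def amdahl_speedup(fraction_enhanced: float, speedup_enhanced: float) -> float:
    assert 0.0 <= fraction_enhanced <= 1.0 and speedup_enhanced > 0
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# Quick check with the web-serving numbers from Example1 below: 40% of time sped up 10x.
print(f"{amdahl_speedup(0.4, 10):.2f}")   # 1.56
```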

Example1

Suppose that we want to enhance the processor used for web serving. The new processor is 10 times faster on computation in the web serving application than the old processor. Assuming that the original processor is busy with computation 40% of the time and is waiting for IO 60% of the time, what is the overall speedup gained by incorporating the enhancement?

Answer1

\[Fraction_{enhanced}=0.4;Speedup_{enhanced}=10;Speedup_{overall}=\frac{1}{0.6+\frac{0.4}{10}} \approx 1.56
\]

Example2

FSQRT (Floating-point square root)

Proposal 1: FSQRT is responsible for 20% of the execution time of a critical graphics benchmark. Enhance FSQRT hardware and speed up this operation by a factor of 10.

Proposal 2: FP instructions are responsible for half of the execution time for the application. Make all FP instructions in the graphics process run faster by a factor of 1.6.

Compare these 2 design alternatives.

Answer2

\[Speedup_{FSQRT}=\frac{1}{(1-0.2)+\frac{0.2}{10}}=1.22
\]

\[Speedup_{FP}=\frac{1}{0.5+\frac{0.5}{1.6}}=1.23
\]

Improving the performance of the FP operations overall is slightly better because of the higher frequency.
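The same comparison, computed directly from the formula as a quick numeric check:

```python
# Comparing the two proposals with Amdahl's Law.
speedup_fsqrt = 1 / ((1 - 0.20) + 0.20 / 10)      # FSQRT sped up 10x on 20% of the time
speedup_fp    = 1 / ((1 - 0.50) + 0.50 / 1.6)     # all FP sped up 1.6x on 50% of the time
print(f"{speedup_fsqrt:.2f} vs {speedup_fp:.2f}")  # 1.22 vs 1.23
```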

Example3

Back to dependability example:

\[Failure\ rate_{system}=\frac{10+2+5+5+1}{1,000,000}=\frac{23}{1,000,000}
\]

The power supply's fraction of the total system failure rate is $ \frac{5}{23}\approx0.22 $.

After adding a redundant power supply, the power supply module is about 4150 times more reliable than before.

The reliability improvement would be

\[Improvement_{power\ supply\ pair}=\frac{1}{(1-0.22)+\frac{0.22}{4150}} \approx 1.28
\]

Despite an impressive 4150x improvement in reliability of one module, from the system's perspective, the change has a measurable but small benefit.
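The same Amdahl-style calculation for the reliability case, as a quick numeric check:

```python
# The power supply contributes 5/23 of the system failure rate; the redundant pair
# improves that contribution by roughly 4150x, per the examples above.
fraction = 5 / 23
improvement = 4150
system_gain = 1 / ((1 - fraction) + fraction / improvement)
print(f"System-level reliability improvement: {system_gain:.2f}x")   # ~1.28x
```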

  • Amdahl's Law can serve as a guide to how much an enhancement will improve performance and how to distribute resources to improve cost-performance. The goal, clearly, is to spend resources proportional to where time is spent.
  • Amdahl's Law is particularly useful for comparing the overall system performance or processor design of two alternatives.

Processor Performance Equation

\[CPU\ time=CPU\ clock\ cycles\ of\ a\ program \times Clock\ cycle\ time
\]

or

\[CPU\ time=\frac{CPU\ clock\ cycles\ of\ a\ program}{Clock\ rate}
\]

From the perspective of the instructions executed,

\[CPI=\frac{CPU\ clock\ cycles\ of\ a\ program}{Instruction\ count}
\]

\[CPU\ time=IC \times CPI \times clock\ cycle\ time
\]

Term & Dependency:

  • Clock cycle time - depends on hardware technology and organization; equals 1/clock rate
  • CPI (clock cycles per instruction) - depends on organization and instruction set architecture
  • IC (instruction count) - depends on instruction set architecture and compiler technology

For different types of instructions,

\[CPU\ time=(\Sigma_{i=1}^{n}{IC_{i} \times CPI_{i}}) \times Clock\ cycle\ time
\]

Overall CPI

\[CPI=\frac{\Sigma_{i=1}^{n}{IC_{i} \times CPI_{i}}}{IC}=\Sigma_{i=1}^{n}{\frac{IC_i}{IC}\times CPI_i}
\]
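Putting the equation into code makes the instruction-mix structure explicit. Below is a minimal sketch (the instruction count and clock cycle time are illustrative values, not measurements from the book):

```python
# Minimal sketch of the processor performance equation with an instruction mix.
# mix maps instruction class -> (fraction of dynamic instruction count, CPI of that class).
def cpu_time(instruction_count: float,
             mix: dict[str, tuple[float, float]],
             clock_cycle_time: float) -> float:
    overall_cpi = sum(frac * cpi for frac, cpi in mix.values())
    return instruction_count * overall_cpi * clock_cycle_time

# Illustrative numbers: 10^9 instructions, 1 ns clock cycle, 25% FP at CPI 4.0.
mix = {"FP": (0.25, 4.0), "other": (0.75, 1.33)}
print(f"{cpu_time(1e9, mix, 1e-9):.3f} s")   # ~2.0 s, since the overall CPI is ~2.0
```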

Consider the previous Example2 from the Amdahl's Law section, modified here to use measurements of the frequency of the instructions and of the instruction CPI values, which, in practice, are obtained by simulation or by hardware instrumentation.

Example

Suppose we made the following measurements:

  • Frequency of FP operations = 25%
  • Average CPI of FP operations = 4.0
  • Average CPI of other instructions = 1.33
  • Frequency of FSQRT = 2%
  • CPI of FSQRT = 20

Assume that the 2 design alternatives are to

  • decrease the CPI of FSQRT to 2 (i.e., speed up FSQRT by a factor of 10), or
  • decrease the average CPI of all FP operations to 2.5 (i.e., speed up all FP instructions by a factor of 1.6).

Compare these 2 design alternatives using the processor performance equation.

Answer

Original CPI with neither enhancement:

\[CPI_{original}=\Sigma_{i=1}^{n}{\frac{IC_i}{IC} \times CPI_i}=(4.0 \times 25\%)+(1.33 \times 75\%)=2.0
\]

\[CPI_{with\ new\ FSQRT}=CPI_{original}-2\%\times(CPI_{old\ FSQRT}-CPI_{new\ FSQRT})=2.0-2\% \times (20-2)=1.64
\]

\[CPI_{new\ FP}=(2.5 \times 25\%)+(1.33 \times 75\%)=1.625
\]

Since the CPI of the overall FP enhancement is slightly lower, its performance will be marginally better.

\[Speedup_{new\ FP}=\frac{CPU\ time_{original}}{CPU\ time_{new\ FP}}=\frac{IC \times CPI_{original} \times Clock\ cycle\ time}{IC \times CPI_{new\ FP} \times Clock\ cycle\ time}=\frac{2.0}{1.625}=1.23
\]
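As a quick numeric check of the answer (treating the text's 1.33 as the exact 4/3 so that the rounded values above are reproduced):

```python
# Reworking the example: base mix, FSQRT-only enhancement, and all-FP enhancement.
cpi_other = 4 / 3                               # printed as 1.33 in the measurements above
cpi_original = 0.25 * 4.0 + 0.75 * cpi_other    # 2.0
cpi_new_fsqrt = cpi_original - 0.02 * (20 - 2)  # FSQRT CPI drops from 20 to 2 -> 1.64
cpi_new_fp = 0.25 * 2.5 + 0.75 * cpi_other      # all FP CPI drops to 2.5 -> 1.625

# With IC and clock cycle time unchanged, the speedup is just the ratio of CPIs.
print(f"FSQRT only: {cpi_original / cpi_new_fsqrt:.2f}")   # 1.22
print(f"All FP:     {cpi_original / cpi_new_fp:.2f}")      # 1.23
```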

It is often possible to measure the constituent parts of the processor performance equation. Such isolated measurements are a key advantage of using the processor performance equation over Amdahl's Law in the previous example: in particular, it may be difficult to measure things such as the fraction of execution time for which a set of instructions is responsible.
