Friday, December 09, 2011

Fibonacci Scaling

Circa 1996 I found myself in an ideal situation: spending long hours working on a very technically challenging project with a multi-disciplinary bunch of engineers who were all smarter than I was. Good times.

During that project one of my esteemed colleagues, John Meiners, needed a way to scale up the bandwidth of communications channels allocated in real-time on an as-needed basis. We were using Asynchronous Transfer Mode (ATM), a technology in which endpoints have to negotiate with the network for the amount of bandwidth they need. Asking for more bandwidth than we needed meant more cost for our customers. Asking for less meant data loss. Tearing down an existing channel to allocate a bigger one would have been viewed as a catastrophic failure. Due to the hard real-time nature of the application, doing everything via best effort was not even on the table.

So we started small, then allocated additional channels as load increased. But some back-of-the-envelope calculations suggested that if we chose a fixed bandwidth increment, the number of channels we had to manage could quickly get out of hand. We needed to start small, but allocate larger channels as the need arose. What algorithm should we use?

John looked at various exponential algorithms, like doubling the bandwidth request of each subsequent channel. But that got really big (and hence really expensive) really quickly, and the scaling didn't match our expected use cases. What he finally settled on was Fibonacci numbers:

F_n = F_(n-1) + F_(n-2)

where

F_0 = 0
F_1 = 1

Starting with F_1, just as Fibonacci himself did in 1202 (since using zero as a bandwidth multiplier doesn't make much sense), produces a sequence that looks like this:

1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...

This turned out to be pretty much ideal. It starts small and conservative, expands but not too quickly, and is trivially computed, since you only have to keep track of the last two numbers in the sequence and do a single addition.
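Here is a minimal sketch in C++ of the bookkeeping involved (my illustration, not the actual code from that project; the 64 kb/s base increment is a made-up number):

    // Generate successive Fibonacci bandwidth multipliers by keeping
    // only the last two terms of the sequence.
    #include <cstdint>
    #include <cstdio>

    class FibonacciScaler {
    public:
        FibonacciScaler() : previous(0), current(1) {}
        // Returns 1, 1, 2, 3, 5, 8, ... on successive calls.
        std::uint64_t next() {
            std::uint64_t result = current;
            std::uint64_t sum = previous + current;
            previous = current;
            current = sum;
            return result;
        }
    private:
        std::uint64_t previous;
        std::uint64_t current;
    };

    int main() {
        FibonacciScaler scaler;
        const std::uint64_t BASEKBPS = 64; // hypothetical base increment
        for (int channel = 1; channel <= 10; ++channel) {
            std::printf("channel %d: %llu kb/s\n", channel,
                        (unsigned long long)(scaler.next() * BASEKBPS));
        }
        return 0;
    }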

Fibonacci numbers have been around a long time. Although Leonardo of Pisa (a.k.a. Fibonacci) gets the credit, the sequence was written down earlier by Indian mathematicians. It occurs frequently in nature, most notably in spirals such as those found in pine cones and shells. John's idea seemed pretty innovative in 1996, but lots of high-technology applications of the Fibonacci sequence exist today, including network backoff algorithms, lossy data compression, pseudo-random number generators, and tree and heap data structures.

I'm thinking about this because I am once again using this sequence as a way to scale up backoff time delays when recovering from network errors in Hayloft, my little C++ research project using Amazon Web Services Simple Storage Service (S3). I have no doubt that Fibonacci's simple little recursive algorithm will continue to serve me well.
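To make that concrete, here is the kind of thing I mean (a simplified sketch, not Hayloft's actual retry logic; the one-second base delay and sixty-second cap are arbitrary choices for illustration):

    // Fibonacci backoff: each successive retry waits the next Fibonacci
    // multiple of a base delay, clamped to an upper bound.
    #include <unistd.h>
    #include <cstdio>

    int main() {
        const unsigned int BASESECONDS = 1;     // hypothetical base delay
        const unsigned int MAXIMUMSECONDS = 60; // hypothetical upper bound
        unsigned int previous = 0;
        unsigned int current = 1;
        for (int retry = 1; retry <= 8; ++retry) {
            unsigned int delay = current * BASESECONDS;
            if (delay > MAXIMUMSECONDS) { delay = MAXIMUMSECONDS; }
            std::printf("retry %d: sleeping %u seconds\n", retry, delay);
            sleep(delay); // in real code, reattempt the failed operation here
            unsigned int sum = previous + current;
            previous = current;
            current = sum;
        }
        return 0;
    }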

1 comment:

Chip Overclock said...

Long-time friend and colleague Dr. David Hemmendinger, Professor Emeritus in Computer Science at Union College, kindly allowed me to post the comments he made by email (any typos are purely the result of my cut and paste):

"I just looked at your blog and saw last week's post on Fibonacci scaling. Such algorithms are apparently getting some use in mobile networks -- see, for example,

www.techrepublic.com/whitepapers/fibonacci-backoff-algorithm-for-mobile-ad-hocnetworks/2363093 .

[ I had actually come across that paper while writing my blog article. -- Chip ]

One comment: it's also exponential, though with slower growth than 2^n; the Fib numbers are O(phi^n), where phi is the Golden Ratio,

(1 + sqrt(5))/2 = 1.618+ .

In fact,

Fib_n = round( phi^n / sqrt(5) ),

at least for reasonable values of n.

If you need faster growth than the Fibonacci numbers, you can generalize them: let kFib_n be the sum of the k preceding kFib numbers. For k=inf, you get doubling (1, 1, 2, 4, 8, ...); for smaller k, you get growth somewhere between phi^n and 2^n."
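For the curious, here is a quick C++ sketch (my code, not David's) of the generalized sequence he describes, seeded with 1, 1 as in the ordinary k=2 case:

    // Print the first few terms of the k-step generalization: each term
    // is the sum of the (up to) k preceding terms.
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    static void kfib(std::size_t k, std::size_t count) {
        std::vector<std::uint64_t> seq;
        seq.push_back(1);
        seq.push_back(1);
        while (seq.size() < count) {
            std::size_t terms = (seq.size() < k) ? seq.size() : k;
            std::uint64_t sum = 0;
            for (std::size_t i = seq.size() - terms; i < seq.size(); ++i) {
                sum += seq[i];
            }
            seq.push_back(sum);
        }
        std::printf("k=%lu:", (unsigned long)k);
        for (std::size_t i = 0; i < seq.size(); ++i) {
            std::printf(" %llu", (unsigned long long)seq[i]);
        }
        std::printf("\n");
    }

    int main() {
        kfib(2, 10); // 1 1 2 3 5 8 13 21 34 55 (ordinary Fibonacci)
        kfib(3, 10); // 1 1 2 4 7 13 24 44 81 149
        kfib(4, 10); // 1 1 2 4 8 15 29 56 108 208
        return 0;
    }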

David later followed up with some additional remarks:

"On further examination, though, I don't think that what I said about what I called kFib numbers is all that useful. They grow as O(r^n) where r is the positive root of the equation

x^k = sum from i=0 to k-1 (x^i) .

A symbolic solver shows that for k=3, r ~= 1.839 and for k=4, r ~= 1.928, so their growth rates approach O(2^n) for fairly small k, though for small k, calculating the next value from the preceding few is still efficient.

The expression that I gave for Fib_n follows from the exact expression (where phi = (1 + sqrt(5))/2 )

Fib_n = (phi^n - (-1/phi)^n)/sqrt(5)

-- since 1/phi = 0.618+, the second term rapidly goes to 0. In fact,

Fib_n = round( phi^n / sqrt(5) )

is good for all n >= 0 (i.e. starting with Fib_0 = 0)."
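That rounded closed form is easy to check against the recurrence; here is a small sketch (again my C++, just for illustration):

    // Compare Fib_n computed by the recurrence with round(phi^n / sqrt(5)).
    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    int main() {
        const double PHI = (1.0 + std::sqrt(5.0)) / 2.0;
        std::uint64_t fib = 0;  // Fib_0
        std::uint64_t next = 1; // Fib_1
        for (int n = 0; n <= 20; ++n) {
            double closed = std::floor(std::pow(PHI, n) / std::sqrt(5.0) + 0.5);
            std::printf("Fib_%d = %llu, round(phi^%d/sqrt(5)) = %.0f\n",
                        n, (unsigned long long)fib, n, closed);
            std::uint64_t sum = fib + next;
            fib = next;
            next = sum;
        }
        return 0;
    }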

As always, thanks for your insightful remarks, David.