As is our daily routine, we at Bright engineering were recently presented with a problem and went in search of an elegant solution. The situation: we wanted to display our Bright Score calculation results to our users as quickly as possible after the calculation completed. Our search algorithm and our scoring algorithm are distinctly different, each optimized for its own needs, and calculating Bright Scores for a user’s search results takes longer than generating the search results themselves. We therefore present our users with search results before the corresponding Bright Scores are calculated and ready for display.
Once a user views search results, we need to retrieve Bright Score results for each of the jobs in the result set. We want to present the user with scores quickly and efficiently without hammering our infrastructure with unnecessary requests. The vast majority of our on-demand calculation requests complete within a couple of seconds, but sometimes we might have a score request backlog, or the occasional calculation failure, and we want the client to handle this gracefully.
My technically savvy comrades out there might immediately think of WebSockets. A WebSocket allows a browser to maintain an open, full-duplex connection between client and server, essentially eliminating the need to constantly open, poll, and close an HTTP connection to see if the content of interest is ready. While this would be the ideal solution, we resisted the approach for a few reasons:
- Browser compatibility was the most immediate concern. We support browsers back to IE7, and WebSocket is an HTML5 standard that isn’t fully supported by all the devices and browsers we want to target.
- Our stack would have required new additions and configuration updates.
- We needed this otherwise basic request solved quickly, without pulling in the whole engineering team to do so.
Given the time constraints, we decided on a polling model. From there we needed to implement our polling algorithm in such a way that the client would not place undue burden on our infrastructure in the event of a failed calculation or a score request backlog, while still retrieving scores as quickly as possible once they were available. We didn’t want to issue callbacks on a static timer, since this could lead to a consistent and unnecessary stream of requests to our servers. We knew, therefore, that we wanted to gradually decrease the rate at which the client checked for score availability. Our first thought was to lengthen the timer interval by one additional second between each request, but that still produced more load than we preferred in the event of score calculation issues.
At that point the solution became clear. More than a decade after being introduced to the world of academic computer science, I finally had a good use case for the Fibonacci sequence, a numeric sequence that gave me nightmares while I tried to derive its algorithm in Scheme—the close sister of Common Lisp—during the trial-by-fire of my first foray into computer programming.
For those who aren’t familiar, the Fibonacci sequence is a series of numbers in which each value is the sum of the previous two. Using 0 and 1 as a starting point, we end up with:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34…
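As a quick illustration (a minimal sketch, not code from our client), the sequence can be generated in a few lines of JavaScript:

```javascript
// Generate the first n Fibonacci numbers, starting from 0 and 1.
function fibonacci(n) {
  var seq = [0, 1];
  for (var i = 2; i < n; i++) {
    seq.push(seq[i - 1] + seq[i - 2]);
  }
  return seq.slice(0, n);
}

fibonacci(10); // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```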
I summoned Thiago, one of our crack staff of front-end ninjas who was working in the code at the time, to implement this glorious, simple, and elegant algorithm into the public sphere of our client-side code base. The approach was simple: our callback timer for requesting score calculation results would be based on the Fibonacci sequence. The first request would be made as soon as the user’s job search results came back, followed by subsequent requests on a Fibonacci interval—after 1 second, 1 second, 2 seconds, 3 seconds, 5 seconds, 8 seconds, and so on. After the first 10 requests, subsequent callbacks would already be a minute apart and increasing, allowing our apparently latent score calculation infrastructure an opportunity to catch up without being additionally burdened by clients endlessly wondering, “Are you ready yet?”
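In spirit, the client-side poller looks something like the sketch below. The names `fetchScores` and `onScoresReady` are illustrative assumptions, not our actual API; the point is the Fibonacci backoff schedule.

```javascript
// Delay (in ms) before retry number k, following the Fibonacci
// sequence of intervals: 1s, 1s, 2s, 3s, 5s, 8s, ...
function fibonacciDelayMs(k) {
  var prev = 1, current = 1;
  for (var i = 1; i < k; i++) {
    var next = prev + current;
    prev = current;
    current = next;
  }
  return prev * 1000;
}

// Poll for scores, backing off on a Fibonacci schedule.
// fetchScores(callback) and onScoresReady(scores) are hypothetical hooks.
function pollForScores(fetchScores, onScoresReady) {
  var attempt = 0;

  function request() {
    attempt++;
    fetchScores(function (scores) {
      if (scores) {
        onScoresReady(scores); // scores are ready: stop polling
      } else {
        setTimeout(request, fibonacciDelayMs(attempt)); // not yet: back off
      }
    });
  }

  request(); // first request fires as soon as search results render
}
```

The schedule is gentle at first, so scores that finish within a couple of seconds still appear almost immediately, but the gaps stretch quickly whenever the backend is struggling.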