I am looking for advice on the following optimization problem. I have a website (system) that receives database updates which are later returned in queries made to the same system. The obvious way to speed the system up is to cache data in different places. I have identified the most likely places where caching may be useful, but caching in every case is clearly not practical. I have begun to decompose the time it takes to place data in the different caches, and I will end up with:

C_u = time of the u-th computation
S_u = storage required for the result of the u-th computation
SC_n = storage cost of the n-th storage resource
W_n = cache write time for the n-th storage resource
R_n = cache read time for the n-th storage resource

I understand that there are multiple caches inside today's microprocessors, but I want to ignore those for now. I have several options for storage: I can buy plenty of SSDs, spinning drives, and/or RAM. Given how many places could be cached, I want to buy whatever makes the most sense.

I am trying to come up with a cost function. Storage seems pretty straightforward, since there will be an allocated space that costs X dollars. I am struggling more with timing. Clearly I could minimize the total time it takes to process queries and updates, but that does not seem correct: what I need is responsiveness. The servers must be responsive on every request, not merely able to grind through every update and query in two days instead of three. So I am constructing a dollars-to-seconds function; it would express that we are willing to pay $X for a computer that responds in Y seconds.
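To make that concrete, here is the kind of function I have in mind, as a toy sketch (the linear shape, the target latency, and the dollar rate are placeholders, not real figures):

```python
# Toy dollars-to-seconds function: what we would pay to improve response
# time. target_s and dollars_per_saved_s are made-up placeholder values.
def dollars_for_latency(latency_s, target_s=0.2, dollars_per_saved_s=500.0):
    """Dollars we are willing to spend to bring latency_s down to target_s."""
    return max(0.0, (latency_s - target_s) * dollars_per_saved_s)

# Example: a machine that responds in 1.2 s leaves $500 of improvement value.
print(dollars_for_latency(1.2))  # 500.0
```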

I will also need an optimization algorithm. There are a lot of them out there, and I am not sure which would be a good fit for this problem.

I am having a lot of difficulty finding research similar to this description. I also know I will need to redo this optimization a number of times, so I will end up writing code so I don't have to do it by hand each time.

asked 02 Sep '13, 16:13 by Arturo H (edited 02 Sep '13, 16:23)

Is the "u-th computation" defined/known in advance? In other words, do you know how often computations "arrive", how large they are, how often they will be queried, and what their lifetimes are? I assume that results eventually become stale and either should be deleted because they will never be queried again, or can be deleted to make room because their query rate is low. That raises another question: is the query rate for any given computation constant over time? Are all these rates/sizes deterministic or stochastic (random)?

(04 Sep '13, 11:53) Paul Rubin ♦♦

Am I correct that the storage cost represents a one-time capacity expansion (meaning you are not adding storage on the fly to house new results)?

(04 Sep '13, 11:53) Paul Rubin ♦♦

@Paul all the details of the "u-th computation" are known in advance. A probability of usage could be calculated. The system has a messaging subsystem that recomputes every time the source data changes, so the probability of the data changing also has to be accounted for.

(04 Sep '13, 12:20) Arturo H

I'm not at all confident I understand the question, but ...

I would start with a model that allocates computation results to storage devices. Assuming that a result cannot be split across devices, this looks something like a binary multiple-knapsack model (each device is a "knapsack", each result is an "item", capacity is device space in bytes or whatever, and item size is the result's storage footprint). As the objective function, I would minimize weighted response time (the sum, across results, of query frequency times response time per query on the chosen device), possibly weighting it further if snappy response time is more important for some queries than for others.
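To make that concrete, here is a rough sketch of the assignment model in Python with PuLP (all data values, names, and units below are made up purely for illustration, and it assumes every result must be cached somewhere):

```python
# Binary multiple-knapsack sketch: assign each cached result to one storage
# device, minimizing total weighted response time. All data is illustrative.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

results = ["r1", "r2", "r3"]                 # computation results ("items")
devices = ["ram", "ssd"]                     # storage devices ("knapsacks")
size = {"r1": 4, "r2": 10, "r3": 6}          # result footprint (GB)
capacity = {"ram": 8, "ssd": 100}            # device capacity (GB)
freq = {"r1": 500, "r2": 50, "r3": 200}      # queries per day
read_time = {"ram": 0.1, "ssd": 2.0}         # response time per query (ms)

prob = LpProblem("cache_assignment", LpMinimize)
x = LpVariable.dicts("x", (results, devices), cat=LpBinary)

# Objective: total weighted response time across all queries.
prob += lpSum(freq[r] * read_time[d] * x[r][d]
              for r in results for d in devices)

# Each result is stored on exactly one device (i.e., everything is cached).
for r in results:
    prob += lpSum(x[r][d] for d in devices) == 1

# Device capacity constraints.
for d in devices:
    prob += lpSum(size[r] * x[r][d] for r in results) <= capacity[d]

prob.solve()
for r in results:
    for d in devices:
        if x[r][d].value() == 1:
            print(f"{r} -> {d}")
```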

The capacities show up as parameters of that model, and there are several possible approaches from there. One is sensitivity analysis: run the model for different possible hardware configurations, get the optimal weighted response time, and graph it against the configuration cost. Another is to add binary or integer variables representing possible expansion options (do I add a 1 TB drive [binary]? how many bytes of SSD memory do I add [integer]?), express the capacities in terms of those added variables, tack on a budget constraint, and optimize response time within the budget; a sketch of this variant follows below.
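Here is a hedged sketch of that second variant, reusing the same illustrative data and adding purchase variables plus a budget (again, the prices and unit sizes are invented):

```python
# Capacity-expansion sketch: decide how many units of each device type to
# buy, subject to a budget, while still minimizing weighted response time.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpInteger

results = ["r1", "r2", "r3"]
devices = ["ram", "ssd"]
size = {"r1": 4, "r2": 10, "r3": 6}          # GB per result
freq = {"r1": 500, "r2": 50, "r3": 200}      # queries per day
read_time = {"ram": 0.1, "ssd": 2.0}         # ms per query
base_capacity = {"ram": 0, "ssd": 0}         # GB already installed
unit_size = {"ram": 8, "ssd": 250}           # GB added per purchased unit
unit_cost = {"ram": 80.0, "ssd": 150.0}      # $ per purchased unit
budget = 300.0                               # $ available for expansion

prob = LpProblem("cache_expansion", LpMinimize)
x = LpVariable.dicts("x", (results, devices), cat=LpBinary)
buy = LpVariable.dicts("buy", devices, lowBound=0, cat=LpInteger)

# Same objective as before: total weighted response time.
prob += lpSum(freq[r] * read_time[d] * x[r][d]
              for r in results for d in devices)

# Every result still goes on exactly one device.
for r in results:
    prob += lpSum(x[r][d] for d in devices) == 1

# Capacity is now expressed in terms of the expansion variables.
for d in devices:
    prob += (lpSum(size[r] * x[r][d] for r in results)
             <= base_capacity[d] + unit_size[d] * buy[d])

# Budget constraint on the hardware purchases.
prob += lpSum(unit_cost[d] * buy[d] for d in devices) <= budget

prob.solve()
print({d: int(buy[d].value()) for d in devices})
```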

Yet another approach is to try to merge the storage cost into the response-time objective, but I would avoid that if possible, for a couple of reasons. First, storage cost is a one-time cost while response time is an ongoing (recurring) cost; you would have to prorate the storage cost somehow to get anything meaningful. Second, you would need an explicit estimate of what a 1 ms/query average improvement in response time is worth in terms of dollars invested in storage expansion. Good luck getting that.
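If you did attempt it, the prorating would look roughly like this (the planning horizon and query volume are both estimates you would have to supply):

$$\text{amortized storage cost per query} \approx \frac{\text{capital cost of added storage}}{\text{expected queries over the planning horizon}}$$

which you could then fold into the per-query objective, but the quantities in that fraction are exactly the kind of numbers that are hard to pin down.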


answered 07 Sep '13, 12:00 by Paul Rubin ♦♦

Thanks Paul. I got a book on approximation algorithms, and it does cover the knapsack problem. I'll be checking it out.

(10 Sep '13, 11:03) Arturo H