• Posted by Konstantin 19.06.2010

    The other day I was working on a Python program, which internally used a data structure for holding a set of rules. Each rule specified how some object was created from some other objects through certain procedures. The corresponding set of rules could be represented as a bunch of assignment expressions:

    # Description of the relations between objects
    some_item[1].sub_item = some_proc(other_item[2], "Text", 42)
    some_item[2].sub_item = other_proc(some_item[1], "Text", 1337)
    # ... etc

    I would manually describe the required structure using code like the following (in fact, it was slightly more complicated than that, but I hope you get the point).

    struct = Structure()
    struct.add_relation(['some_item', '[1]', '.sub_item'], 
                     ['some_proc', ['other_item', '[2]'], '"Text"', '42'])
    struct.add_relation(['some_item', '[2]', '.sub_item'],
                     ['other_proc', ['some_item', '[1]'], '"Text"', '1337'])
    now_do_something_with(struct)

    After having specified the required rules I would process them and thus solve my task, the details of which are not important here. What is important is that all those struct.add_relation lines looked messy, long and ugly, and begged for a more concise syntax.

    One obvious choice would be to write the expressions down as strings and parse them within add_relation using ast.parse:

    struct = Structure()
    struct.add_relation(
      'some_item[1].sub_item = some_proc(other_item[2], "Text", 42)')
    struct.add_relation(
      'some_item[2].sub_item = other_proc(some_item[1], "Text", 1337)')
    now_do_something_with(struct)

    Multiple variations on this idea are possible (for example, the whole description could go in a separate configuration file), but in any case this is not a very "easy" solution.
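
    For illustration, the parsing inside add_relation could start out roughly like this (a sketch using the standard ast module; what to do with the parsed tree afterwards is then up to the Structure class):

    import ast

    # Parse one rule given as a string and split it into its two sides
    tree = ast.parse('some_item[1].sub_item = some_proc(other_item[2], "Text", 42)')
    assignment = tree.body[0]              # an ast.Assign node
    print ast.dump(assignment.targets[0])  # left-hand side of the rule
    print ast.dump(assignment.value)       # right-hand side (the some_proc call)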

    What I am going to show you here is that appropriate use of Python's dynamic features makes it possible to have the structure defined using Python code, side-by-side with normal Python code, in the following manner:

    @structure_definition
    def struct():
       some_item[1].sub_item = some_proc(other_item[2], "Text", 42)
       some_item[2].sub_item = other_proc(some_item[1], "Text", 1337)
    
    now_do_something_with(struct)

    This post aims to explain the mechanics behind such a solution.

    Part I: __getattr__ and __setattr__

    The first step is fairly easy. As you probably know, Python is quite flexible in that it allows you to redefine most operators, including attribute access, item access, function invocation and attribute assignment. We can therefore create a class StructureBuilder, which redefines all those operators as needed, and then use it in the following manner:

    s = StructureBuilder()
    s.some_item[1].sub_item = s.some_proc(s.other_item[2], "Text", 42)
    s.some_item[2].sub_item = s.other_proc(s.some_item[1], "Text", 1337)

    To see what we need to do to make it work, let us explicitly write out what happens here on line 2. This line is equivalent to the following expression:

    s.__getattr__('some_item')
     .__getitem__(1)
     .__setattr__('sub_item', 
       s.__getattr__('some_proc')
        .__call__(s.__getattr__('other_item').__getitem__(2), "Text", 42)
     )

    This invocation structure neatly corresponds to the parse tree of the original expression, and our StructureBuilder class should therefore simply collect the information from these invocations, recursively building data structures describing the left- and right-hand sides of the expression. Finally, at the time the assignment operator __setattr__ is invoked, the collected information about the assignment can be saved appropriately. Here is an abridged conceptual example:

    class StructureBuilder:
      def __init__(self, expr=''):
        self.__dict__['__expr__'] = expr
    
      def __str__(self):
        return self.__expr__
      __repr__ = __str__
      
      def __getattr__(self, attrname):
        newname = self.__expr__ + '.' + attrname
        return StructureBuilder(newname)
    
      def __setattr__(self, attrname, val):
        newname = self.__expr__ + '.' + attrname
        print 'Saving: %s = %s' % (newname, str(val))   
    
    s = StructureBuilder()
    s.x = s.y

    It remains to implement __getitem__, __setitem__ and __call__ in a similar manner, and voila, you are all covered in syntax sugar.
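
    For completeness, here is one possible way of filling in those remaining operators, as a small self-contained sketch (the particular string representation of items and calls chosen here is just one option):

    class FullBuilder:
      def __init__(self, expr=''):
        self.__dict__['__expr__'] = expr

      def __str__(self):
        return self.__expr__
      __repr__ = __str__

      def __getattr__(self, attrname):
        return FullBuilder(self.__expr__ + '.' + attrname)

      def __setattr__(self, attrname, val):
        print 'Saving: %s.%s = %s' % (self.__expr__, attrname, val)

      def __getitem__(self, item):
        return FullBuilder('%s[%r]' % (self.__expr__, item))

      def __call__(self, *args):
        return FullBuilder('%s(%s)' % (self.__expr__, ', '.join(map(repr, args))))

    s = FullBuilder()
    s.some_item[1].sub_item = s.some_proc(s.other_item[2], "Text", 42)
    # Prints something like:
    #   Saving: .some_item[1].sub_item = .some_proc(.other_item[2], 'Text', 42)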

    The fact that you will have to prefix all your "custom" expressions with this annoying s. might still offend your aesthetic sensibilities. If so, follow me to the next step.

    Part II: Hiding the Builder

    In the previous part we figured out how, by using __getattr__ and __setattr__, we can make Python interpret assignments our way. In order for it to work, however, we need to explicitly refer to an object which implements those methods, and write lines like

    s.x = s.y

    Can we somehow "hide" this object and have something analogous to automatic __getattr__ for global scope? Yes we can: Python conveniently provides us with a construction:

    exec [code] in [environment]

    which would execute our code, using the provided dictionary to resolve variables.

    Let us try it. We modify our StructureBuilder to inherit from dict and add the methods __getitem__ and __setitem__ which will happily resolve and assign to arbitrary variable names:

    class StructureBuilder(dict):
      def __init__(self, expr=''):
        self.__dict__['__expr__'] = expr

      def __str__(self):
        return self.__expr__
      __repr__ = __str__

      # Attribute access and assignment, exactly as before
      def __getattr__(self, attrname):
        newname = self.__expr__ + '.' + attrname
        return StructureBuilder(newname)

      def __setattr__(self, attrname, val):
        newname = self.__expr__ + '.' + attrname
        print 'Saving: %s = %s' % (newname, str(val))

      # Item access and assignment: this is what name lookups inside
      # "exec ... in <our dict>" will end up calling
      def __getitem__(self, itemname):
        newname = self.__expr__ + '.' + str(itemname)
        return StructureBuilder(newname)

      def __setitem__(self, itemname, val):
        newname = self.__expr__ + '.' + str(itemname)
        print 'Saving: %s = %s' % (newname, str(val))
    

    Does it work?

    s = StructureBuilder()
    exec 'x[1].y = z' in s

    It does, but now we are providing Python code in a string, which is not at all sexy. Let us try the following instead:

    def definition():
      x[1].y = z
    
    exec definition.func_code in s

    Bummer: NameError: global name 'z' is not defined. But why? Isn't that exactly the same as the previous attempt with a string?

    Part III: Lexical Scoping

    There is a difference in how Python compiles code in a string and in a function. To see that, let us take a look at what's inside the compiled snippets for a string

    s = 'x = y'

    and a function

    def f():
      x = y

    To do that let us use Python's bytecode disassembler module dis:

    import dis
    # First snippet (code in a string)
    dis.dis(compile(s, '', 'exec'))
    # Second snippet (code in a function)
    dis.dis(f.func_code)

    Here is the output for two snippets side by side:

    Snippet1 (String)                Snippet2 (Function)
    -------------------------------------------------------
    0 LOAD_NAME      0 (y)           0 LOAD_GLOBAL    0 (y)
    3 STORE_NAME     1 (x)           3 STORE_FAST     0 (x)
    6 LOAD_CONST     0 (None)        6 LOAD_CONST     0 (None)
    9 RETURN_VALUE                   9 RETURN_VALUE 

    Now we see it - the snippet from the string uses the LOAD_NAME and STORE_NAME commands to access the variables, which neatly proxy the requests to our dictionary, as we want them to.

    The snippet from the function behaves in a different way. Firstly, the variable x is accessed using STORE_FAST, which is a command used for local variables. This command does not use the provided dictionary, because the variable x has been determined as local to the function's scope and assigned its own slot within the function's frame at compile time. Secondly, the variable y is accessed via LOAD_GLOBAL, which performs a plain lookup in the globals dictionary (and then in the builtins), bypassing the custom lookup methods we are trying to enforce. In short, this command also does not care about the dictionary that we provided.

    Does this mean that it is impossible to overcome the scoping rules that Python uses when compiling a function? Of course not - all we need to do is modify the bytecode, replacing each LOAD_FAST, STORE_FAST, LOAD_GLOBAL and STORE_GLOBAL with LOAD_NAME and STORE_NAME.

    Part IV: Bytecode

    The required modification is well documented here, but let me give you a brief overview.
    The bytecode for a Python function f is stored in its field f.func_code.co_code. It is a sequence of bytes, which is easy to parse into opcodes. Every opcode is one byte long. Opcodes with a byte value greater than or equal to opcode.HAVE_ARGUMENT are followed by a two-byte argument. This does not seem to be well documented, but is easily read out from the code of the dis.dis function, which comes with Python. For completeness' sake, here is what a simple bytecode parser might look like:

    def bytecode(code_obj):
      # Walk over the code object's byte string, yielding (opcode, argument)
      # pairs; the argument is None for opcodes that take no argument.
      import opcode
      code = code_obj.co_code
      n = len(code)
      i = 0
      while i < n:
        op = ord(code[i])
        i += 1
        oparg = None
        if op >= opcode.HAVE_ARGUMENT:
          oparg = ord(code[i]) + ord(code[i+1])*256
          i += 2
        yield (op, oparg)
    
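    To quickly check the parser, we can map the numeric opcodes back to their names using opcode.opname:

    import opcode

    def f():
      x = y

    # Should list the same opcodes that dis.dis showed above, e.g.
    #   LOAD_GLOBAL 0, STORE_FAST 0, LOAD_CONST 0, RETURN_VALUE None
    for op, arg in bytecode(f.func_code):
      print opcode.opname[op], arg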

    We now need to take the code object of our function, replace the ***_FAST and ***_GLOBAL commands with ***_NAME and create a new code object. Leaving out some details, this goes as follows:

    def as_anonymous_block(func):
      new_bytecode = []
      for (op, arg) in bytecode(func.func_code):
        if op in [STORE_FAST, LOAD_FAST, STORE_GLOBAL, LOAD_GLOBAL]:
          new_op = ...   # LOAD_NAME or STORE_NAME, correspondingly
          new_arg = ...  # for *_FAST opcodes this index needs to be remapped
          new_bytecode.append((new_op, new_arg))
        else:
          new_bytecode.append((op, arg))
      return types.CodeType( ... options, flags ... , new_bytecode)
    
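    For the curious, here is one way this rewrite could be fleshed out in CPython 2.x: a rough sketch that reuses the bytecode() helper from above, assumes a function without arguments or closures, and ignores corner cases such as EXTENDED_ARG (the example code linked at the end of the post is slightly more elaborate):

    import opcode, types

    def as_anonymous_block(func):
      co = func.func_code
      names = list(co.co_names)
      to_name = {opcode.opmap['LOAD_FAST']:    opcode.opmap['LOAD_NAME'],
                 opcode.opmap['STORE_FAST']:   opcode.opmap['STORE_NAME'],
                 opcode.opmap['LOAD_GLOBAL']:  opcode.opmap['LOAD_NAME'],
                 opcode.opmap['STORE_GLOBAL']: opcode.opmap['STORE_NAME']}
      fast_ops = (opcode.opmap['LOAD_FAST'], opcode.opmap['STORE_FAST'])
      chunks = []
      for op, arg in bytecode(co):
        if op in to_name:
          if op in fast_ops:
            # *_FAST arguments index co_varnames; remap them into co_names
            name = co.co_varnames[arg]
            if name not in names:
              names.append(name)
            arg = names.index(name)
          op = to_name[op]
        chunks.append(chr(op) if arg is None
                      else chr(op) + chr(arg & 0xFF) + chr(arg >> 8))
      # Clear CO_OPTIMIZED (0x1) and CO_NEWLOCALS (0x2), so that the code,
      # when exec'ed, resolves all names through the dictionary we give it
      flags = co.co_flags & ~0x3
      return types.CodeType(0, 0, co.co_stacksize, flags, ''.join(chunks),
                            co.co_consts, tuple(names), (), co.co_filename,
                            co.co_name, co.co_firstlineno, co.co_lnotab)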

    Once we perform this modification on our function object, we can use exec ... in ... to have our StructureBuilder take control of the assignments within the function:

    def definition():
      x[1].y = z
     
    exec as_anonymous_block(definition) in s
    

    Part V: Putting It All Together

    As I promised in the beginning of this post, it is possible to encapsulate the whole thing into a function decorator @structure_definition and use it as follows:

    @structure_definition
    def struct():
       some_item[1].sub_item = some_proc(other_item[2], "Text", 42)
       some_item[2].sub_item = other_proc(some_item[1], "Text", 1337)
    
    now_do_something_with(struct)

    The last step is therefore creating the @structure_definition decorator, which is rather straightforward. It processes the given function using as_anonymous_block, creates a new StructureBuilder object, executes the function's code in that object, and returns the result:

    def structure_definition(f):
      blk = as_anonymous_block(f)
      result = StructureBuilder()
      exec blk in result
      return result
    

    For further details refer to (a slightly more elaborate) example code here.

    To conclude, I should note that in my case I still ended up using the solution from Part I (i.e. the one where the "dynamic" variables are prefixed with s.), because it allows me to use loop variables and other constructs alongside my structure definitions much more naturally than the second method. Nonetheless, I hope you enjoyed the presented exploration as much as I did.


  • Posted by Swen 13.02.2010

    The recent blog post "Machine Learning in Mordor" is an excellent example of inherent bias in error estimation methods. Let us consider a standard classification task for simplicity. Then a good machine learning practice is first to find a classifier that is optimal or near-optimal on the training set. Recall that for a fixed classifier the training error is a Monte-Carlo estimate of the generalisation error - the average error that the classifier makes if the data is drawn independently from some unknown but fixed distribution. However, as soon as we choose to minimise the training error, the estimate becomes biased. The training error underestimates the generalisation error.

    To avoid this, the data is usually split into two separate sets: training data and validation data. If we choose the classifier according to the training data and later estimate the generalisation error on the validation data, the bias disappears and we should be in good shape. Moreover, for certain classification methods, such as Support Vector Machines, it is possible to prove that the training error and the generalisation error are closely connected. That is, we can prove formulae of the form

    GeneralisationError <= TrainingError + F(n)

    where F(n) is a bias term that depends on the size of the training sample. Although both of these arguments are formally correct, they are actually void in certain practical settings.
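
    As a concrete illustration of a bound of exactly this shape (in the simplest case of a single classifier fixed independently of the n samples it is evaluated on), Hoeffding's inequality gives, with probability at least 1-\delta,

        \[\text{GeneralisationError} \leq \text{TrainingError} + \sqrt{\frac{\ln(1/\delta)}{2n}}\,;\]

    classifiers that were themselves chosen using the same data require additional capacity terms on top of this.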

    For instance, student Bughrorc, who split the data into a 15 element training set and a 5 element test set, did not do anything wrong, and yet his classification algorithm is likely to perform worse than the method proposed by Aghargh.

    One could argue that Bughrorc made a mistake and used the validation set for choosing between several competing classifiers and thus, by the same reasoning as explained above, the estimate of the generalisation error is now an underestimation. However, this is only a partial explanation.

    Imagine a standard no-free-lunch setting, where the labels to the data are assigned randomly so that 80% of the spells are good and 20% of the spells are bad. Moreover, assume that Bughrorc chooses only one classification algorithm on the training set and later uses the validation set for estimating the generalisation error. As the sample is small, the validation error might be zero again. Is the estimate free of bias now, and is Bughrorc safe?

    Actually not. As weird as it seems, statistical estimates depend on your intention and potential behaviour. If Bughrorc sees a small error estimate and accepts it as a correct one, it does not mean that this estimate is unbiased, even though the validation was properly done. To verify whether the estimate is biased or not, Bughrorc has to explore his deeper thoughts. If he can solemnly promise that a larger validation error would not have made him drop the experiment, then the validation error is indeed unbiased (but wrong). However, if a larger validation error would have stopped him from using the classifier, the same result is biased (and wrong) - whenever Bughrorc accepts the validation error, it is necessarily small and thus the validation error actually underestimates the generalisation error.

    As most of us are sane enough not to use a classifier with an estimated classification error near 100% or even near 50%, the validation error is biased for all practical purposes.

    Surprisingly enough, this bias can be estimated. Let us first fix a conservative point estimate for the validation error that is smaller than the actual generalisation error in at most 5% of the cases, provided the validation samples are drawn independently from the actual distribution. For example, assume that we are willing to accept a 20% classification error and the classification problem is easy - for a randomly drawn sample, the validation error is below 20% with probability 95%. Now even though we shall tend to unfairly reject all cases where the validation error overestimates the generalisation error (thus biasing our estimate towards underestimation), the fraction of such cases is at most 5/95 = 5.3%. Consequently, the estimate is a safe overestimate with confidence 94.7%.

    Easy machine learning problem

    On the other hand, if the classification problem is difficult, say, the validation error is below 20% with probability 4%, then the situation is completely different. Since the fraction of cases where the validation fails is larger than the fraction of cases in which we accept the estimate, all cases where we accept the estimate might be incorrect. Consequently, the estimate is a safe overestimate with confidence 0%.

    Difficult machine learning problem

    As a final remark, note that if the validation set is small and the classification task is difficult, then the validation error falls below the acceptance threshold with significant probability. For the no-free-lunch Mordor case, the acceptance probability is 80% for a single point, and thus we underestimate the result in 80% of the possible cases. If the validation set is large, then such a small error is achieved with insignificant probability and the question whether we should or should not continue does not occur "at all". In other words, the smaller our acceptable error rate, the larger the validation set must be to compensate for the rejection bias.
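
    To make the effect tangible, here is a small Monte-Carlo sketch: a fixed classifier with a true error of 20% is "accepted" whenever its error on a 5-element validation set does not exceed 20%:

    import random
    random.seed(42)

    def validation_error(n=5, true_err=0.2):
      # error of a fixed classifier with true error true_err on n random points
      return sum(random.random() < true_err for _ in range(n)) / float(n)

    trials = 100000
    errors = [validation_error() for _ in range(trials)]
    accepted = [e for e in errors if e <= 0.2]   # only these runs are kept
    print 'accepted %.0f%% of the time' % (100.0 * len(accepted) / trials)
    print 'average accepted validation error: %.3f (true error is 0.200)' % \
        (sum(accepted) / len(accepted))
    # prints roughly: accepted 74% of the time, average error ~0.111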


  • Posted by Konstantin 12.02.2010

    Statistics is mainly about using the observed data to make conclusions about the "real state of affairs" underlying that data. The classical and most widespread technique for making these conclusions is based on significance testing. In simple terms, the idea of significance testing is to ask the question: "if the real state of affairs were X, how probable would it be for us to obtain the data D we are currently observing?". If the answer is "rather improbable" (e.g. p < 0.05), the common decision is to reject the proposition X in favor of the alternative "not X". Otherwise the researcher claims to "see no reason to reject X".

    The logic behind that reasoning seems quite solid at first glance, yet it is well known to be faulty. Naturally, the fact that the likelihood of the data P(D | X) is low need not imply that the underlying hypothesis is wrong - it might very well be the case that the data by itself is already rare enough to make this value low. The only correct way of making sound judgments is to consider the a-posteriori probability of the hypothesis P(X | D) instead. However, the latter can be quite inconvenient to compute. Besides, the wild popularity of significance tests and p-values seems to indicate that the issue is not at all that serious. Really, P(X | D) looks so similar to P(D | X), who cares?


    The book "What If There Were No Significance Tests?", which I stumbled upon recently while browsing a stray library shelf, makes it clear that this issue is not a joke. It is a collection of chapters written by renowned statisticians (most of whom, for some reason, work in the field of psychology) that quite convincingly condemns the widespread overuse of p-values and the related significance-based hypothesis testing in favor of other approaches. The main point is nailed quite precisely in the very first essay by Jacob Cohen, which I strongly advise you to read right now in order to get rid of any illusions you might still have regarding significance testing. And when you're done with that, you can continue reading this post.

    In the following I shall provide my personal summary of the marvelous "Member of Congress" example from J. Cohen's essay. It is the best illustration I know of why exactly it is dangerous to use significance tests blindly.

    Improbable does not mean impossible

    Consider the following situation. We have observed a person whom we know to be a Member of the US Congress. We are interested in testing the hypothesis that this person is an American citizen. To apply the significance testing methodology, we proceed by estimating the p-value:

    P(Congressman | American) ~ 535/300 000 000.

    This is clearly below the popular 0.05 threshold. As a result, we are forced to reject the null-hypothesis and conclude that the person is not an American citizen. Bummer.

    What is the problem here? Well, one thing is worth noting - while the probability for an American to be a congressman is low, it is even lower (precisely, zero) for a non-American. So maybe we would have been better off if we had expanded the procedure above to the following "maximum-likelihood"-style reasoning:

    Considering that the likelihood P(Congressman | American) is greater than the likelihood P(Congressman | non-American), we must conclude that the person in front of us is an American rather than not.

    Did we just solve the problem? Is it enough to consider "p-values both ways" to clear things up? No!

    Maximum likelihood does not work

    Let us now consider a reversed situation. We are faced with a person who, we know, is an American. We are interested in the hypothesis that he is a congressman. Compute the two likelihoods:

    P(American | Congressman) = 1

    P(American | not Congressman) ~ 300 000 000 / 6 700 000 000

    Observing that the first likelihood is greater than the second, we are forced to conclude that the person in front of us is indeed a congressman. Bummer, again!

    Only by multiplying the likelihood with the marginal probability P(Congressman) could we have obtained the correct decision. Which is to say, we should have been estimating the probabilities the other way around from the start.
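
    For concreteness, here is a tiny back-of-the-envelope sketch of the corrected reasoning (with the world population rounded to 6.7 billion, and treating "some observed person" as drawn from it):

    # Rough counts from the example above
    n_congress  = 535.0
    n_americans = 300e6
    n_world     = 6.7e9

    # The two likelihoods used in the "maximum likelihood" reasoning
    p_american_given_congressman = 1.0
    p_american_given_not         = n_americans / n_world        # ~0.045

    # Weighting each likelihood by the marginal probability fixes the decision
    prior_congressman = n_congress / n_world
    print p_american_given_congressman * prior_congressman      # ~8e-08
    print p_american_given_not * (1 - prior_congressman)        # ~0.045: "not a congressman" wins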

    To summarize, be wary of these pitfalls. I would not agree with the strong negative opinion of the authors of the book, though. After all, a lot of stuff is quite fruitfully done nowadays using p-values only. However, each time you use them, do it sensibly and keep in mind the following two aspects:

    1. If your p-value is low, can this be solely due to low marginal probability of the data? What is the "reversed" p-value? What is the power of your test?
    2. If you suspect that your hypotheses might be subject to highly non-uniform prior probabilities, do not use bare p-values. You must consider the prior!


  • Posted by Konstantin 21.11.2009

    In the recent weeks I had to give a few introductory lectures on supervised machine learning within Jaak's data mining course. I also gave the students some home assignments, and there it was especially tricky to find a simple, fun and discussable exercise that might help to form some intuition regarding the inherent difficulties of learning from data, such as overfitting, multiple testing, and the like. The following is what I came up with, and I find it a rather successful creation. It is remarkable that of the 80 students participating in the course only 4 or so came up with the correct answer 🙂

    The Riddle

    After the lecture on supervised machine learning at the University of Mordor, the teacher, Sauron, gave the students a dataset of the following form:

                1) ABACDAFXYZ    -> good
                2) CEGXEAELWMN   -> good
                3) NUWAB         -> bad
                        ...
               20) QRELAZMNPCXA  -> good

    The inputs were seemingly arbitrary strings of Latin characters: these were the terrible spells known only to Sauron and chosen by Sauron at random from his Great Book of Terrible Spells. The output was a binary variable, classifying each spell as good or bad. There were 20 observations in total in the dataset.

    The students were given a task: on the basis of this data, come up with a generic spell classifier. Sauron promised to test their result on the next lecture as follows: he will pick another random terrible spell from the book and require each student to make a prediction. The student whose prediction is wrong will have to jump down from the tower as a punishment.

    The first student, Aghargh, who happened to be slacking during the lecture, couldn't come up with any smart way to solve the task, so he ended up just counting the proportion of "good" and "bad" spells in the dataset. Having observed that 16 of the 20 spells were "good", he decided to predict "good" when asked.

    The second student, Bughrorc, chose a smarter way. Firstly, he converted each string into a vector of binary attributes — one attribute for each letter, having value "1" if that letter was present in the corresponding spell and "0" otherwise. He then split the data randomly into a training set (15 instances) and a test set (5 instances), and attempted training various classifiers using the MordorMiner software. After some experimenting he finally found five classifiers that could predict all of the training examples correctly. One of these also predicted all of the testing examples correctly. He decided to use this classifier on the forthcoming test.
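
    (For illustration, the featurization step might have looked roughly like this; the exact encoding is a guess:)

    import string

    def spell_to_features(spell):
      # one binary attribute per letter: 1 if the letter occurs in the spell
      return [int(letter in spell) for letter in string.ascii_uppercase]

    print spell_to_features('NUWAB')   # ones at the positions of A, B, N, U and W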

    On the day of testing, Sauron asked the students to classify the spell YOZAZA. Aghargh, as he had decided, provided the answer "good". Bughrorc's classifier analyzed the string and claimed that the answer should be "bad".

    Which of the students, do you think, might have a better chance of not jumping from the tower? Why? Can you quantify your answer?


  • Posted by Margus 25.10.2009

    That is, if we are lucky and everything is done according to the proper tradition of doing statistics with 1% confidence intervals. In practice, things are probably even worse (many use 5%, for instance), but this is what you would expect if everyone used proper methodology.

    Think about it...


  • Posted by Konstantin 23.10.2009

    The Alt+Tab key combination is perhaps one of the most well-known keyboard shortcuts in Windows, with the only competitors for the throne being Ctrl+C and Ctrl+V. And no matter whether you are used to alt-tabbing for work purposes or simply as a means of efficient undercover procrastination, unless you are a complete novice you probably have this skill at the level of basic instinct.

    Unfortunately, there are cases where the instinct becomes inconvenient. Namely, whenever you use an application that displays multiple documents in separate tabs (like Firefox or Notepad++) or in separate child windows (like R), you are expected to use Ctrl+Tab rather than Alt+Tab to switch among documents. However, most of the time switching among documents is subjectively perceived as nothing essentially different from switching among programs, hence the fact that Alt+Tab won't work normally in that case is highly unintuitive. The typical case with me is that I would accidentally use Alt+Tab attempting to switch between the editor and the console in R and unexpectedly find a completely different window in front of me, which is quite annoying.

    Although I am pretty sure I am not the only one to experience this kind of frustration, it is surprising that there does not seem to be any easily available solution to this trivial issue known to Google. Thus, considering that the whole problem can be solved to a fair extent by simply translating Alt keypresses into Ctrl in a smart way, I've made a smallish program that does exactly that.

    I'm quite happy with the result and can't help sharing it with you.

    Download: Binary, Source.


  • Posted by Konstantin 11.10.2009

    I've recently stumbled upon a simple observation, which does not seem to be common knowledge and yet looks quite enlightening. Namely: polynomials provide an excellent way of modeling data in an order-agnostic manner.

    The situations when you need to represent data in an order-agnostic way are actually fairly common.  Suppose that you are given a traditional sample x_1, x_2, \dots, x_n and are faced with a task of devising a generic function of the sample, which could only depend on the values in the sample, but not on the ordering of these values. Alternatively, you might need to prove that a given statistic is constant with respect to all permutations of the sample. Finally, you might simply wish to have a convenient mapping for your feature vectors that would lose the ordering information, but nothing else.

    The most common way of addressing this problem is sorting the sample and working with the order statistics x_{(1)}, x_{(2)}, \dots, x_{(n)} instead of the original values. This is not always convenient. Firstly, the mapping of the original sample to the corresponding vector of order statistics (i.e. the sorting operation) is quite complicated to express mathematically. Secondly, the condition that the vector of order statistics is always sorted is not very pleasant to work with. A much better idea is to represent your data as a polynomial of the form

        \[p_x(z) = (z+x_1)(z+x_2)\dots(z+x_n)\,.\]

    This will immediately provide you with a marvellous tool: two polynomials p_x and p_y are equal if and only if their multisets of roots are equal, which means, in our case, that the samples x_1,\dots,x_n and y_1,\dots,y_n are equal up to a reordering.

    Now in order to actually represent the polynomial we can either directly compute its coefficients

        \[p_x(z) = z^n + a_1z^{n-1} + \dots + a_n\,,\]

    or calculate its values at any n different points (e.g. at 0,1,\dots,n-1) - in any case we end up with the same amount of data as we had originally (i.e. n values), but the new representation is order-agnostic and has, arguably, much nicer properties than the order statistics vector.
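
    For illustration, here is a small sketch (using numpy purely for convenience; the idea itself does not depend on it) showing that two reorderings of the same sample produce identical representations:

    import numpy as np

    x = [3.0, 1.0, 2.0]
    y = [2.0, 3.0, 1.0]                     # the same sample, reordered

    def poly_representation(sample, points=None):
      # coefficients of p(z) = (z + x_1)...(z + x_n), i.e. a polynomial with roots -x_i
      coeffs = np.poly([-v for v in sample])
      if points is None:
        return coeffs
      return np.polyval(coeffs, points)     # alternatively: the values at the given points

    print poly_representation(x)            # identical for x and y: z^3 + 6z^2 + 11z + 6
    print poly_representation(y)
    print poly_representation(x, range(len(x)))   # values at 0, 1, ..., n-1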

    It is not without its own problems, of course. Firstly, it requires at least \Omega(n^2) time to compute. Secondly, not every polynomial will have n real-valued roots. And thirdly, the interpretation of the new "feature vector" is not necessarily intuitive or meaningful. Yet nonetheless, it's a trick to consider.


  • Posted by Swen 25.08.2009

    There seems to be a wide split of opinions between theoreticians and practitioners. Namely, some theoreticians define pseudorandomness in terms of complexity theory. As a result, the corresponding constructions are inherently slow and thus rejected by practitioners. This short blog post tries to address some common misconceptions.

    A random number generator by xkcd

    Formally, a pseudorandom generator is a deterministic function f:\mathcal{S}\to\mathcal{R} which takes a (relatively short) seed s and converts it into an element of \mathcal{R}. In most cases, we as computer scientists are interested in functions f:\{0,1\}^n\to\{0,1\}^\ell. However, there are other reasonable domains, too. For instance, when performing Monte-Carlo integration in the region [0,1], we would be interested in the functions of type f:\{0,1\}^n\to[0,1]^\ell.

    It is important to note that any function of type f:\mathcal{S}\to\mathcal{R} is a pseudorandom generator. However, not all of those functions are equally good for all purposes. There are two main properties that are used to discriminate between various pseudorandom functions. First, we can talk about the efficiency of a pseudorandom generator, i.e., how fast can we compute f(s). Second, we can talk about the statistical properties of the output f(s).

    Before going into the details, let us establish why anyone should use pseudorandom generators at all. There is an interesting controversy in computer science, as well as in statistics. Many algorithms assume that it is possible to use uniformly distributed numbers. For example, Monte-Carlo integration is based on the law of large numbers:

        \[\Pr\left[\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^N g(x_i)=\int_0^1 g(x)\,dx\right] = 1\]

    whenever x_1,\ldots,x_N are taken independently and uniformly from the range [0,1]. Similarly, one can provide theoretical bounds for the randomized version of quicksort, provided that we can draw elements uniformly from a set \{1,\ldots,K\}. However, computers are mostly made of deterministic parts and it turns out to be really difficult to automatically collect uniformly distributed bits. A design of a device that would solve this problem is far from trivial.
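
    As a toy illustration of this setting, here is a minimal Monte-Carlo integration sketch, with Python's built-in Mersenne Twister playing the role of the function f and an explicit seed s:

    import random
    random.seed(2009)                      # the seed s

    # Estimate the integral of g(x) = x^2 over [0,1] (whose true value is 1/3)
    N = 100000
    estimate = sum(random.random() ** 2 for _ in range(N)) / float(N)
    print estimate                         # ~0.333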

    The first official solution to this problem was posed in 1927 when a student of Karl Pearson published a table of random numbers. Later such tables were built by the RAND Corporation. That is, the function f was explicitly specified through a big table. Of course, such a table is useful only if it can be used as a "reliable" source of random numbers. In particular, the value of

        \[\frac{1}{N}\sum_{i=1}^N g(x_i)\]

    should be as close to \int_0^1 g(x)dx as possible. Since there are infinitely many possible functions g, we cannot actually check this condition. Instead, statisticians performed a series of tests on the table to verify that the sequence x_1,x_2,\ldots looks as close to random as possible. If we extend this concept properly, we get the modern quantification of pseudorandomness.

    Formally, a function f:\mathcal{S}\to\mathcal{R} is a (t,\varepsilon)-secure pseudorandom generator if for any t-time algorithm A:

        \[|\Pr [r\gets\mathcal{R}: A(r)=1]-\Pr[s\gets\mathcal{S}: A(f(s))=1]|\leq \varepsilon\]

    where the probabilities are taken over the uniform choices of s and r. In more intuitive terms, if you replace the randomness r with f(s) and your algorithm runs in time t, then the switch generates no easily observable discrepancies.

    As an illustrative example, consider a probabilistic combinatorial search algorithm B that runs in time t_1 and outputs a solution that can be verified in time t_2. In this case, the use of a (t_1+t_2,\varepsilon)-secure pseudorandom generator within B instead of pure randomness would decrease the probability of B finding a solution by at most \varepsilon. Indeed, otherwise we could construct a (t_1+t_2)-time algorithm C that outputs 1 if B finds the solution and 0 otherwise, and use it to discern the pseudorandom generator from true randomness with success at least \varepsilon. This would contradict the fact that we have a true (t_1+t_2,\varepsilon)-secure pseudorandom generator. A similar argument can also be made for the Monte-Carlo integration algorithm. Note that the parameters t and \varepsilon can be arbitrary real numbers. For a combinatorial search algorithm that takes 3 weeks of CPU time, you might use a (3\ \text{weeks}, 0.01)-secure pseudorandom generator.

    There are essentially two reasonable complaints about pseudorandom generators. First, since obtaining random bits is hard, we have not solved the problem completely, as we must still get the seed from somewhere. This is indeed a valid problem. For instance, the standard rand() function in C is known to fail the NIST statistical test suite and thus you might actually observe inconsistencies when using rand() directly or when generating a seed for a more complex function with rand(). The latter does not mean that rand() is not a pseudorandom generator, rather that its quality might be low for certain applications. As a complete solution, you would like to get a function f:\{1\}\to\mathcal{R}. For some tasks, such as Monte-Carlo integration of certain functions, the corresponding solution is known (see multidimensional integration and sparse grids).

    However, we still do not know how to do it in general, i.e., how to convert randomized algorithms into deterministic ones without significant losses in their performance. The corresponding area of research is known as derandomization. Second, one might argue that we could prove directly that by replacing r with f(s), the performance of the algorithm does not deteriorate much. However, this is not an easy task and it is usually not a valid option for mere mortals.

    To summarize, whenever you use a certain function f to extend a random seed in your implementation you actually have to believe that it does not affect performance, i.e., f is a pseudorandom generator with appropriate (t,\varepsilon) parameter values. Whether you should use f(x)=1, rand() in C, or something more elaborate, depends on the application.


  • Posted by Konstantin 11.08.2009

    Inspired by Swen


  • Posted by Konstantin 25.06.2009

    Every Spring thousands of aspiring undergraduates spend numerous diligent hours working on their final theses, suffering the stress of the deadlines and the tempting pressure of the nice weather. Once they are done writing, they defend their work in public. In the end, they are graded by their supervisors, opponents and an academic committee. The grading rules might vary among universities and faculties, but their essence can be abstracted as follows: the supervisor provides a mark representing his opinion, the opponent provides a mark representing his opinion, and the final grade is obtained by taking the average of the two.

    However, this process is not necessarily as simple as it looks at first glance. Although both the supervisor and the opponent are supposed to announce "the mark representing their opinion", they know the final grading scheme and might in fact, perhaps unconsciously, play a somewhat different game. Indeed, if the supervisor believes the student's work is worth a 4, he is in fact interested that the final grade be 4, no more and no less. The same holds for the opponent. Now let us assume that the difference between the proposed mark and the final mark measures the "penalty" for both the supervisor and the opponent. We could then regard the whole grading process as the following game.

    • The supervisor has his opinion about the proper mark s \in \{1,2,3,4,5\}. Similarly, the opponent has his opinion about the proper mark o \in \{1,2,3,4,5\}.
    • The supervisor announces the mark s' \in \{1,2,3,4,5\}, the opponent announces the mark o' \in \{1,2,3,4,5\}. The final grade f is computed as the average (s'+o')/2.
    • The supervisor receives penalty \mathrm{abs}(s-f) and the opponent receives penalty \mathrm{abs}(o-f).
    • Naturally, both parties are interested in minimizing the penalty.

    Now, assuming that both the supervisor and the opponent are indeed playing the above game, how would they act? Of course, this depends on their "true" opinions s and o, and whether they are knowledgeable of each other's opinion or not. For simplicity, let us first consider the case where s = o = 4 and both parties know it. In this case, we can represent the game in the traditional matrix form:

    s'\o'      1       2       3       4       5
      1        3      2.5      2      1.5      1
      2       2.5      2      1.5      1      0.5
      3        2      1.5      1      0.5      0
      4       1.5      1      0.5      0      0.5
      5        1      0.5      0      0.5      1

    Penalty for both the supervisor and the opponent for various choices of s' and o'

    There are three optimal solutions here — (s'=3,o'=5), (s'=4,o'=4) and (s'=5,o'=3), with the logical (4,4) choice being the "safest" for both parties in terms of the maximal possible penalty in case the opposing party presumed a different optimal solution.

    Now what if s and o are different? For example, s = 4 and o = 3. In this case, the payoffs of the supervisor and the opponent are not equal any more:

    s'\o'        1            2            3            4            5
      1       3 | 2      2.5 | 1.5      2 | 1      1.5 | 0.5      1 | 0
      2     2.5 | 1.5      2 | 1      1.5 | 0.5      1 | 0      0.5 | 0.5
      3       2 | 1      1.5 | 0.5      1 | 0      0.5 | 0.5      0 | 1
      4     1.5 | 0.5      1 | 0      0.5 | 0.5      0 | 1      0.5 | 1.5
      5       1 | 0      0.5 | 0.5      0 | 1      0.5 | 1.5      1 | 2

    Penalties for the supervisor and the opponent (separated by |) for various choices of s' and o'

    It turns out that the only pure-strategy Nash equilibrium of this game is not the seemingly logical (4,3) choice. Instead, it is (5,1) (as the small check below confirms): the equilibrium strategy dictates the supervisor to suggest the mark 5 (which exceeds his opinion), to which the opponent should respond with a drastically low evaluation of 1. Looks like a rather typical thesis defence scenario, doesn't it? 🙂
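
    Here is a small brute-force sketch that recomputes the penalty matrix and searches for pure-strategy equilibria:

    s, o = 4, 3                    # true opinions of the supervisor and the opponent
    marks = range(1, 6)

    def penalties(s1, o1):
      f = (s1 + o1) / 2.0
      return abs(s - f), abs(o - f)

    def is_equilibrium(s1, o1):
      # neither party can reduce its own penalty by deviating unilaterally
      p_s, p_o = penalties(s1, o1)
      return (all(penalties(a, o1)[0] >= p_s for a in marks) and
              all(penalties(s1, b)[1] >= p_o for b in marks))

    print [(s1, o1) for s1 in marks for o1 in marks if is_equilibrium(s1, o1)]
    # prints [(5, 1)]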

    Finally, when the true s and o are not known to the opponent and the supervisor respectively, the analysis becomes more complicated, because the players now have to assume something about each other. But the conclusion stays the same — whenever the supervisor has the slightest doubt that the opponent might be willing to suggest a lower mark, he will have to pick the overestimation strategy. And the opponent will then naturally tend to drastically underestimate.

    By this weird post I actually wanted to express sincere congratulations to those who have successfully defended this year, in particular my diligent bachelor students Anastasia, Karl and Riivo.
