When students write exams, they want to know their score as fast as possible. However, the system that is supposed to deliver this information to the students is usually slow, for a lot of reasons that I will not go into. Where I work, it has become quite common to publish a PDF on the course website that lists each student ID together with the number of points scored. Since student IDs are usually anonymous, most people don't see this as a problem. However, the more widespread this practice becomes, the less anonymous a student ID actually is: once enough of these lists are public, the data could in principle be used to link a student to his or her student ID just by knowing which courses he or she attended. I propose a much better solution. I did not come up with it myself, by the way; it is due to a brilliant colleague of mine, but I still wanted to share it with the world. (more…)


Let $R$ be a ring and $f\in R[X]$ a polynomial with infinitely many zeros in $R$. You might think that $f$ must be the zero polynomial, but this can fail when $R$ is not commutative, as this example over the quaternions shows. What if $R$ is commutative? I didn't find a counterexample online, but it's easy to give one, and I found it somewhat enlightening. Consider the field $\mathbb{F}_2=\{0,1\}$ with two elements. The polynomial $f=X^2-X\in\mathbb{F}_2[X]$ has two zeros, namely $0$ and $1$. Now consider the ring $R=\mathbb{F}_2^{\mathbb{N}}$ of all functions $\mathbb{N}\to\mathbb{F}_2$, with componentwise addition and multiplication. We can think of elements of $R$ as sequences $(0,1,1,0,1,\ldots)$. Now clearly, any such sequence is also a zero of $f$. So $f$ actually vanishes everywhere on $R$, which is infinite, yet $f$ is not the zero polynomial. The statement is of course true if $R$ is a commutative integral domain.
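Spelled out, the verification is a one-liner: evaluation in $R$ is componentwise, so for any $a=(a_0,a_1,a_2,\ldots)\in R$ we get \[ f(a)=a^2-a=(a_0^2-a_0,\ a_1^2-a_1,\ a_2^2-a_2,\ \ldots)=(0,0,0,\ldots), \] because every element of $\mathbb{F}_2$ satisfies $x^2=x$.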


It has come to my attention that mathematicians don't like to ask questions. The main reason seems to be that they are afraid of looking stupid. This has annoyed me for quite some time, and it has several severe disadvantages for the field of mathematics:

* Students don't ask questions in class because it is very important to appear as though you have completely grasped and understood everything immediately. As a result, students learn less and get educated more slowly. In the end, it won't help you to only pretend you understood the math.
* Many fellow PhD students do not use their real names on sites like math.se and mathoverflow because they fear intellectual persecution if people were to associate their names with completely stupid questions. As a result, it's harder to get in touch and find out who is working on similar problems. Anonymity is fine, and I am a big fan of it. However, if it's out of shame for intellectual curiosity that you remain anonymous, then something in that community needs changing.

I believe that these are symptoms of a severe sickness that has spread through mathematics and is causing a chronic occlusion of our metaphorical vascular system of information flow. (more…)


Hello, fellow applied mathematicians and computer scientists, hello also to all the brave physicists who use the [arXiv](http://www.arxiv.org/). Did you know that you can [publish source code and other ancillary files on the arXiv](http://arxiv.org/help/ancillary_files), along with your preprint? If you didn't, this must be great news for you. However, if you ever tried to actually do this, you *might* have been just as confused as I was. In fact, it's quite likely that you were, because once this blog post has vanished from the front page, I am pretty sure that a Google search is what led you here.

> Ancillary files are included with an arXiv submission by placing them in a directory anc at the root of the submission package.

If you are a novice at uploading files to the arXiv, like me, this might be confusing. What is the *submission package*? I only ever submitted a single $\LaTeX$ file! Well, let me put it straight for you.

- In the directory with your .tex file(s), make a directory called anc.
- Place all your source code and stuff in that directory.
- Make a zip file containing all your LaTeX sources and the folder anc.
- Upload that zip file to the arXiv.

Trust me - everything will be fine.


My good friend and colleague Christian Ikenmeyer and I wrote this cute preprint about polynomials and how they can be written as the determinant of a matrix whose entries are zero, one, and indeterminates. Go ahead and read it if you know even just a little math, it's quite straightforward. The algorithm described in Section 3 has been implemented, and you can download the code from my website at the TU Berlin. Compilation instructions are in ptest.c, but you will need to get nauty to perform the entire computerized proof.
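To give a flavor of the kind of representation meant here (a toy example of my own, not one from the preprint): the polynomial $xz-y$ can be written as \[ xz-y=\det\begin{pmatrix} x & y \\ 1 & z \end{pmatrix}, \] where every entry is an indeterminate or the constant one (zero entries are allowed as well).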


I recently implemented an algorithm that has to perform checks on all subsets of some large set. A subset of an $n$-element set can be understood as a binary string of length $n$ where bit $i$ is set if and only if the $i$-th element is in the subset. During my search for code to enumerate such bitstrings, I found the greatest page on the entire internet. If anyone can explain to me how computing the next bit permutation (the last version) works, please do.
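While I still don't fully understand why it works, here is a self-contained sketch (my own transcription, requiring gcc or clang for __builtin_ctz, and assuming $k\ge 1$) of how the trick can be used to enumerate all $k$-element subsets of an $n$-element set:

```c
#include <stdio.h>

/* Sketch: enumerate all k-element subsets of {0,...,n-1} as bitmasks in
 * increasing order, using the "next bit permutation" update.  This is my
 * transcription of the intrinsic-based version, not an authoritative copy. */
int main(void) {
    const unsigned n = 5, k = 3;           /* assumes 1 <= k <= n < 32 */
    unsigned v = (1u << k) - 1;            /* smallest mask with k bits set */
    while (v < (1u << n)) {
        printf("0x%02x\n", v);             /* run the actual subset check here */
        unsigned t = v | (v - 1);          /* set all bits below the lowest set bit */
        /* t+1 replaces the lowest block of ones by a single bit one position
         * further left; the shifted mask restores the surplus ones at the
         * bottom of the word, so the number of set bits stays k. */
        v = (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
    }
    return 0;
}
```

My best guess at what happens: t = v | (v-1) fills in the zeros below the lowest set bit, t+1 then clears the lowest block of ones and sets the next higher bit, and the shifted mask moves the ones that were lost back down to the least significant positions; but a cleaner explanation is still welcome.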


Even though this does not really constitute a post with substantial content, this is a blog after all, so I thought I'd let the 8 people who read it know that for the next six weeks, I will be attending a special semester on Algorithms and Complexity in Algebraic Geometry at the Simons Institute for the Theory of Computing in Berkeley. So. If you happen to be in the Bay Area, give me a shout.


In the past weeks I've been visiting quite a number of job fairs, networking events, training sessions on how to hunt for jobs, and the like. I certainly learned a lot, albeit mostly things that are obvious after a moment of contemplation or come to you by common sense. Yet a simple reminder and a bit of practice are surely beneficial. In high demand are all those abstract skills we have all been told one too many times to include on our CVs (complex and analytical thinking ...) and programming skills (although many employers seem happy to train on the job). What somehow fell off my radar, but came up more than once, is a basic knowledge of statistics. This leaves us in a bit of a pickle, because most available material seems to fall into one of the two following categories:

1. Statistics for statisticians: something we all have very actively decided against studying. All the proofs and constructions and endless excursions into theoretical concepts that haven't yet, and maybe never will, make their way into practice. Scientific exploration that is fun if and only if you have a thing for statistics.
2. Statistics for undergrads who don't even know if they need it, lectured by junior professors or assistants who would rather spend their time on research than on preparation: what you get is an incoherent bunch of powerpoint slides that live more through examples than concise explanations and miss out on the most important part, namely a self-contained overview of the available methods.

To the rescue comes Arnaud Delorme, a computational neuroscientist at the University of California, San Diego. His paper on statistical methods does basically everything you want in 23 pages: a quick recap of the necessary definitions, a clean-cut overview of the most common methods, and a short, concise explanation of each of them with just the right amount of theoretical background and a short demonstrating example. Sure, towards the end it can be a bit tiresome to go through method after method, and you won't want to read it front to back in one go. Of course I also can't vouch for its exhaustiveness, but it seems like we have a winner when it comes to the ratio of usefulness to required time investment.


Let $G$ be an affine, connected, reductive group and $X$ a $G$-module. Choose a maximal torus $T\subseteq G$, a Borel subgroup $B\subseteq G$ containing $T$, and let $U$ be the unipotent radical of $B$. Denote by $\mathbb{X}$ the character group of $T$. Let $\Lambda\subseteq\mathbb{X}$ be the set of dominant weights of $G$ with respect to these choices. We can decompose the graded coordinate ring $\mathbb{C}[X]=\bigoplus_{\lambda\in\Lambda} V_{(\lambda)}$ into its isotypic components $V_{(\lambda)}$ of weight $\lambda$. Let \[ \Lambda_X=\{ \lambda\in\Lambda \mid V_{(\lambda)}\ne\{0\}\} \] be the set of weights that occur in $\mathbb{C}[X]$. Let $V_{(\lambda)}\cong V_\lambda^{\oplus n_\lambda}$, where $V_\lambda$ is the irreducible module of highest weight $\lambda$. Each $V_\lambda$ has a highest weight vector which is unique up to scaling; let $f_{\lambda 1}, \ldots, f_{\lambda n_\lambda}\in V_{(\lambda)}$ be linearly independent highest weight vectors.

Claim. *For each $1\le k\le n_\lambda$, if the function $f:=f_{\lambda k}\in\mathbb{C}[X]$ is reducible, then there exist weights $\lambda_1,\ldots,\lambda_r\in\Lambda_X$ such that $\lambda$ is an $\mathbb N$-linear combination of the $\lambda_i$.*

Admittedly, about a year ago this statement was completely unclear to me. However, it's actually not that hard to see, and I felt like sharing my proof. (more…)
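A toy illustration of the claim (my own choice of example, with the conventions as above): take $G=\mathrm{SL}_2$ and let $X=\mathbb{C}^2$ be its standard module, so that $\mathbb{C}[X]=\mathbb{C}[x,y]$ and the degree-$d$ part is the irreducible module of highest weight $d\omega$, where $\omega$ is the fundamental weight; for a suitable choice of $T$ and $B$, its highest weight vector is $x^d$. For $d\ge 2$ this vector is reducible, $x^d=x\cdot x^{d-1}$, and indeed $d\omega=\omega+\cdots+\omega$ is an $\mathbb{N}$-linear combination of weights in $\Lambda_X$, since $\omega\in\Lambda_X$.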


For the group $G=\mathrm{GL}_n(\mathbb{K})$ of invertible matrices over any field, there is the well-known determinant character $\det:G\to\mathbb{K}^\times$ which, well, maps any matrix to its determinant. It has the property that \[ [G,G]=\mathrm{SL}_n(\mathbb{K})=\{ g\in G \mid \det(g)=1 \}, \] where $[G,G]$ denotes the derived subgroup of $G$. In more geometric terms, $[G,G]$ is the vanishing locus of the regular function $\det-1$ on $G$. Because I find it rather cute, I will show that you can find a similar function for every linear reductive group. (more…)


Last week I tried to prove the Master Theorem for some students. That went slightly wrong, so I have put together a corrected version of the proof. I do wonder, though, how general one should actually make this. In principle, for a running-time function $T:\mathbb N\to\mathbb N$, one could study recurrences of the form $T = \alpha \circ T \circ \beta$, where in my case $\beta(n)=n/b$ and $\alpha(n)=a\cdot n + f(n)$. Unfortunately, I don't have time to think about this any further.
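For reference, the classical Master Theorem concerns recurrences of the shape \[ T(n)=a\cdot T(n/b)+f(n), \] and that is the special case the abstract form above is aiming at.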


I asked a question on mathoverflow, and while researching I stumbled upon a theorem which I did not know before. I consider it quite powerful, and the proof is short and sweet; I copied it from this expository paper by Bryden Cais:

Theorem. Let $X$ be a normal, separated, Noetherian scheme and $U\subseteq X$ a nonempty affine open subset. Then, $X\setminus U$ is pure of codimension one.

Proof. We need to show that for each generic point $y\in Y=X\setminus U$, the local ring $\mathcal{O}_{X,y}$ has dimension one. Let $y$ be the generic point of a component of $Y$. Denote by $\mathfrak{m}$ the maximal ideal of the local ring $A:=\mathcal{O}_{X,y}$ and let $S:=\mathrm{Spec}(A)$. The inclusion morphism $f:S\to X$ is affine because $X$ is separated (Lemma 1). Thus, $V:=f^{-1}(U)=S\setminus\{ \mathfrak{m} \}$ is an affine, open subscheme of $S$. The morphism on global sections $A=\mathcal{O}_S(S)\to\mathcal{O}_{V}(V)$, corresponding to the inclusion $V\to S$, is therefore injective. It cannot be surjective: otherwise, since $V$ is affine, the open immersion $V\to S$ would be an isomorphism, contradicting $V\ne S$. Pick some $a\in\mathcal{O}_{V}(V)\setminus A$. If we now had $\dim(A)>1$, then every prime ideal $\mathfrak{p}\subseteq A$ of height one would satisfy $\mathfrak{p}\ne\mathfrak{m}$, i.e. it would correspond to a point contained in $V$. Consequently, we would have $a\in \mathcal{O}_{V,\mathfrak{p}} = A_{\mathfrak{p}}$ for every such $\mathfrak{p}$. Because $A$ is a normal Noetherian domain, it is the intersection of all its localizations at prime ideals of height one; this is Corollary 11.4 in Eisenbud's book. This yields the contradiction that $a\in A$.

Lemma 1. If $f:X\to Y$ is a morphism of schemes with $Y$ separated and $X$ affine, then $f$ is an affine morphism.

Proof. This is Exercise 3.3.6 in Qing Liu's book. Let $V\subseteq Y$ be an open affine subset. Proposition 3.9(f) in the same book tells you that there is a closed immersion $f^{-1}(V)\cong X\times_Y V\to X\times V$, so $f^{-1}(V)$ is isomorphic to a closed subscheme of an affine scheme, hence affine.


I need to compute the meet of two partitions $\lambda=(\lambda_1,\ldots,\lambda_r)$ and $\mu=(\mu_1,\ldots,\mu_s)$ with respect to the dominance order. It's not a difficult thing to do and might be considered an oddity by most of you, but that doesn't bother me. Do you want to know more?
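For the curious, here is a minimal sketch of one way to do it (the code and the name dominance_meet are mine, just for illustration; both inputs are assumed to be partitions of the same number): take the componentwise minimum of the two sequences of partial sums and read off consecutive differences. The minimum of two concave sequences is again concave, so the differences are weakly decreasing and hence form a partition, and by construction it is the greatest lower bound in the dominance order.

```c
#include <stdio.h>

/* Sketch: meet of two partitions of the same number n in the dominance
 * order, via componentwise minima of partial sums.  Code and naming are
 * only illustrative. */
static void dominance_meet(const int *lam, int r, const int *mu, int s,
                           int *out, int *out_len) {
    int len = r > s ? r : s;
    int pl = 0, pm = 0, prev = 0, k = 0;
    for (int i = 0; i < len; ++i) {
        pl += (i < r) ? lam[i] : 0;        /* partial sums of lambda */
        pm += (i < s) ? mu[i] : 0;         /* partial sums of mu     */
        int p = pl < pm ? pl : pm;         /* componentwise minimum  */
        if (p - prev > 0) out[k++] = p - prev;
        prev = p;
    }
    *out_len = k;
}

int main(void) {
    int lam[] = {4, 1, 1}, mu[] = {3, 3};  /* two partitions of 6 */
    int meet[8], len;
    dominance_meet(lam, 3, mu, 2, meet, &len);
    for (int i = 0; i < len; ++i) printf("%d ", meet[i]);
    printf("\n");                          /* prints: 3 2 1 */
    return 0;
}
```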