Quadratic forms over finite fields (of characteristic other than 2)

An n-dimensional quadratic form over a field K is a polynomial in n variables with coefficients in K that is homogeneous of degree 2. More abstractly, a quadratic form on a vector space V is a function V\rightarrow K that comes from a homogeneous degree-2 polynomial in the linear functions on V.

I’ll start off with some comments on why quadratic forms are a natural type of object to want to classify. Then I’ll review well-known classifications of the quadratic forms over \mathbb{R} and \mathbb{C}. Then I’ll give the classification of quadratic forms over finite fields of odd characteristic. Then I’ll go over some counting problems related to quadratic forms over finite fields, which might get tedious. Finally, and perhaps most interestingly, I’ll tell you how \mathbb{R} and \mathbb{C} are secretly related to finite fields, and what this means for quadratic forms over each of them.

I’ll actually just focus on nondegenerate quadratic forms. A quadratic form Q is degenerate if there is a nonzero vector \vec{x} that is “Q-orthogonal” to every vector, meaning that Q(\vec{x}+\vec{y})=Q(\vec{y}) for every vector \vec{y}. In this case, we can change coordinates so that \vec{x} is a standard basis vector, and in these coordinates, Q doesn’t use one of the variables, so the same polynomial gives us an n-1-dimensional quadratic form. We can iterate this until we get a nondegenerate quadratic form, so n-dimensional quadratic forms are in some sense equivalent to at-most-n-dimensional nondegenerate quadratic forms (in terms of abstract vector spaces, a quadratic form on a vector space is a nondegenerate quadratic form on some quotient of it). Because of this, the interesting part of understanding quadratic forms is mostly just understanding the nondegenerate ones.

An important tool will be the fact that quadratic forms are essentially the same thing as symmetric bilinear forms. An n-dimensional symmetric bilinear form over a field K is a function \left<-,-\right>:K^n\times K^n\rightarrow K that is symmetric (i.e. \left<\vec{x},\vec{y}\right>=\left<\vec{y},\vec{x}\right>), and linear in each component (i.e., for any \vec{x}, the function \vec{y}\mapsto\left<\vec{x},\vec{y}\right> is linear). From any symmetric bilinear form \left<-,-\right>, we can get a quadratic form Q by Q(\vec{x})=\left<\vec{x},\vec{x}\right>, and from any quadratic form Q, that symmetric bilinear form can be recovered by \left<\vec{x},\vec{y}\right>=\frac{1}{2}\left(Q(\vec{x}+\vec{y})-Q(\vec{x})-Q(\vec{y})\right). This means that there’s a natural bijection between quadratic forms and symmetric bilinear forms, so I could have just as easily said this was a post about symmetric bilinear forms over finite fields (of characteristic other than 2). Throughout this post, whenever Q denotes some quadratic form, I’ll use \left<-,-\right>_Q to denote the corresponding symmetric bilinear form. Of course, this bijection involves dividing by 2, so it does not work in characteristic 2, and this is related to why the situation is much more complicated over fields of characteristic 2.

In coordinates, quadratic forms and symmetric bilinear forms can both be represented by symmetric matrices. A symmetric matrix A represents the quadratic form Q(\vec{x}):=\vec{x}^TA\vec{x}, and the corresponding symmetric bilinear form \left<\vec{x},\vec{y}\right>_Q:=\vec{x}^TA\vec{y}. From the symmetric bilinear form, you can extract the matrix representing it: A_{i,j}=\left<\vec{e}_i,\vec{e}_j\right>_Q, where \vec{e}_1,\ldots,\vec{e}_n are the standard basis vectors.
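This correspondence is easy to spot-check numerically. Here’s a quick sketch over a small prime field (all names here are illustrative; note that the division by 2 becomes multiplication by the inverse of 2 mod p):

```python
p = 7  # a small odd prime, so 2 is invertible mod p

def Q(A, x):
    """Quadratic form x^T A x mod p, for a symmetric matrix A."""
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n)) % p

def bilinear(A, x, y):
    """Recover <x,y>_Q by polarization: (Q(x+y) - Q(x) - Q(y)) / 2."""
    inv2 = pow(2, -1, p)  # inverse of 2 mod p
    s = [(a + b) % p for a, b in zip(x, y)]
    return (Q(A, s) - Q(A, x) - Q(A, y)) * inv2 % p

A = [[1, 3], [3, 5]]  # an arbitrary symmetric matrix over F_7
# Polarization recovers the matrix entries: A[i][j] = <e_i, e_j>_Q
assert bilinear(A, [1, 0], [0, 1]) == A[0][1]
assert bilinear(A, [1, 0], [1, 0]) == A[0][0]
```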

Two quadratic forms are equivalent if you can turn one into the other with an invertible linear transformation. That is, Q_1 and Q_2 are equivalent if there’s an invertible linear transformation \varphi such that Q_2(\vec{x})=Q_1(\varphi(\vec{x})) for every vector \vec{x}. In coordinates, quadratic forms represented by matrices A and B are equivalent if there’s an invertible matrix M such that B=M^TAM. When I said earlier that I would classify the quadratic forms over \mathbb{R}, \mathbb{C}, and finite fields of odd characteristic, what I meant is classify nondegenerate quadratic forms up to this notion of equivalence.

Why quadratic forms?

Quadratic forms are a particularly natural type of object to try to classify. When I say “equivalence” of some sort of linear algebraic object defined on a vector space K^n (over some field K), one way of describing what this means is that \text{GL}_n(K) naturally acts on these objects, and two such objects are equivalent iff some change of basis brings one to the other. Equivalence classes of these objects are thus the orbits of the action of \text{GL}_n(K).

\text{GL}_n(K) is n^2-dimensional, so if we have some m-dimensional space of linear algebraic objects defined over K^n, then the space of equivalence classes of these objects is at least m-n^2-dimensional. So, if m>n^2, then there will be infinitely many equivalence classes if K is infinite, and even if K is finite, the number of equivalence classes will grow in proportion to some power of |K|.

For example, for large n, the space of cubic forms, or symmetric trilinear forms, on K^n is approximately \frac{1}{6}n^3-dimensional, which is much greater than n^2, so the space of equivalence classes of symmetric trilinear forms is also approximately \frac{1}{6}n^3-dimensional, and counting cubic forms up to equivalence rather than individually barely cuts down the number of them at all, leaving us with no hope of a reasonable classification. The same issue kills pretty much any attempt to classify any type of tensor of rank at least 3.

The space of (not necessarily symmetric) bilinear forms is n^2-dimensional. But bilinear forms can be invariant under certain changes of coordinates, so the space of bilinear forms up to equivalence is still positive-dimensional. It isn’t enormously high-dimensional, like the space of cubic forms up to equivalence, so this doesn’t necessarily rule out any sort of reasonable classification (in fact, linear maps, which are also n^2-dimensional, do have a reasonable classification. There’s the Jordan normal form for linear maps defined over algebraically closed fields, and the Frobenius normal form for arbitrary fields). But it does mean that any such classification isn’t going to be finitistic, like Sylvester’s law of inertia is.

Classifying objects is also not too interesting if there’s very few of them for very straightforward reasons, and this tends to happen if there’s far fewer than n^2 of them, so that it’s too easy to change one to another by change of coordinates. For instance, the space of vectors is n-dimensional, and there’s only two vectors up to equivalence: the zero vector, and the nonzero vectors.

To get to the happy medium, where there’s a nontrivial zero-dimensional space of things up to equivalence, you want to start with some space of objects that’s just a bit less than n^2-dimensional. Besides quadratic forms / symmetric bilinear forms, the only other obvious candidate is antisymmetric bilinear forms. But it turns out these are already fairly rigid. The argument about classifying symmetric bilinear forms reducing to classifying the nondegenerate ones applies to antisymmetric bilinear forms as well, and, up to equivalence, there’s not that many nondegenerate antisymmetric bilinear forms, also known as symplectic forms: Over any field, there’s only one of them in each even dimension, and none at all in odd dimensions.

Classification of quadratic forms over certain infinite fields

Let’s start with \mathbb{R} (though the following theorem and proof work just as well for any ordered field containing the square root of every positive element). We’ll use this same proof technique for other fields later.

Theorem (Sylvester’s law of inertia): For an n-dimensional nondegenerate real quadratic form Q, let n_+ denote the maximum dimension of a subspace on which Q takes only positive values, and let n_- denote the maximum dimension of a subspace on which Q takes only negative values. Then n_+ + n_- = n, and Q is equivalent to the quadratic form represented by the diagonal matrix whose first n_+ diagonal entries are 1 and whose last n_- diagonal entries are -1. All pairs of nonnegative integers (n_+,n_-) summing to n are possible, and (n_+,n_-) completely classifies nondegenerate real quadratic forms up to equivalence. Thus, up to equivalence, there are exactly n+1 nondegenerate n-dimensional real quadratic forms.

Proof: First we’ll show that every quadratic form can be diagonalized. We’ll use induction to build a basis \vec{b}_1,\ldots,\vec{b}_n in which a given quadratic form Q is diagonal. For the base case, it is trivially true that every 0-dimensional quadratic form can be diagonalized (and 1-dimensional quadratic forms are just as straightforwardly diagonalizable, if you prefer that as the base case). Now suppose every n-1-dimensional quadratic form is diagonalizable, and let Q be an n-dimensional quadratic form. Let \vec{b}_1 be any vector with Q(\vec{b}_1)\neq0 (if this is not possible, then Q=0, and is represented by the zero matrix, which is diagonal). Then \vec{b}_1^\perp := \left\{\vec{x}\mid\left<\vec{b}_1,\vec{x}\right>_Q=0\right\} is an n-1-dimensional subspace not containing \vec{b}_1. Using the induction hypothesis, let \vec{b}_2,\ldots,\vec{b}_n be a basis for \vec{b}_1^\perp in which Q\restriction_{\vec{b}_1^\perp} is diagonal. Then \vec{b}_1,\ldots,\vec{b}_n is a basis in which Q is diagonal (its matrix has Q(\vec{b}_1) in the upper left entry, the diagonal matrix representing Q\restriction_{\vec{b}_1^\perp} in the basis \vec{b}_2,\ldots,\vec{b}_n in the submatrix that excludes the first row and column, and the fact that \left<\vec{b}_1,\vec{b}_i\right>_Q=0 for i\neq1 means that the rest of the first row and column are zero).
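The inductive argument above translates directly into an algorithm: pick a basis vector of nonzero Q-value (fixing one up if the whole diagonal vanishes, which uses characteristic not 2), clear out its row and column, and continue on the orthogonal complement. Here’s a sketch over a small prime field, with all names made up for illustration:

```python
p = 11  # an odd prime; all arithmetic is in F_p

def transpose(A):
    return [list(row) for row in zip(*A)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def diagonalize(A):
    """Return M with M^T A M diagonal, for A symmetric over F_p."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    M = [[int(i == j) for j in range(n)] for i in range(n)]  # columns are the b_i
    for k in range(n):
        if A[k][k] == 0:
            # try to swap in a later basis vector with nonzero Q-value
            for j in range(k + 1, n):
                if A[j][j] != 0:
                    for X in (A, M):
                        for row in X:
                            row[k], row[j] = row[j], row[k]
                    A[k], A[j] = A[j], A[k]
                    break
            else:
                # all remaining diagonal entries vanish; if some <b_k,b_j> != 0,
                # replace b_k by b_k + b_j, so Q(b_k) = 2<b_k,b_j> != 0
                for j in range(k + 1, n):
                    if A[k][j] != 0:
                        for X in (A, M):
                            for row in X:
                                row[k] = (row[k] + row[j]) % p
                        A[k] = [(a + b) % p for a, b in zip(A[k], A[j])]
                        break
        if A[k][k] == 0:
            continue  # b_k is Q-orthogonal to everything remaining
        inv = pow(A[k][k], -1, p)
        for j in range(k + 1, n):
            f = A[k][j] * inv % p
            if f:
                # b_j := b_j - f*b_k makes <b_k, b_j>_Q = 0
                for X in (A, M):
                    for row in X:
                        row[j] = (row[j] - f * row[k]) % p
                A[j] = [(a - f * b) % p for a, b in zip(A[j], A[k])]
    return M

A0 = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]  # symmetric, with zero diagonal
M = diagonalize(A0)
D = mat_mul(mat_mul(transpose(M), A0), M)
assert all(D[i][j] == 0 for i in range(3) for j in range(3) if i != j)
assert all(D[i][i] != 0 for i in range(3))  # A0 was nondegenerate
```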

Now that we’ve diagonalized Q, we can easily make those diagonal entries be \pm1, assuming Q is nondegenerate (otherwise, some of them will be 0). If Q(\vec{b}_i)>0, then Q\left(\frac{\vec{b}_i}{\sqrt{Q(\vec{b}_i)}}\right)=1. If Q(\vec{b}_i)<0, then Q\left(\frac{\vec{b}_i}{\sqrt{-Q(\vec{b}_i)}}\right)=-1. Either way, we can replace \vec{b}_i with some scalar multiple of it such that Q(\vec{b}_i)=\pm1.

The ordering of the diagonal entries doesn’t matter because we can permute the basis vectors so that all the +1’s come before all the -1’s. So all that remains to show is that any two quadratic forms whose \pm1 diagonal matrices have different numbers of +1’s are not equivalent. Let n_+ be the number of +1’s and let n_- be the number of -1’s, so that our diagonalized quadratic form can be written as x_1^2+\ldots+x_{n_+}^2-y_1^2-\ldots-y_{n_-}^2. There is an n_+-dimensional subspace P on which it only takes positive values (the subspace defined by y_1=\ldots=y_{n_-}=0), and an n_--dimensional subspace N on which it only takes negative values (the subspace defined by x_1=\ldots=x_{n_+}=0). It can’t have an (n_++1)-dimensional subspace on which it takes only positive values, because such a subspace would have to intersect N, and similarly, it can’t have an (n_-+1)-dimensional subspace on which it takes only negative values, because such a subspace would have to intersect P. Thus the number of +1’s and -1’s are the maximum dimensions of subspaces on which the quadratic form takes only positive values and only negative values, respectively, and hence quadratic forms with different numbers of +1’s are not equivalent. \square

Over \mathbb{C}, things are simpler.

Theorem: All n-dimensional nondegenerate complex quadratic forms are equivalent to each other.

Proof: As in the real case, we can diagonalize any quadratic form; the proof of this did not depend on the field. But in \mathbb{C}, every number has a square root, so each basis vector \vec{b}_i can be scaled by \frac{1}{\sqrt{Q(\vec{b}_i)}}, to get a vector that Q sends to 1 (of course, Q(\vec{b}_i) has two square roots, but you can just pick one of them arbitrarily). Thus every nondegenerate complex quadratic form is equivalent to the quadratic form defined by the identity matrix. \square

Note that this proof works just as well over any quadratically closed field not of characteristic 2.

Over other fields, things get a bit complicated. One class of fields you might want to consider is the p-adic numbers, and there is a known classification (called the Hasse invariant) of quadratic forms over \mathbb{Q}_p (and its finite extensions), but it involves Brauer groups, for some reason.

Over the rationals, things are even worse. The best result about classifying rational quadratic forms we have is the Hasse-Minkowski theorem: Over any finite extension of \mathbb{Q}, two quadratic forms are equivalent iff they are equivalent over every non-discrete completion of the field (so, they are equivalent over \mathbb{Q} itself iff they are equivalent over \mathbb{R} and over \mathbb{Q}_p for every prime p). An interesting result, but not one that lets you easily list out equivalence classes of rational quadratic forms.

But over finite fields, things go back to being relatively straightforward.

Classification of nondegenerate quadratic forms over finite fields of odd characteristic

Let q be a power of an odd prime, and let \mathbb{F}_q denote the field with q elements.

\mathbb{F}_q has a certain feature in common with \mathbb{R}: exactly half of its nonzero elements are squares. For \mathbb{F}_q, you can show this with a counting argument: every nonzero square is the square of exactly two nonzero elements, and there are finitely many nonzero elements. For \mathbb{R}, it might not be obvious how to formally make sense of the claim that exactly half of nonzero reals are squares (i.e. positive). One way is to notice that the positive reals and negative reals are isomorphic as topological spaces. But what matters in this context is that the quotient group \mathbb{R}^\times/(\mathbb{R}^\times)^2 has order 2, just like \mathbb{F}_q^\times/(\mathbb{F}_q^\times)^2 (here K^\times means the group of units of a field K). Now let’s see how this relates to quadratic forms.
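The counting argument for \mathbb{F}_q is easy to verify by brute force for small primes (a quick illustrative check):

```python
# For each odd prime q, the nonzero squares in F_q number exactly (q-1)/2,
# since x and -x have the same square and x != -x when x != 0.
for q in [3, 5, 7, 11, 13]:
    squares = {x * x % q for x in range(1, q)}
    assert len(squares) == (q - 1) // 2
```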

The part of the proof of Sylvester’s law of inertia that shows that every nondegenerate real symmetric form can be diagonalized with diagonal elements \pm1 can be generalized to an arbitrary field:

Lemma 1: Let K be a field, and let X be a set of elements of K^\times consisting of one element from each coset of (K^\times)^2. Then every nondegenerate quadratic form over K is equivalent to one represented by a diagonal matrix with elements of X on the diagonal.

Proof: As in the proof of Sylvester’s law of inertia, with \{1,-1\} replaced with the set X. \square

Since (\mathbb{F}_q^\times)^2 has exactly 2 cosets in \mathbb{F}_q^\times, this means there is a set X of size 2 (if q\equiv3\pmod4, then -1 is nonsquare, and you could use X=\{1,-1\} again) such that every n-dimensional nondegenerate quadratic form over \mathbb{F}_q is equivalent to one represented by a diagonal matrix with elements of X on the diagonal, and thus there are at most n+1 of them up to equivalence. However, we have not shown that these quadratic forms are inequivalent to each other (that part of the proof of Sylvester’s law of inertia used the fact that \mathbb{R} is ordered), and in fact, this is not the case. Instead, the following lower bound ends up being tight over finite fields of odd characteristic:

Lemma 2: Let K be a field. Any two invertible symmetric matrices whose determinants are in different cosets of (K^\times)^2 represent inequivalent quadratic forms (that is, if their determinants are a and b, and \frac{a}{b} is nonsquare, then the matrices do not represent equivalent quadratic forms). Thus, for n\geq1, there are at least \left|K^\times/(K^\times)^2\right| nondegenerate n-dimensional quadratic forms over K up to equivalence.

Proof: If invertible symmetric matrices A and B represent equivalent quadratic forms, then there is some change of basis matrix M such that B=M^TAM, and then \frac{\det(B)}{\det(A)}=\det(M)^2, which is a square. If n\geq1, then for every element of K^\times, there is a symmetric matrix that has that determinant, so the fact that symmetric matrices whose determinants are in different cosets of (K^\times)^2 represent inequivalent quadratic forms implies that there are inequivalent nondegenerate quadratic forms for each coset. \square
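A quick numerical illustration of the determinant argument, with arbitrary example matrices:

```python
# det(M^T A M) = det(M)^2 det(A), so a change of basis can only move the
# determinant around within its coset of squares.
q = 7
A = [[1, 2], [2, 5]]  # symmetric
M = [[3, 1], [4, 2]]  # invertible (det = 2 mod 7)

def det2(X):
    return (X[0][0] * X[1][1] - X[0][1] * X[1][0]) % q

# B = M^T A M, computed entrywise
B = [[sum(M[k][i] * A[k][l] * M[l][j] for k in range(2) for l in range(2)) % q
      for j in range(2)] for i in range(2)]
assert det2(B) == det2(M) ** 2 * det2(A) % q
```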

What’s going on here is that there’s a notion of the determinant of a bilinear form that doesn’t depend on a matrix representing it in some basis. For every n-dimensional vector space V, there’s an associated 1-dimensional vector space \wedge^n V, called the space of volume forms on V. Elements of \wedge^n V are called volume forms because, over \mathbb{R}, they represent amounts of volume (with orientation) of regions in the vector space. This is not exactly a scalar, but rather an element of a different 1-dimensional vector space, because, in a pure vector space (without any additional structure like an inner product), there’s no way to say how much volume is 1. I won’t give the definition here, but anyway, a bilinear form f on a vector space V induces a bilinear form \det(f) on \wedge^n V. A basis for V induces a basis for \wedge^n V, in which \det(f) is represented by a 1\times1 matrix (i.e. scalar) that is the determinant of the matrix representing f. An automorphism \sigma of V induces an automorphism of \wedge^n V, which is multiplication by \det(\sigma), and thus induces an automorphism of the space of bilinear forms on \wedge^n V given by division by \det(\sigma)^2. Thus if the ratio of the determinants of two nondegenerate bilinear forms is not a square, then the bilinear forms cannot be equivalent. This also applies to quadratic forms, using their correspondence to symmetric bilinear forms.

Theorem: Up to equivalence, there are exactly two n-dimensional nondegenerate quadratic forms over \mathbb{F}_q (for n\geq1).

Proof: Lemma 2 implies that invertible symmetric matrices of square determinant and invertible symmetric matrices of nonsquare determinant represent inequivalent quadratic forms. It remains to show that any two invertible symmetric matrices with either both square determinant or both nonsquare determinant represent equivalent quadratic forms.

Let \alpha,\beta\in \mathbb{F}_q^\times be square, and nonsquare, respectively. By lemma 1, every nondegenerate quadratic form over \mathbb{F}_q can be represented with a diagonal matrix whose diagonal entries are all \alpha or \beta. The determinant is square iff an even number of diagonal entries are \beta. It suffices to show that changing two \alpha’s into \beta’s or vice-versa doesn’t change the equivalence class of the quadratic form being represented. This reduces the problem to showing that the 2-dimensional quadratic forms represented by \alpha I_2 and \beta I_2 are equivalent. If we can find a,b\in\mathbb{F}_q such that a^2+b^2=\frac{\beta}{\alpha}, then the change of basis matrix \left(\begin{array}{cc} a & -b\\ b & a \end{array}\right) will do, because \left(\begin{array}{cc} a & b\\ -b & a \end{array}\right)\left(\begin{array}{cc} \alpha & 0\\ 0 & \alpha \end{array}\right)\left(\begin{array}{cc} a & -b\\ b & a \end{array}\right)=\left(\begin{array}{cc} \beta & 0\\ 0 & \beta \end{array}\right). This reduces the problem to showing that there are such a,b.

The squares in \mathbb{F}_q cannot be an additive subgroup of \mathbb{F}_q, because there are \frac{q+1}{2} of them, which does not divide q. Thus there are \tilde{a},\tilde{b}\in\mathbb{F}_q such that \tilde{a}^2+\tilde{b}^2 is nonsquare. Since \mathbb{F}_q^\times/(\mathbb{F}_q^\times)^2 has order 2, and \frac{\beta}{\alpha} is also nonsquare, it follows that \frac{\beta/\alpha}{\tilde{a}^2+\tilde{b}^2} is a square. Let c^2=\frac{\beta/\alpha}{\tilde{a}^2+\tilde{b}^2}, and let a=c\tilde{a} and b=c\tilde{b}. Then a^2+b^2=\frac{\beta}{\alpha}, as desired. \square
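In small cases, the a and b from the proof can simply be found by exhaustive search, and the 2\times2 change of basis checked directly (a sketch with illustrative values):

```python
# Find a, b with a^2 + b^2 = beta/alpha in F_q, then verify that
# M^T (alpha I) M = beta I for M = [[a, -b], [b, a]].
q = 7
alpha, beta = 1, 3  # 3 is a nonsquare mod 7 (the squares are 1, 2, 4)
target = beta * pow(alpha, -1, q) % q
a, b = next((a, b) for a in range(q) for b in range(q)
            if (a * a + b * b) % q == target)
M = [[a, -b % q], [b, a]]
# M^T (alpha I) M = alpha (a^2 + b^2) I = beta I:
prod = [[alpha * sum(M[k][i] * M[k][j] for k in range(2)) % q
         for j in range(2)] for i in range(2)]
assert prod == [[beta, 0], [0, beta]]
```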

Note, as a consequence, if we count the degenerate quadratic forms as well, there are 2n+1 n-dimensional quadratic forms over \mathbb{F}_q up to equivalence: 2 for every positive dimension that’s at most n, and 1 more for dimension 0 (this is the zero quadratic form; there are no zero-dimensional quadratic forms of nonsquare determinant).

Counting problems over finite fields

Given an n-dimensional quadratic form Q over \mathbb{F}_q, and some scalar \alpha\in\mathbb{F}_q, how many vectors \vec{v} are there such that Q(\vec{v})=\alpha? There are q^n vectors \vec{v}, and q possible values of Q(\vec{v}), so, on average, there will be q^{n-1} vectors with any given value for the quadratic form, but let’s look more precisely.

First, note that we don’t have to check every quadratic form individually, because if two quadratic forms are equivalent, then for any given scalar \alpha, each form will assign value \alpha to the same number of vectors. And it’s enough to check nondegenerate forms, because every quadratic form on a vector space V corresponds to a nondegenerate quadratic form on some quotient V/W, and the number of vectors it assigns a given value is just q^{\dim(W)} times the number of vectors the corresponding nondegenerate quadratic form assigns the same value. So we just have to check 2 quadratic forms of each dimension.

And we don’t have to check all q scalars individually because, if \frac{\alpha}{\beta} is a square, then multiplication by its square root is a bijection between \{\vec{v}\mid Q(\vec{v})=\alpha\} and \{\vec{v}\mid Q(\vec{v})=\beta\}. So there’s only 3 cases to check: \alpha=0, \alpha is a nonzero square, or \alpha is nonsquare. If the dimension is even, there’s even fewer cases, because a quadratic form is equivalent to any nonzero scalar multiple of it, so each nonzero scalar \alpha has the same number of vectors that Q sends to \alpha. Thus, for any 2n-dimensional quadratic form Q, there will be some number c such that, for each \alpha\neq0, \left|\{\vec{v}\mid Q(\vec{v})=\alpha\}\right|=q^{2n-1}+c, and \left|\{\vec{v}\mid Q(\vec{v})=0\}\right|=q^{2n-1}-(q-1)c.

And it turns out that in odd dimensions, for any nonzero quadratic form Q, the probability that Q(\vec{v})=0 for a randomly selected vector \vec{v} is exactly \frac{1}{q}. To see this, we can reduce to the 1-dimensional case (which is clear because there’s q vectors and only \vec{0} gets sent to 0 by Q): Given a 2n+1-dimensional quadratic form Q, let L be a 1-dimensional subspace on which Q is nonzero, and let H=\left\{\vec{v}\mid\left<\vec{v},\vec{\ell}\right>_Q=0\,\forall\vec{\ell}\in L\right\}. Then every vector is uniquely a sum \vec{\ell}+\vec{h} for some \vec{\ell}\in L and \vec{h}\in H, and Q(\vec{\ell}+\vec{h})=Q(\vec{\ell})+Q(\vec{h}). Since Q\restriction H is a quadratic form on the 2n-dimensional space H, there’s some number c such that for each \alpha\neq0, \left|\{\vec{h}\in H\mid Q(\vec{h})=\alpha\}\right|=q^{2n-1}+c, and \left|\{\vec{h}\in H\mid Q(\vec{h})=0\}\right|=q^{2n-1}-(q-1)c. For \vec{\ell}\in L and \vec{h}\in H, Q(\vec{\ell}+\vec{h})=0 iff Q(\vec{\ell})=Q(\vec{h})=0 (1\cdot(q^{2n-1}-(q-1)c) ways for this to happen) or Q(\vec{h})=-Q(\vec{\ell})\neq0 ((q-1)\cdot(q^{2n-1}+c) ways for this to happen), for a total of q^{2n-1}-(q-1)c + (q-1)(q^{2n-1}+c) = q^{2n} vectors \vec{v} such that Q(\vec{v})=0. Since there’s the same number of nonzero squares as nonsquares, it follows that there is some number d such that for each \alpha\neq0, \left|\{\vec{v}\mid Q(\vec{v})=\alpha\}\right|=q^{2n-1}+d if \alpha is square, and \left|\{\vec{v}\mid Q(\vec{v})=\alpha\}\right|=q^{2n-1}-d if \alpha is nonsquare.
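These structural facts are easy to spot-check by brute force on small diagonal forms (a sketch, with ad hoc helper names):

```python
# Tally |{v : Q(v) = alpha}| for diagonal forms over F_q, and check:
# in even dimension d, the count is the same for every alpha != 0;
# in odd dimension d, the count at alpha = 0 is exactly q^(d-1).
from itertools import product

def counts(diag, q):
    d = len(diag)
    tally = [0] * q
    for v in product(range(q), repeat=d):
        tally[sum(a * x * x for a, x in zip(diag, v)) % q] += 1
    return tally

q = 5
even = counts([1, 1], q)        # the 2-dimensional form x^2 + y^2
assert len(set(even[1:])) == 1  # same count for every alpha != 0
odd = counts([1, 1, 1], q)      # the 3-dimensional form x^2 + y^2 + z^2
assert odd[0] == q ** 2         # exactly q^(d-1) zeros
```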

We can compute these offsets c and d recursively. We can represent quadratic forms of square determinant with the identity matrix (i.e. x_1^2+\ldots+x_n^2). The number of vectors (x_1,\ldots,x_n) such that x_1^2+\ldots+x_n^2=\alpha is \sum_{\beta\in\mathbb{F}_q}\left|\left\{(x_{n-1},x_n)\mid x_{n-1}^2+x_n^2=\alpha-\beta\right\}\right|\cdot\left|\left\{(x_1,\ldots,x_{n-2})\mid x_1^2+\ldots+x_{n-2}^2=\beta\right\}\right|. (Why do I suggest decrementing n by 2 at a time? You could also try decrementing n by 1 each step, but I think 2 is easier because of the significance of the parity of the dimension). We can represent quadratic forms of nonsquare determinant with a matrix that is the identity except for the last diagonal entry being replaced with a nonsquare, and then compute how many vectors it assigns each scalar in terms of the square-determinant quadratic forms of one lower dimension.

It turns out that it matters whether -1 is a square in \mathbb{F}_q. To see this, consider the case of a 2-dimensional quadratic form of square determinant: x^2+y^2, and let’s see how many solutions there are to x^2+y^2=0. Assuming (x,y) is nonzero, x^2+y^2=0 iff \left(\frac{x}{y}\right)^2=-1. If -1 is not a square, then this is not possible, so (x,y)=(0,0) is the only solution. But if -1 is a square, then it has two square roots that \frac{x}{y} could be, and q-1 choices of overall scale, so there’s 2(q-1) nonzero solutions, and thus 2q-1 total solutions. -1 is a square iff q\equiv1\pmod4, so we can phrase this as that it matters whether q is 1 or -1 \mod 4.
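This dependence on q \bmod 4 is visible immediately in a brute-force count:

```python
# Solutions of x^2 + y^2 = 0 over F_q: 2q - 1 of them when q = 1 mod 4
# (where -1 is a square), but only the origin when q = 3 mod 4.
for q in [5, 13, 17]:  # q = 1 mod 4
    assert sum(1 for x in range(q) for y in range(q)
               if (x * x + y * y) % q == 0) == 2 * q - 1
for q in [3, 7, 11]:   # q = 3 mod 4
    assert sum(1 for x in range(q) for y in range(q)
               if (x * x + y * y) % q == 0) == 1
```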

If you state and solve the recurrence, you should find that:
\left|\left\{ \vec{v}\in\mathbb{F}_{q}^{2n}:Q\left(\vec{v}\right)=\alpha\right\} \right|=\begin{cases} q^{2n-1}+\epsilon\left(Q\right)\left(q-1\right)q^{n-1} & \alpha=0\\ q^{2n-1}-\epsilon\left(Q\right)q^{n-1} & \alpha\neq0 \end{cases}
\left|\left\{ \vec{v}\in\mathbb{F}_{q}^{2n+1}:Q\left(\vec{v}\right)=\alpha\right\} \right|=\begin{cases} q^{2n} & \alpha=0\\ q^{2n}+\epsilon\left(Q\right)q^{n} & \alpha\in\left(\mathbb{F}_{q}^{\times}\right)^{2}\\ q^{2n}-\epsilon\left(Q\right)q^{n} & \alpha\in\mathbb{F}_{q}^{\times}\setminus\left(\mathbb{F}_{q}^{\times}\right)^{2} \end{cases}
Where \epsilon\left(Q\right)\in\{\pm1\} is defined as follows:
If q\equiv1\pmod4, then \epsilon\left(Q\right)=1 if Q has square determinant, and -1 if Q has nonsquare determinant.
If q\equiv-1\pmod4, then \epsilon\left(Q\right) is defined as in the previous case if n is even, but is reversed if n is odd. Remember that the dimensions above were 2n and 2n+1, so, by n, I mean half the dimension (rounded down).
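Here’s a brute-force check of these closed forms in dimensions 2 and 3, taking q=5 (so q\equiv1\pmod4, and the square-determinant form x_1^2+\ldots+x_n^2 has \epsilon(Q)=1):

```python
# Compare the closed-form counts with exhaustive enumeration for q = 5.
from itertools import product

def count(diag, q, alpha):
    return sum(1 for v in product(range(q), repeat=len(diag))
               if sum(a * x * x for a, x in zip(diag, v)) % q == alpha)

q, n, eps = 5, 1, 1
# dimension 2n = 2:
assert count([1, 1], q, 0) == q**(2*n - 1) + eps * (q - 1) * q**(n - 1)
assert count([1, 1], q, 1) == q**(2*n - 1) - eps * q**(n - 1)
# dimension 2n+1 = 3:
assert count([1, 1, 1], q, 0) == q**(2*n)
assert count([1, 1, 1], q, 1) == q**(2*n) + eps * q**n  # 1 is a square
assert count([1, 1, 1], q, 2) == q**(2*n) - eps * q**n  # 2 is nonsquare mod 5
```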

It turns out that counting how many vectors a quadratic form assigns each value is useful for another counting problem: Now we can determine the order of the automorphism group of a quadratic form.

If Q is an n-dimensional quadratic form of square determinant, it can be represented by the identity matrix. A matrix with columns (\vec{b}_1,\ldots,\vec{b}_n) is an automorphism of Q iff Q(\vec{b}_i)=1 and \left<\vec{b}_i,\vec{b}_j\right>_Q=0 for i\neq j. Once \vec{b}_1,\ldots,\vec{b}_i have been chosen, the space that’s Q-orthogonal to all of them is (n-i)-dimensional, and Q restricts to a quadratic form on it of square determinant, so the number of options for \vec{b}_{i+1} is the number of vectors that an (n-i)-dimensional quadratic form of square determinant assigns value 1 (which is square). Thus the order of the automorphism group of Q is the product of these over all 0\leq i<n.

If Q is an n-dimensional quadratic form of nonsquare determinant, we can do a similar thing, representing Q with a matrix like the identity, but with the first diagonal entry replaced with a nonsquare. Then the number of options for \vec{b}_1 is the number of vectors that a quadratic form of nonsquare determinant assigns some particular nonsquare value, and the rest of the product is unchanged.

If you evaluate these products, you should get:
\dim=2n: \left|\text{Aut}\left(Q\right)\right|=2q^{n(n-1)}\left(q^n-\epsilon(Q)\right)\prod_{k=1}^{n-1}(q^{2k}-1)
\dim=2n+1: \left|\text{Aut}\left(Q\right)\right|=2q^{n^2}\prod_{k=1}^{n}(q^{2k}-1)
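In dimension 2, these orders are small enough to check by exhaustive search: the formula above with n=1 reduces to \left|\text{Aut}(Q)\right|=2(q-\epsilon(Q)). A sketch (all function names illustrative):

```python
# Brute-force |Aut(Q)| = |{M : M^T A M = A}| over F_q. Any such M is
# automatically invertible when A is, since det(M)^2 = 1.
from itertools import product

def aut_order(A, q):
    n = len(A)
    total = 0
    for entries in product(range(q), repeat=n * n):
        M = [list(entries[i * n:(i + 1) * n]) for i in range(n)]
        if all(sum(M[k][i] * A[k][l] * M[l][j]
                   for k in range(n) for l in range(n)) % q == A[i][j] % q
               for i in range(n) for j in range(n)):
            total += 1
    return total

# q = 3 is -1 mod 4 and n = 1 is odd, so epsilon is reversed: x^2 + y^2
# has square determinant, hence eps = -1 and |Aut| = 2(q + 1) = 8.
assert aut_order([[1, 0], [0, 1]], 3) == 8
```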

You may notice that, in odd dimensions, the size of the automorphism group of a quadratic form does not depend on its determinant. In fact, the groups are the same. This is because multiplication by a nonsquare scalar sends a quadratic form of square determinant to a quadratic form of nonsquare determinant, and does not change its automorphism group.

As a sanity check, \text{Aut}(Q) is a subgroup of the general linear group, so its order should divide the order of the general linear group. \left|GL_n(\mathbb{F}_q)\right|=q^{n \choose 2}\prod_{k=1}^{n}(q^k-1) (which can be shown with a similar recursive computation). And indeed:

\dim=2n: \frac{\left|GL_{2n}(\mathbb{F}_q)\right|}{\left|\text{Aut}\left(Q\right)\right|}=\frac{1}{2}q^{n^2}\left(q^n+\epsilon(Q)\right)\prod_{k=0}^{n-1}(q^{2k+1}-1)
\dim=2n+1: \frac{\left|GL_{2n+1}(\mathbb{F}_q)\right|}{\left|\text{Aut}\left(Q\right)\right|}=\frac{1}{2}q^{n(n+1)}\prod_{k=0}^{n}(q^{2k+1}-1)

By the orbit-stabilizer theorem, these count the number of quadratic forms of each equivalence class.

The sum of the sizes of the two equivalence classes is the total number of (individual, not up to equivalence) nondegenerate quadratic forms on a vector space, or equivalently, the number of invertible symmetric matrices. This sum is
\dim=2n: q^{n(n+1)}\prod_{k=0}^{n-1}(q^{2k+1}-1)
\dim=2n+1: q^{n(n+1)}\prod_{k=0}^{n}(q^{2k+1}-1)

The leading terms of these are q^{2n+1 \choose 2} and q^{2n+2 \choose 2}, respectively, which are the numbers of symmetric 2n\times2n and (2n+1)\times(2n+1) matrices, respectively. And the second terms of each are negative, which makes sense because we’re excluding singular matrices. You could compute the sizes of all equivalence classes of quadratic forms on an n-dimensional vector space, not just the nondegenerate ones, and add them up, and you should get exactly q^{n+1 \choose 2}.
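These totals can also be checked directly for small q by enumerating symmetric matrices (a brute-force sketch):

```python
# Count invertible symmetric matrices over F_q and compare with the
# product formulas above, for q = 3 and dimensions 2 and 3 (n = 1).
from itertools import product

def det3(A, q):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0])) % q

q = 3
# dim 2: q^{n(n+1)} (q - 1) = q^2 (q - 1)
count2 = sum(1 for a, b, c in product(range(q), repeat=3)
             if (a * c - b * b) % q != 0)
assert count2 == q**2 * (q - 1)
# dim 3: q^{n(n+1)} (q - 1)(q^3 - 1) = q^2 (q - 1)(q^3 - 1)
count3 = 0
for a, b, c, d, e, f in product(range(q), repeat=6):
    if det3([[a, b, c], [b, d, e], [c, e, f]], q) != 0:
        count3 += 1
assert count3 == q**2 * (q - 1) * (q**3 - 1)
```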

Fields of well-defined Euler characteristic

\left|\mathbb{R}\right| is, of course, infinite, but if you had to pick a finite number that acts most like the size of \mathbb{R}, it turns out that -1 is a decent answer in some ways. After all, \mathbb{R}_{>0} and \mathbb{R}_{<0} each are homeomorphic to \mathbb{R}, so they should probably all have the same size, and \mathbb{R}=\mathbb{R}_{<0}\sqcup\{0\}\sqcup\mathbb{R}_{>0}; \{0\} has size 1, and size should probably be additive, which would imply that, if we’re denoting the “size” of \mathbb{R} by \chi(\mathbb{R}), \chi(\mathbb{R})=2 \chi(\mathbb{R})+1, which implies \chi(\mathbb{R})=-1. Then, of course, \mathbb{R}^n should have “size” \chi(\mathbb{R}^n)=(-1)^n.

This might seem silly, but it’s deeper than it sounds. This notion of “size” is called Euler characteristic (this is actually different from what algebraic topologists call Euler characteristic, though it’s the same for compact manifolds). A topological space X has an Euler characteristic, denoted \chi(X), if it can be partitioned into finitely many cells, each of which is homeomorphic to \mathbb{R}^n for some n. If X can be partitioned into cells X_1,\ldots,X_k, and X_i is homeomorphic to \mathbb{R}^{n_i}, then \chi(X_i)=(-1)^{n_i} and \chi(X)=\sum_i \chi(X_i)=\sum_i (-1)^{n_i}. This is well-defined, in the sense that multiple different ways of partitioning a space into cells that are homeomorphic to \mathbb{R}^n will give you the same Euler characteristic. Roughly speaking, this is because partitions of a space can be refined by splitting an n-dimensional cell into two n-dimensional cells and the n-1-dimensional boundary between them, this does not change the Euler characteristic, and any two partitions into cells have a common refinement.

Euler characteristic has a lot in common with sizes of finite sets, and is, in some ways, a natural generalization of it. For starters, the Euler characteristic of any finite set is its size. Some familiar properties of sizes of sets generalize to Euler characteristics, such as \chi(X\sqcup Y)=\chi(X)+\chi(Y), and \chi(X\times Y)=\chi(X)\chi(Y).

In fact, all of the counting problems we just did over finite fields of odd size apply just as well to fields with a well-defined odd Euler characteristic. There are only two infinite fields that have an Euler characteristic: \mathbb{R}, with \chi(\mathbb{R})=-1, and \mathbb{C}, with \chi(\mathbb{C})=1.

Recall that in the counting problems in the previous section, we needed to know whether or not \mathbb{F}_q contains a square root of -1, and that it does iff q\equiv1\pmod4. This extends to fields with an Euler characteristic just fine, since \chi(\mathbb{C})\equiv1\pmod4 and \mathbb{C} contains a square root of -1, while \chi(\mathbb{R})\not\equiv1\pmod4 and \mathbb{R} does not. In fact, this generalizes a bit. \mathbb{F}_q contains the primitive nth roots of unity iff q\equiv1\pmod n. \chi(\mathbb{C})\equiv1\pmod n for every n, and \mathbb{C} contains all roots of unity. \chi(\mathbb{R})\equiv1\pmod n only for n=1 and n=2, and the only roots of unity in \mathbb{R} are 1 and the primitive square root of unity, -1.
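Both of these facts about \mathbb{F}_q are easy to confirm by brute force for small primes; a quick check (the helper functions are ad hoc):

```python
# Brute-force checks over small odd primes p:
#   (1) -1 is a square mod p  iff  p = 1 (mod 4)
#   (2) F_p has a primitive nth root of unity (an element of
#       multiplicative order n)  iff  p = 1 (mod n)
def is_square_mod(a, p):
    return any(x * x % p == a % p for x in range(p))

def mult_order(x, p):
    # multiplicative order of x in F_p^x (x must be nonzero mod p)
    k, y = 1, x
    while y != 1:
        y = y * x % p
        k += 1
    return k

def has_primitive_nth_root(n, p):
    return any(mult_order(x, p) == n for x in range(1, p))

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29]:
    assert is_square_mod(-1, p) == (p % 4 == 1)
    for n in range(1, 13):
        # p = 1 (mod n) is the same as n dividing p - 1
        assert has_primitive_nth_root(n, p) == ((p - 1) % n == 0)
```
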

Anyway, let’s start with positive-definite real quadratic forms (i.e. those taking only positive values on nonzero vectors). If Q is an n-dimensional positive-definite real quadratic form, then
\left\{\vec{v}\mid Q(\vec{v})=\alpha\right\}\cong \begin{cases} \emptyset & \alpha<0\\ \left\{ \vec{0}\right\} & \alpha=0\\ \mathbb{S}^{n-1} & \alpha>0 \end{cases}
For negative-definite quadratic forms, of course, the positive and negative cases switch.

Now let’s try the indefinite quadratic forms. Recall that for every nondegenerate quadratic form Q on a real vector space V, there are Q-orthogonal subspaces P,N\subseteq V such that Q\restriction P is positive-definite, Q\restriction N is negative-definite, and P\oplus N=V. Let n_+=\dim P and n_-=\dim N. For \alpha>0, in order for \vec{x}\in P,\vec{y}\in N to satisfy Q(\vec{x}+\vec{y})=\alpha, \vec{y} can be anything, and then \vec{x} must lie on the (n_+-1)-dimensional sphere Q(\vec{x})=\alpha-Q(\vec{y}). If \alpha<0, then P and N switch roles. In order for \vec{x}\in P,\vec{y}\in N to satisfy Q(\vec{x}+\vec{y})=0, either \vec{x}=\vec{y}=\vec{0}, or there’s some \beta>0 such that \vec{x} lies on the (n_+-1)-dimensional sphere Q(\vec{x})=\beta and \vec{y} lies on the (n_--1)-dimensional sphere Q(\vec{y})=-\beta. Thus
\left\{\vec{v}\mid Q(\vec{v})=\alpha\right\}\cong \begin{cases} \mathbb{S}^{n_{-}-1}\times\mathbb{R}^{n_{+}} & \alpha<0\\ \left(\mathbb{S}^{n_{+}-1}\times\mathbb{S}^{n_{-}-1}\times\mathbb{R}\right)\sqcup\left\{ \vec{0}\right\} & \alpha=0\\ \mathbb{S}^{n_{+}-1}\times\mathbb{R}^{n_{-}} & \alpha>0 \end{cases}
With \mathbb{S}^{-1}=\emptyset, this also works for definite quadratic forms.

If you remove one point from \mathbb{S}^n, you get \mathbb{R}^n, so \chi(\mathbb{S}^n)=\chi(\mathbb{R}^n)+1=(-1)^n+1; that is, 2 if n is even, and 0 if n is odd. Thus the Euler characteristics of the above level sets of Q depend only on the parities of n_+ and n_-. The parity of n_- determines the sign of \det(Q) (namely (-1)^{n_-}), and n_++n_- is the dimension. So the Euler characteristics of the level sets of Q depend only on the parity of its dimension and the sign of its determinant, and
\det(Q)>0,\,\dim(V)\text{ even}: \chi\left(\left\{\vec{v}\in V\mid Q(\vec{v})=\alpha\right\}\right)=\begin{cases} 1 & \alpha=0\\ 0 & \alpha\neq0 \end{cases}
\det(Q)<0,\,\dim(V)\text{ even}: \chi\left(\left\{\vec{v}\in V\mid Q(\vec{v})=\alpha\right\}\right)=\begin{cases} -3 & \alpha=0\\ -2 & \alpha\neq0 \end{cases}
\det(Q)>0,\,\dim(V)\text{ odd}: \chi\left(\left\{\vec{v}\in V\mid Q(\vec{v})=\alpha\right\}\right)=\begin{cases} 0 & \alpha<0\\ 1 & \alpha=0\\ 2 & \alpha>0 \end{cases}
\det(Q)<0,\,\dim(V)\text{ odd}: \chi\left(\left\{\vec{v}\in V\mid Q(\vec{v})=\alpha\right\}\right)=\begin{cases} 2 & \alpha<0\\ 1 & \alpha=0\\ 0 & \alpha>0 \end{cases}
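These four cases can be recomputed mechanically from the homeomorphisms above together with \chi(\mathbb{S}^k)=1+(-1)^k; a quick sketch (with the convention \chi(\mathbb{S}^{-1})=\chi(\emptyset)=0):

```python
def chi_sphere(k):
    # chi(S^k) = 1 + (-1)^k; S^{-1} is the empty set, with chi = 0
    return 0 if k < 0 else 1 + (-1) ** k

def chi_level_set(n_plus, n_minus, sign_alpha):
    # Euler characteristic of {Q = alpha} for a real quadratic form
    # with n_plus positive and n_minus negative squares, using
    #   alpha > 0 : S^{n_plus - 1} x R^{n_minus}
    #   alpha < 0 : S^{n_minus - 1} x R^{n_plus}
    #   alpha = 0 : (S^{n_plus - 1} x S^{n_minus - 1} x R) + {0}
    if sign_alpha > 0:
        return chi_sphere(n_plus - 1) * (-1) ** n_minus
    if sign_alpha < 0:
        return chi_sphere(n_minus - 1) * (-1) ** n_plus
    return chi_sphere(n_plus - 1) * chi_sphere(n_minus - 1) * (-1) + 1

# The result depends only on the parity of the dimension and on the
# sign of the determinant, (-1)^n_minus, matching the four cases:
for n_plus in range(0, 5):
    for n_minus in range(0, 5):
        dim, sign_det = n_plus + n_minus, (-1) ** n_minus
        chis = [chi_level_set(n_plus, n_minus, s) for s in (-1, 0, 1)]
        if dim % 2 == 0:
            expected = [0, 1, 0] if sign_det > 0 else [-2, -3, -2]
        else:
            expected = [0, 1, 2] if sign_det > 0 else [2, 1, 0]
        assert chis == expected
```
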
In comparison, if you take our earlier formula for the sizes of these level sets over \mathbb{F}_q and plug in q=-1, you get
\chi\left(\left\{\vec{v}\in\mathbb{R}^{2n}\mid Q(\vec{v})=\alpha\right\}\right)=\begin{cases} -1+2\epsilon(Q)(-1)^n & \alpha=0\\ -1+\epsilon(Q)(-1)^n & \alpha\neq0 \end{cases}
\chi\left(\left\{\vec{v}\in\mathbb{R}^{2n+1}\mid Q(\vec{v})=\alpha\right\}\right)=\begin{cases} 1-\epsilon(Q)(-1)^n & \alpha<0\\ 1 & \alpha=0\\ 1+\epsilon(Q)(-1)^n & \alpha>0 \end{cases}
\epsilon(Q)(-1)^n is the sign of the determinant, so these two calculations agree.
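The agreement can also be checked numerically. The sketch below assumes the standard point counts for nondegenerate quadratic forms over \mathbb{F}_q, with \epsilon(Q) taken to be the quadratic character of (-1)^{\lfloor\dim/2\rfloor}\det Q (if the earlier section’s convention is stated differently, it agrees with this one up to relabeling); it verifies the counts by brute force for small primes and then substitutes q=-1:

```python
from itertools import product

def counts_brute(diag, p):
    # Number of solutions of c_1 x_1^2 + ... + c_k x_k^2 = alpha over
    # F_p, for each alpha in F_p, by direct enumeration.
    counts = [0] * p
    for xs in product(range(p), repeat=len(diag)):
        counts[sum(c * x * x for c, x in zip(diag, xs)) % p] += 1
    return counts

def eta(a, p):
    # quadratic character of F_p (with eta(0) = 0)
    a %= p
    if a == 0:
        return 0
    return 1 if any(x * x % p == a for x in range(1, p)) else -1

def counts_formula(diag, p):
    # Assumed point counts, with eps = eta((-1)^floor(d/2) * det):
    #   dim 2m:   #{Q=0} = q^{2m-1} + eps (q^m - q^{m-1})
    #             #{Q=a} = q^{2m-1} - eps q^{m-1}      (a != 0)
    #   dim 2m+1: #{Q=a} = q^{2m} + eps eta(a) q^m
    d = len(diag)
    det = 1
    for c in diag:
        det = det * c % p
    m = d // 2
    eps = eta((-1) ** m * det, p)
    out = []
    for alpha in range(p):
        if d % 2 == 0:
            if alpha == 0:
                out.append(p ** (d - 1) + eps * (p ** m - p ** (m - 1)))
            else:
                out.append(p ** (d - 1) - eps * p ** (m - 1))
        else:
            out.append(p ** (d - 1) + eps * eta(alpha, p) * p ** m)
    return out

for p in (3, 5):
    for diag in [(1,), (2,), (1, 1), (1, 2), (1, 1, 1), (1, 1, 2), (1, 2, 1, 2)]:
        assert counts_brute(diag, p) == counts_formula(diag, p)

# Substituting q = -1 into the same expressions reproduces the real
# Euler characteristics, e.g. dim 2, eps = +1 (real det < 0):
q = -1
assert q ** 1 + (q ** 1 - q ** 0) == -3   # alpha = 0
assert q ** 1 - q ** 0 == -2              # alpha != 0
```
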

Now let’s look at complex quadratic forms. Since every complex number is a square and \chi(\mathbb{C})=1\equiv1\pmod4, \epsilon(Q)=1 for every nondegenerate complex quadratic form Q. Thus our formulas for sizes of level sets of quadratic forms over finite fields predict
\chi\left(\left\{\vec{v}\in\mathbb{C}^{2n}\mid Q(\vec{v})=\alpha\right\}\right)=\begin{cases} 1 & \alpha=0\\ 0 & \alpha\neq0 \end{cases}
\chi\left(\left\{\vec{v}\in\mathbb{C}^{2n+1}\mid Q(\vec{v})=\alpha\right\}\right)=\begin{cases} 1 & \alpha=0\\ 2 & \alpha\neq0 \end{cases}
In the odd-dimensional case, I’ve left out the nonsquare case, and relabeled the case where \alpha is a nonzero square as \alpha\neq0, because all complex numbers are squares.

It’s easy to verify that the zero-sets of complex quadratic forms have Euler characteristic 1. This is because, besides the origin, all of the other solutions to Q(\vec{v})=0 can be arbitrarily rescaled to get more solutions, and \chi(\mathbb{C}^\times)=0. That is, let X be the space of 1-dimensional subspaces on which Q is zero. Then \left\{\vec{v}\neq\vec{0}\mid Q(\vec{v})=0\right\}\cong X\times\mathbb{C}^\times, so \chi\left(\left\{\vec{v}\mid Q(\vec{v})=0\right\}\right)=\chi(\{\vec{0}\})+\chi(\mathbb{C}^\times)\chi(X)=1+0\cdot\chi(X)=1.

The Euler characteristics of the other level sets of complex quadratic forms can be checked by induction. A 0-dimensional quadratic form takes no nonzero values, so \chi(\{\vec{v}\in\mathbb{C}^0\mid Q(\vec{v})=\alpha\})=\chi(\emptyset)=0. An n+1-dimensional quadratic form Q can be diagonalized as x_1^2+\ldots+x_{n+1}^2. For \alpha\neq0, the solutions to x_1^2+\ldots+x_{n+1}^2=\alpha can be partitioned into three pieces:
1. x_1^2+\ldots+x_n^2=0,\,x_{n+1}^2=\alpha. This has Euler characteristic \chi(\{x_1^2+\ldots+x_n^2=0\})\chi(\{x_{n+1}^2=\alpha\})=1\cdot2=2.
2. x_1^2+\ldots+x_n^2=\alpha,\,x_{n+1}^2=0. This has Euler characteristic \chi(\{x_1^2+\ldots+x_n^2=\alpha\})\chi(\{x_{n+1}=0\})=\chi(\{x_1^2+\ldots+x_n^2=\alpha\}).
3. For some \beta\notin\{0,\alpha\}, x_1^2+\ldots+x_n^2=\beta,\,x_{n+1}^2=\alpha-\beta. This has Euler characteristic
\chi(\mathbb{C}\setminus\{0,\alpha\})\chi(\{x_1^2+\ldots+x_n^2=1\})\chi(\{x_{n+1}^2=1\})=(-1)\cdot\chi(\{x_1^2+\ldots+x_n^2=1\})\cdot2=-2\cdot\chi(\{x_1^2+\ldots+x_n^2=1\}) (I’ve replaced \beta and \alpha-\beta with 1 where they appear, since both are nonzero, and all the nonzero level sets are homeomorphic).
Thus, summing these up, and using once more that the nonzero level sets in n variables are homeomorphic (so the \alpha and 1 level sets have the same Euler characteristic), we get \chi(\{x_1^2+\ldots+x_{n+1}^2=\alpha\})=2+\chi(\{x_1^2+\ldots+x_n^2=\alpha\})-2\chi(\{x_1^2+\ldots+x_n^2=\alpha\})=2-\chi(\{x_1^2+\ldots+x_n^2=\alpha\}), explaining the alternation between 0 and 2.
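In code, the recursion \chi_{n+1}=2-\chi_n (which follows from summing the three pieces, since the nonzero level sets in n variables all have equal Euler characteristic) produces exactly this alternation:

```python
# chi_n = Euler characteristic of {x_1^2 + ... + x_n^2 = alpha} in
# C^n for alpha != 0, via chi_{n+1} = 2 - chi_n, starting from
# chi_0 = chi(empty set) = 0.
chi, values = 0, []
for n in range(1, 9):
    chi = 2 - chi
    values.append(chi)
# 2 in odd dimensions, 0 in even ones, as substituting q = 1 into
# the finite-field formulas predicts:
assert values == [2, 0, 2, 0, 2, 0, 2, 0]
```
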

There are some apparent disanalogies between finite fields and infinite fields with Euler characteristics. For instance, it is not true that exactly half of nonzero complex numbers are squares, at least in the sense that \left|\mathbb{C}^\times/\left(\mathbb{C}^\times\right)^2\right|=1\neq2. However, it is still true (vacuously) that the ratio between any two nonsquare complex numbers is square. And the spaces of nonzero square and nonsquare complex numbers have the same Euler characteristic, since \chi(\mathbb{C}^\times)=0=\chi(\emptyset). This had to be the case, because sufficiently non-pathological bijections preserve Euler characteristic, so sufficiently non-pathological two-to-one maps cut Euler characteristic in half, since the domain can be partitioned into two pieces in non-pathological bijection with the range.

And, while finite fields of odd characteristic have two nondegenerate n-dimensional quadratic forms up to equivalence, \mathbb{C} has just one, and \mathbb{R} has n+1 of them. \mathbb{C}’s missing second nondegenerate quadratic form of each dimension can be addressed similarly to its missing nonsquare elements. The number of nondegenerate n-dimensional quadratic forms over \mathbb{F}_q in each equivalence class is a multiple of q-1, so, with q=1, this correctly predicts that the Euler characteristic of each equivalence class of n-dimensional complex quadratic forms is 0, and one of the equivalence classes being empty is consistent with that.
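The multiple-of-(q-1) claim can be spot-checked by brute force, identifying n-dimensional quadratic forms over \mathbb{F}_p (p odd) with symmetric n\times n matrices and sorting the nondegenerate ones into the two equivalence classes according to whether the determinant is a square. A sketch for n\geq2 (in dimension 1 the two classes have (q-1)/2 elements each, which also vanishes at q=1):

```python
from itertools import product, permutations

def det_mod(m, p):
    # determinant by the Leibniz formula (fine for small n)
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        term = (-1) ** inversions
        for i in range(n):
            term *= m[i][perm[i]]
        total += term
    return total % p

def is_square(a, p):
    return any(x * x % p == a % p for x in range(1, p))

def class_sizes(n, p):
    # Enumerate symmetric matrices by their upper-triangular entries;
    # a nondegenerate form's equivalence class is determined by its
    # determinant modulo squares.
    idx = [(i, j) for i in range(n) for j in range(i, n)]
    sizes = {1: 0, -1: 0}
    for vals in product(range(p), repeat=len(idx)):
        m = [[0] * n for _ in range(n)]
        for (i, j), v in zip(idx, vals):
            m[i][j] = m[j][i] = v
        d = det_mod(m, p)
        if d != 0:
            sizes[1 if is_square(d, p) else -1] += 1
    return sizes

for p in (3, 5):
    for n in (2, 3):
        for size in class_sizes(n, p).values():
            assert size > 0 and size % (p - 1) == 0
```
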

The oversupply of equivalence classes of real quadratic forms is a little subtler. Our analysis of nondegenerate quadratic forms over finite fields of odd characteristic predicts that any two nondegenerate real quadratic forms whose determinants have the same sign should be equivalent, and this is not the case.

To address this, let’s look at the heap of isomorphisms between two nondegenerate quadratic forms whose determinants have the same sign. A heap is like a group that has forgotten its identity element. The heap of isomorphisms between two isomorphic objects is the underlying heap of the automorphism group of one of them. In particular, if the automorphism group of some object is a Lie group, then the heap of isomorphisms between it and another object it is isomorphic to is homeomorphic to the automorphism group, and thus they have the same Euler characteristic. In the finite case, this is just saying that the heap of isomorphisms between two isomorphic objects has the same size as the automorphism group of one of them. So when we computed the sizes of automorphism groups of nondegenerate quadratic forms over \mathbb{F}_q, we were also computing sizes of isomorphism heaps between isomorphic pairs of nondegenerate quadratic forms over \mathbb{F}_q. Plugging in q=-1, this should also tell us the Euler characteristic of the heap of isomorphisms between two real quadratic forms whose determinants have the same sign.

Notice that when \dim\geq3, the order of the automorphism group of a nondegenerate quadratic form over \mathbb{F}_q is a multiple of q^2-1, and when \dim=2 and \epsilon=-1, the order 2(q+1) is a multiple of q+1. So, except when \dim=1 or when \dim=2 and \epsilon=+1, plugging in q=-1 predicts that the automorphism group of a nondegenerate real quadratic form has Euler characteristic 0, and hence so does the heap of isomorphisms between two nondegenerate real quadratic forms of the same dimension whose determinants have the same sign. This is consistent with said heap of isomorphisms being empty!
The exceptions, the quadratic forms whose automorphism groups do not have Euler characteristic 0, are when \dim=2 and \det<0, or when \dim=1. These are exactly the cases when knowing the dimension of a nondegenerate real quadratic form and the sign of its determinant actually does tell you what the quadratic form is up to equivalence.
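As a last sanity check, these automorphism groups can be computed by brute force in low dimensions over a small field. The orders 2, 2(q-1), 2(q+1), and 2q(q^2-1) are exactly what the story above needs: only the first two fail to vanish at q=-1, matching \chi(O(1))=2 and \chi(O(1,1))=-4 over the reals. A sketch:

```python
from itertools import product

def aut_group_order(gram, p):
    # Count n x n matrices M over F_p with M^T A M = A, where A is
    # the Gram matrix of the form; these are the automorphisms of Q.
    n = len(gram)
    count = 0
    for entries in product(range(p), repeat=n * n):
        m = [entries[i * n:(i + 1) * n] for i in range(n)]
        if all(sum(m[k][i] * gram[k][l] * m[l][j]
                   for k in range(n) for l in range(n)) % p
               == gram[i][j] % p
               for i in range(n) for j in range(n)):
            count += 1
    return count

p = 3
assert aut_group_order([[1]], p) == 2                       # dim 1
# dim 2, split (hyperbolic xy): order 2(q-1)
assert aut_group_order([[0, 1], [1, 0]], p) == 2 * (p - 1)
# dim 2, anisotropic over F_3 (x^2 + y^2): order 2(q+1)
assert aut_group_order([[1, 0], [0, 1]], p) == 2 * (p + 1)
# dim 3: order 2q(q^2 - 1), a multiple of q^2 - 1
order3 = aut_group_order([[1, 0, 0], [0, 1, 0], [0, 0, 1]], p)
assert order3 == 2 * p * (p * p - 1)
assert order3 % (p * p - 1) == 0

# At q = -1: 2(q+1) and 2q(q^2-1) vanish, while 2 and 2(q-1) = -4
# don't, matching chi(O(1)) = 2 and chi(O(1,1)) = -4.
q = -1
assert (2 * (q + 1), 2 * (q - 1), 2 * q * (q * q - 1)) == (0, -4, 0)
```
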