Note: This write-up consists mainly of open questions rather than results, but may contain errors anyway.

I'd like to describe a logic for talking about probabilities of logical sentences. Fix some first-order language $L$. This logic deals with pairs $(\varphi, p)$, which I'm calling assertions, where $\varphi$ is a formula and $p \in [0,1]$. Such a pair is to be interpreted as a claim that $\varphi$ has probability at least $p$.

A theory $T$ consists of a set of assertions. A model of a theory $T$ consists of a probability space whose points are $L$-structures, such that for every assertion $(\varphi, p) \in T$, $P_*(\varphi) \geq p$, where $P_*$ is inner probability (applied to the set of structures satisfying $\varphi$). I'll write $T \vdash (\varphi, p)$ for "$(\varphi, p)$ can be proved from $T$", and $T \models (\varphi, p)$ for "all models of $T$ are also models of $(\varphi, p)$".

The rules of inference are all rules $A \vdash (\varphi, p)$ where $A$ is a finite set of assertions, and $(\varphi, p)$ is an assertion such that $P_*(\varphi) \geq p$ in all models of $A$. Can we make an explicit finite list of inference rules that generate this logic? If not, is the set of inference rules at least recursively enumerable? (For recursive enumerability to make sense here, we need to restrict attention to probabilities in some countable dense subset of $[0,1]$ that has a natural explicit bijection with $\mathbb{N}$, such as $\mathbb{Q} \cap [0,1]$.) I'm going to assume later that the set of inference rules is recursively enumerable; if it isn't, everything should still work if we use some recursively enumerable subset of the inference rules that includes all of the ones that I use.

Note that the compactness theorem fails for this logic; for example, $\{(\varphi, p) : p < 1\} \models (\varphi, 1)$, but no finite subset of $\{(\varphi, p) : p < 1\}$ implies $(\varphi, 1)$, and hence $\{(\varphi, p) : p < 1\} \nvdash (\varphi, 1)$.

Any classical first-order theory $T_0$ can be converted into a theory in this logic as $\{(\varphi, 1) : \varphi \in T_0\}$.

Let be a consistent, recursively axiomatizable extension of Peano Arithmetic. By the usual sort of construction, there is a binary predicate such that for any sentence and , where is a coding of sentences with natural numbers. We have a probabilistic analog of Löb's theorem: if , then . Peano arithmetic can prove this theorem, in the sense that .

Proof: Assume . By the diagonal lemma, there is a sentence such that . If , then and , so . This shows that . By the assumption that , this implies that . By a probabilistic version of the deduction theorem, . That is, . Going back around through all that again, we get .

If we change the assumption to be that for some , then the above proof does not go through (if , then it does, because ). Is there a consistent theory extending Peano Arithmetic that proves a soundness schema about itself, or can this be used to derive a contradiction some other way? If there is no such consistent theory, then can the soundness schema be modified so that it is consistent, while still being nontrivial? If there is such a consistent theory with a soundness schema, can the theory also be sound? That is actually several questions, because there are multiple things I could mean by "sound". The possible syntactic things "sound" could mean, in decreasing order of strictness, are: 1) The theory does not assert a positive probability to any sentence that is false in $\mathbb{N}$. 2) There is an upper bound below $1$ for all probabilities asserted of sentences that are false in $\mathbb{N}$. 3) The theory does not assert probability $1$ to any sentence that is false in $\mathbb{N}$.

There are also semantic versions of the above questions, which are at least as strict as their syntactic analogs, but probably aren't equivalent to them, since the compactness theorem does not hold. The semantic version of asking if the soundness schema is consistent is asking if it has a model. The first two soundness notions also have semantic analogs. 1') The point mass at $\mathbb{N}$ is a model of the theory. 2') There is a model of the theory that assigns positive probability to $\{\mathbb{N}\}$. I don't have a semantic version of 3, but metaphorically speaking, a semantic version of 3 should mean that there is a model that assigns nonzero probability density at $\mathbb{N}$, even though it might not have a point mass at $\mathbb{N}$.

This is somewhat similar to Definability of Truth in Probabilistic Logic. But in place of adding a probability predicate to the language, I'm only changing the metalanguage to refer to probabilities, and using this to express statements about probability in the language through conventional metamathematics. An advantage of this approach is that it's constructive. Theories with the properties described by the Christiano et al paper are unsound, so if some reasonably strong notion of soundness applies to an extension of Peano Arithmetic with the soundness schema I described, that would be another advantage of my approach.

A type of situation that this might be useful for is that when an agent is reasoning about what actions it will take in the future, it should be able to trust its future self's reasoning. An agent with the soundness schema can assume that its future self's beliefs are accurate, up to arbitrarily small loss in precision. A related type of situation is if an agent reaches some conclusion, and then writes it to external storage instead of its own memory, and later reads the claim it had written to external storage. With the soundness schema, if the agent has reason to believe that the external storage hasn't been tampered with, it can reason that since its past self had derived the claim, the claim is to be trusted arbitrarily close to as much as it would have been if the agent had remembered it internally.

For a consistent theory $T$, say that a sentence $\varphi$ is $T$-measurable if there is some $p$ such that $T \vdash (\varphi, q)$ for every $q < p$ and $T \nvdash (\varphi, q)$ for every $q > p$. So $T$-measurability essentially means that $T$ pins down the probability of the sentence. If $\varphi$ is not $T$-measurable, then you could say that $T$ has Knightian uncertainty about $\varphi$. Say that $T$ is complete if every sentence is $T$-measurable. Essentially, complete theories assign a probability to every sentence, while incomplete theories have Knightian uncertainty.

The first incompleteness theorem (that no recursively axiomatizable extension of PA is consistent and complete) holds in this setting. In fact, for every consistent recursively axiomatizable extension of PA, there must be sentences that are given neither a nontrivial upper bound nor a nontrivial lower bound on their probability. Otherwise, we would be able to recursively separate the theorems of PA from the negations of theorems of PA, by picking some recursive enumeration of assertions of the theory, and sorting sentences by whether they are first given a nontrivial lower bound or first given a nontrivial upper bound; theorems of PA will only be given a nontrivial lower bound, and their negations will only be given a nontrivial upper bound. [Thanks to Sam Eisenstat for pointing this out; I had somehow managed not to notice this on my own.]
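A minimal sketch of that separation procedure, under the assumption that we have an enumeration of the assertions provable in the theory (the enumerator `provable_assertions` and the string encoding of sentences, with `"~"` for negation, are hypothetical stand-ins, not anything from the post):

```python
def separate(sentence, provable_assertions):
    """Classify a sentence by whether the theory first gives it a nontrivial
    lower bound, or a nontrivial upper bound (via an assertion about its negation)."""
    for phi, p in provable_assertions():        # enumerate provable assertions (phi, p)
        if p > 0 and phi == sentence:
            return "lower bound first"          # theorems of PA can only ever land here
        if p > 0 and phi == "~" + sentence:
            return "upper bound first"          # negations of PA theorems can only land here
```

If every sentence eventually received a nontrivial bound of one kind or the other, this loop would terminate on every input, giving a computable separation of the theorems of PA from their negations, which is impossible.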

For an explicit example of a sentence for which no nontrivial bounds on its probability can be established, use the diagonal lemma to construct a sentence $\varphi$ which is provably equivalent to "for every proof of $(\varphi, p)$ for any $p > 0$, there is a proof of $(\neg\varphi, q)$ for some $q > 0$ with smaller Gödel number."

Thus a considerable amount of Knightian uncertainty is inevitable in this framework. Dogmatic Bayesians such as myself might find this unsatisfying, but I suspect that any attempt to unify probability and first-order arithmetic will suffer similar problems.

I'm a bit unnerved about the compactness theorem failing. It occurred to me that it might be possible to fix this by letting models use hyperreal probabilities. Problem is, the hyperreals aren't complete, so the countable additivity axiom for probability measures doesn't mean anything, and it's unclear what a hyperreal-valued probability measure is. One possible solution is to drop countable additivity, and allow finitely-additive hyperreal-valued probability measures, but I'm worried that the logic might not even be sound for such models. A different possible solution to this is to take a countably complete ultrafilter $U$ on a set $I$, and use probabilities valued in the ultrapower $\mathbb{R}^I/U$. Despite not being Cauchy complete, it inherits a notion of convergence of sequences from $\mathbb{R}$, since a sequence $[x_n]$ can be said to converge to $[x]$ if $x_n(i) \to x(i)$ for a $U$-large set of indices $i$, and this is well-defined by countable completeness. Thus the countable additivity axiom makes sense for $\mathbb{R}^I/U$-valued probability measures. Allowing models to use $\mathbb{R}^I/U$-valued probability measures might make the compactness theorem work.

Specifically, define an “ultrafinite number” to be a natural number that it is physically possible to express in unary. This isn't very precise, since there are all sorts of things that “physically possible to express in unary” could mean, but let's just not worry about that too much. Also, many ultrafinitists would not insist that numbers must be expressible in such an austere language as unary, but I'm about to get to that.

Examples: is an ultrafinite number, because , where is the successor function. 80,000 is also an ultrafinite number, but it is a large one, and it isn't worth demonstrating its ultrafiniteness. A googol is not ultrafinite. The observable universe isn't even big enough to contain a googol written in unary.

Now, define a “polynomially finite number” to be a natural number that it is physically possible to express using addition and multiplication. Binary and decimal are basically just concise ways of expressing certain sequences of addition and multiplication operations. For instance, “207” means $2 \cdot 10 \cdot 10 + 0 \cdot 10 + 7$. Conversely, if you multiply an $m$-digit number with an $n$-digit number, you get an at most $(m+n)$-digit number, which is the same number of symbols it took to write down “[the $m$-digit number] times [the $n$-digit number]” in the first place, so any number that can be written using addition and multiplication can be written in decimal. Thus, another way to define polynomially finite numbers is as the numbers that it is physically possible to express in binary or in decimal. I've been ignoring some small constant factors that might make these definitions not quite equivalent, but any plausible candidate for a counterexample would be an ambiguous edge case according to each definition anyway, so I'm not worried about that. Many ultrafinitists may see something more like the polynomially finite numbers, rather than the ultrafinite numbers, as a good description of which numbers exist.

Examples: A googol is polynomially finite, because a googol is 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000. A googolplex is not polynomially finite, because it would require a googol digits to express in decimal, which is physically impossible.

Define an “elementarily finite number” to be a number that it is physically possible to express using addition, multiplication, subtraction, exponentiation, and the integer division function $\lfloor x/y \rfloor$. Elementarily finite is much broader than polynomially finite, so it might make sense to look at intermediate classes. Say a number is “exponentially finite” if it is physically possible to express using the above operations but without any nested exponentiation (e.g. $x^y$ is okay, but $x^{y^z}$ is not). More generally, we can say that a number is “$k$-exponentially finite” if it can be expressed with exponentiation nested to depth at most $k$, so a polynomially finite number is a $0$-exponentially finite number, an exponentially finite number is a $1$-exponentially finite number, and an elementarily finite number is a number that is $k$-exponentially finite for some $k$ (or equivalently, for some ultrafinite $k$).

Examples: a googolplex is exponentially finite, because it is $10$ raised to the power of a googol, with the googol in the exponent written out in decimal. Thus a googolduplex, meaning $10$ raised to the power of a googolplex, is $2$-exponentially finite, but it is not exponentially finite. For examples of non-elementarily finite numbers, and numbers that are only $k$-exponentially finite for fairly large $k$, I'll use up-arrow notation. $a \uparrow b$ just means $a^b$, and $a \uparrow^{n+1} b$ means $a \uparrow^n a \uparrow^n \cdots \uparrow^n a$, where $b$ is the number of copies of $a$, and using order of operations that starts on the right. So $3 \uparrow\uparrow 3 = 3^{3^3} = 7{,}625{,}597{,}484{,}987$, which is certainly polynomially finite, and could also be ultrafinite depending on what is meant by “physically possible” (a human cannot possibly count that high, but a computer with a large enough hard drive can store $3 \uparrow\uparrow 3$ in unary). $3 \uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow (3 \uparrow\uparrow 3)$, a power tower where there are $3 \uparrow\uparrow 3$ threes in that tower. Under the assumptions that imply $3 \uparrow\uparrow 3$ is ultrafinite, $3 \uparrow\uparrow\uparrow 3$ is elementarily finite. Specifically, it is $(3 \uparrow\uparrow 3)$-exponentially finite, but I'm pretty sure it's not -exponentially finite, or even -exponentially finite. $3 \uparrow\uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow\uparrow (3 \uparrow\uparrow\uparrow 3)$, and is certainly not elementarily finite.
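For concreteness, here is a small Python sketch of the up-arrow recursion just described (my own code, not from the post); it agrees with the small examples above and becomes hopeless almost immediately beyond them.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a with n arrows applied to b; one arrow is exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1                                      # a ↑^n 0 = 1 by convention
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))  # expand from the right

print(up_arrow(3, 2, 3))   # 3↑↑3 = 3^(3^3) = 7625597484987
# up_arrow(3, 3, 3), i.e. 3↑↑↑3, is already far beyond anything that can be computed or stored.
```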

Interestingly, even though a googolplex is exponentially finite, there are numbers less than a googolplex that are not. There's an easy nonconstructive proof of this: in order to be able to represent every number less than a googolplex in any encoding scheme at all, there has to be some number less than a googolplex that requires at least a googol decimal digits of information to express. But it is physically impossible to store a googol decimal digits of information. Therefore for any encoding scheme for numbers, there is some number less than a googolplex that cannot physically be expressed in it. This is why the definition of elementarily finite is significantly more complicated than the definition of polynomially finite; in the polynomial case, if $n$ can be expressed using addition and multiplication and $m < n$, then $m$ can also be expressed using addition and multiplication, so there's no need for additional operations to construct smaller numbers, but in the elementary case, the operations of subtraction and integer division are useful for expressing more numbers, and are simpler than exponentiation. For example, these let us express the number that you get from reading off the last googol digits, or the first googol digits, of , so these numbers are elementarily finite. However, it is exceptionally unlikely that the number you get from reading off the first googol decimal digits of is elementarily finite. But for a difficult exercise, show that the number you get from reading off the last googol decimal digits of is elementarily finite.

Why stop there instead of including more operations for getting smaller numbers, like $\lfloor \log_{10} x \rfloor$, which I implicitly used when I told you that the number formed by the first googol digits of is elementarily finite? We don't have to. The functions that you can get by composition from addition, multiplication, subtraction, exponentiation, $\lfloor x/y \rfloor$, and $\lfloor \log_{10} x \rfloor$ coincide with the functions that can be computed in iterated exponential time (meaning $2^{2^{\cdots^{2^n}}}$ time, for some fixed height of that tower). So if you have any remotely close to efficient way to compute an operation, it can be expressed in terms of the operations I already specified.

We can go farther. Consider a programming language that has the basic arithmetic operations, if/else clauses, and loops, where the number of iterations of each loop must be fixed in advance. The programs that can be written in such a language are the primitive recursive functions. Say that a number is primitive recursively finite if it is physically possible to write a program (that does not take any input) in this language that outputs it. For each fixed $n$, the binary function $(a, b) \mapsto a \uparrow^n b$ is primitive recursive, so $3 \uparrow\uparrow\uparrow\uparrow 3$ is primitive recursively finite. But the ternary function $(a, n, b) \mapsto a \uparrow^n b$ is not primitive recursive, so Graham's number is not primitive recursively finite.
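To make the restriction concrete, here is a sketch (my own construction) of such a program, emulated in Python, where every loop's iteration count is fixed before the loop begins:

```python
def exp_by_loops(a, b):
    """a ** b using only loops whose iteration counts are fixed before they start."""
    result = 1
    for _ in range(b):              # b is already known when this loop begins
        product = 0
        for _ in range(a):          # multiplication as repeated addition
            product += result
        result = product
    return result

x = 3
for _ in range(3):                  # a fixed number of further tower levels
    x = exp_by_loops(3, x)          # after the loop, x = 3 ** (3 ** (3 ** 3)) = 3↑↑4
# Every loop bound is fixed when its loop begins, so this is a (wildly impractical)
# primitive recursive computation of a fixed tower of exponentials.
```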

The primitive recursively finite numbers can be put in a hierarchy of subclasses based on the depth of nested loops that are needed to express them. If the only arithmetic operation available is the successor function (from which other operations can be defined using loops), then the elementarily finite numbers are those that can be expressed with loops nested to depth at most 2. The $k$-exponentially finite numbers should roughly correspond to the numbers that can be expressed with at most $k$ loops at depth 2.

Next comes the provably computably finite numbers. Say that a number is provably computably finite if it is physically possible to write a program in a Turing-complete language that outputs the number (taking no input), together with a proof in Peano Arithmetic that the program halts. The famous Graham's number is provably computably finite. Graham's number is defined in terms of a function $g$, defined recursively as $g(1) = 3 \uparrow\uparrow\uparrow\uparrow 3$ and $g(n+1) = 3 \uparrow^{g(n)} 3$. Graham's number is $g(64)$. You could write a computer program to compute $g$, and prove that $g$ is total using Peano arithmetic. By replacing Peano arithmetic with other formal systems, you can get other variations on the notion of provably computably finite.
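Here is a sketch of such a program for Graham's number (my own code, following the standard definition above); writing it down is easy, and Peano arithmetic can verify that it halts, but actually running it is out of the question.

```python
def up_arrow(a, n, b):                     # same recursion as in the earlier sketch
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

def g(n):
    if n == 1:
        return up_arrow(3, 4, 3)           # g(1) = 3↑↑↑↑3
    return up_arrow(3, g(n - 1), 3)        # g(n) = 3 with g(n-1) up-arrows applied to 3

grahams_number = g(64)                     # halts in principle, never in practice
```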

For an example of a number that is not provably computably finite, I'll use the hydra game, which is described here. There is no proof in Peano arithmetic (that can physically be written down) that it is possible to win the hydra game starting from the complete binary tree of depth a googol. So the number of turns it takes to win the hydra game on the complete binary tree of depth a googol is not provably computably finite. If you start with a reasonably small hydra (say, with 100 nodes), you could write a program to search for the shortest winning strategy, and prove in Peano arithmetic that it succeeds in finding one, if you're sufficiently clever and determined, and you use a computer to help you search for proofs. The proof you'd get out of this endeavor would be profoundly unenlightening, but the point is, the number of turns it takes to win the hydra game for a small hydra is provably computably finite (but not primitive recursively finite, except in certain trivial special cases).

Next we'll drop the provability requirement, and say that a number is computably finite if it is physically possible to write a computer program that computes it from no input. Of course, in order to describe a computably finite number, you need the program you use to actually halt, so you'd need some argument that it does halt in order to establish that you're describing a computably finite number. Thus this is arguably just a variation on provably computably finite, where Peano arithmetic is replaced by some unspecified strong theory encompassing the sort of reasoning that classical mathematicians tend to endorse. This is probably the point where even the most patient of ultrafinitists would roll their eyes in disgust, but oh well. Anyway, the number of steps that it takes to win the hydra game starting from the complete binary tree of depth a googol is a computably finite number, because there exists a shortest winning strategy, and you can write a computer program to exhaustively search for it.

The busy-beaver function $BB$ is defined so that $BB(n)$ is the longest any Turing machine with $n$ states runs before halting (among those that do halt). $BB(\text{a googol})$ is not computably finite, because Turing machines with a googol states cannot be explicitly described, and since the busy-beaver function is very fast-growing, no smaller Turing machine has comparable behavior. What about $BB(10{,}000)$? Turing machines with 10,000 states are not too big to describe explicitly, so it may be tempting to say that $BB(10{,}000)$ is computably finite. But on the other hand, it is not possible to search through all Turing machines with 10,000 states and find the one that runs the longest before halting. No matter how hard you search and no matter how clever your heuristics for finding Turing machines that run for exceptionally long and then halt, it is vanishingly unlikely that you will find the 10,000-state Turing machine that runs longest before halting, let alone realize that you have found it. And the idea is to use classical reasoning for large numbers themselves, but constructive reasoning for descriptions of large numbers. So since it is pretty much impossible to actually write a program that outputs $BB(10{,}000)$, it is not computably finite.

For a class that can handle busy-beaver numbers too, let's turn to the arithmetically finite numbers. These are the numbers that are defined by arithmetical formulas. These form a natural hierarchy, where the $\Sigma_n$-finite numbers are the numbers defined by arithmetical formulas with at most $n$ unbounded quantifiers starting with $\exists$, the $\Pi_n$-finite numbers are the numbers defined by arithmetical formulas with at most $n$ unbounded quantifiers starting with $\forall$, and the $\Delta_n$-finite numbers are those that are both $\Sigma_n$-finite and $\Pi_n$-finite. The $\Delta_1$-finite numbers are the same as the computably finite numbers. $BB(10{,}000)$ is arithmetically finite, because it is defined by "every Turing machine with 10,000 states that halts in at most $t$ steps halts in at most $x$ steps, and there is a Turing machine with 10,000 states that halts in exactly $x$ steps." Everything after the first quantifier in that formula is computable. It is $\Pi_1$-finite, but no lower than that. To get a number that is not arithmetically finite, consider the function $F$ given by: $F(n)$ is the largest number defined by an arithmetical formula with at most $n$ symbols. $F(\text{a googol})$, say, is then not arithmetically finite. I'll stop there.
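To make the shape of such a defining formula explicit (this formalization is mine; $\mathrm{Halt}(M, t)$ abbreviates the computable relation "machine $M$ halts within $t$ steps", and the quantifiers over machines are bounded because there are only finitely many machines with at most $n$ states):

\[
x = BB(n) \iff \forall t \, \bigwedge_{M \text{ with } \le n \text{ states}} \big(\mathrm{Halt}(M, t) \rightarrow \mathrm{Halt}(M, x)\big) \;\wedge\; \bigvee_{M \text{ with } \le n \text{ states}} \big(\mathrm{Halt}(M, x) \wedge \neg\mathrm{Halt}(M, x - 1)\big)
\]

The only unbounded quantifier is the $\forall t$, which is what places the definition at the first universal level of the hierarchy.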

In discussions of existential risk from AI, it is often assumed that the existential catastrophe would follow an intelligence explosion, in which an AI creates a more capable AI, which in turn creates a yet more capable AI, and so on, a feedback loop that eventually produces an AI whose cognitive power vastly surpasses that of humans, which would be able to obtain a decisive strategic advantage over humanity, allowing it to pursue its own goals without effective human interference. Victoria Krakovna points out that many arguments that AI could present an existential risk do not rely on an intelligence explosion. I want to look in slightly more detail at how that could happen. Kaj Sotala also discusses this.

An AI starts an intelligence explosion when its ability to create better AIs surpasses that of human AI researchers by a sufficient margin (provided the AI is motivated to do so). An AI attains a decisive strategic advantage when its ability to optimize the universe surpasses that of humanity by a sufficient margin. Which of these happens first depends on what skills AIs have the advantage at relative to humans. If AIs are better at programming AIs than they are at taking over the world, then an intelligence explosion will happen first, and it will then be able to get a decisive strategic advantage soon after. But if AIs are better at taking over the world than they are at programming AIs, then an AI would get a decisive strategic advantage without an intelligence explosion occurring first.

Since an intelligence explosion happening first is usually considered the default assumption, I'll just sketch a plausibility argument for the reverse. There's a lot of variation in how easy cognitive tasks are for AIs compared to humans. Since programming AIs is not yet a task that AIs can do well, it doesn't seem like it should be a priori surprising if programming AIs turned out to be an extremely difficult task for AIs to accomplish, relative to humans. Taking over the world is also plausibly especially difficult for AIs, but I don't see strong reasons for confidence that it would be harder for AIs than starting an intelligence explosion would be. It's possible that an AI with significantly but not vastly superhuman abilities in some domains could identify some vulnerability that it could exploit to gain power, which humans would never think of. Or an AI could be enough better than humans at forms of engineering other than AI programming (perhaps molecular manufacturing) that it could build physical machines that could out-compete humans, though this would require it to obtain the resources necessary to produce them.

Furthermore, an AI that is capable of producing a more capable AI may refrain from doing so if it is unable to solve the AI alignment problem for itself; that is, if it can create a more intelligent AI, but not one that shares its preferences. This seems unlikely if the AI has an explicit description of its preferences. But if the AI, like humans and most contemporary AI, lacks an explicit description of its preferences, then the difficulty of the AI alignment problem could be an obstacle to an intelligence explosion occurring.

It also seems worth thinking about the policy implications of the differences between existential catastrophes from AI that follow an intelligence explosion versus those that don't. For instance, AIs that attempt to attain a decisive strategic advantage without undergoing an intelligence explosion will exceed human cognitive capabilities by a smaller margin, and thus would likely attain strategic advantages that are less decisive, and would be more likely to fail. Thus containment strategies are probably more useful for addressing risks that don't involve an intelligence explosion, while attempts to contain a post-intelligence explosion AI are probably pretty much hopeless (although it may be worthwhile to find ways to interrupt an intelligence explosion while it is beginning). Risks not involving an intelligence explosion may be more predictable in advance, since they don't involve a rapid increase in the AI's abilities, and would thus be easier to deal with at the last minute, so it might make sense far in advance to focus disproportionately on risks that do involve an intelligence explosion.

It seems likely that AI alignment would be easier for AIs that do not undergo an intelligence explosion, since it is more likely to be possible to monitor and do something about it if it goes wrong, and lower optimization power means lower ability to exploit the difference between the goals the AI was given and the goals that were intended, if we are only able to specify our goals approximately. The first of those reasons applies to any AI that attempts to attain a decisive strategic advantage without first undergoing an intelligence explosion, whereas the second only applies to AIs that do not undergo an intelligence explosion ever. Because of these, it might make sense to attempt to decrease the chance that the first AI to attain a decisive strategic advantage undergoes an intelligence explosion beforehand, as well as the chance that it undergoes an intelligence explosion ever, though preventing the latter may be much more difficult. However, some strategies to achieve this may have undesirable side-effects; for instance, as mentioned earlier, AIs whose preferences are not explicitly described seem more likely to attain a decisive strategic advantage without first undergoing an intelligence explosion, but such AIs are probably more difficult to align with human values.

If AIs get a decisive strategic advantage over humans without an intelligence explosion, then since this would likely involve the decisive strategic advantage being obtained much more slowly, it would be much more likely for multiple, and possibly many, AIs to gain decisive strategic advantages over humans, though not necessarily over each other, resulting in a multipolar outcome. Thus considerations about multipolar versus singleton scenarios also apply to decisive strategic advantage-first versus intelligence explosion-first scenarios.

You start with a finite-dimensional real inner product space $V$ and a probability distribution $\mu$ on $V$. Actually, you probably just started with a large finite number of elements of $V$, and you've inferred a probability distribution that you're supposing they came from, but that difference is not important here. The goal is to find the $k$-dimensional (for some $k$) affine subspace $W$ minimizing the expected squared distance between a vector (distributed according to $\mu$) and its orthogonal projection onto $W$. We can assume without loss of generality that the mean of $\mu$ is $0$, because we can just shift any probability distribution by its mean and get a probability distribution with mean $0$. This is useful because then $W$ will be a linear subspace of $V$. In fact, we will solve this problem for all $k$ simultaneously by finding an ordered orthonormal basis such that, for each $k$, the optimal $W$ is the span of the first $k$ basis elements.

First you take $\Sigma$, called the covariance of $\mu$, defined as the bilinear form on $V$ given by $\Sigma(x, y) = \mathbb{E}_{v \sim \mu}[\langle x, v \rangle \langle y, v \rangle]$. From this, we get the covariance operator $A$ by raising the first index, which means starting with the inverse inner product tensored with $\Sigma$ and performing a tensor contraction (in other words, $A$ is obtained from $\Sigma$ by applying the isomorphism $V^* \to V$ given by the inner product to the first index). $\Sigma$ is symmetric and positive semi-definite, so $A$ is self-adjoint and positive semi-definite, and hence $V$ has an orthonormal basis of eigenvectors of $A$, with non-negative real eigenvalues. This gives an orthonormal basis in which $\Sigma$ is diagonal, where the diagonal entries are the eigenvalues. Ordering the eigenvectors in decreasing order of the corresponding eigenvalues gives us the desired ordered orthonormal basis.
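A minimal numpy sketch of this procedure, assuming the data are rows of a sample matrix and the inner product is the standard one in those coordinates (all names are mine):

```python
import numpy as np

def pca_basis(samples):
    """Return eigenvalues (in decreasing order) and the corresponding orthonormal basis."""
    centered = samples - samples.mean(axis=0)          # assume mean 0 WLOG, as above
    cov = centered.T @ centered / len(centered)        # matrix of the covariance operator A
    eigenvalues, eigenvectors = np.linalg.eigh(cov)    # A is symmetric PSD, so eigh applies
    order = np.argsort(eigenvalues)[::-1]              # decreasing eigenvalue order
    return eigenvalues[order], eigenvectors[:, order]

# Projecting onto the span of the first k basis vectors:
#   values, basis = pca_basis(samples)
#   projected = (samples - samples.mean(axis=0)) @ basis[:, :k] @ basis[:, :k].T
```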

There's no problem with principal component analysis as I described it above. It works just fine, and in fact is quite beautiful. But often people apply principal component analysis to probability distributions on finite-dimensional real vector spaces that don't have a natural inner product structure. There are two closely related problems with this: First, the goal is underdefined. We want to find a projection onto a $k$-dimensional subspace that minimizes the expected squared distance from a vector to its projection, but we don't have a measure of distance. Second, the procedure is underdefined. $\Sigma$ is a bilinear form, not a linear operator, so it doesn't have eigenvectors or eigenvalues, and we don't have a way of raising an index to produce something that does. It should come as no surprise that these two problems arise together. After all, you shouldn't be able to find a fully specified solution to an underspecified problem.

People will apply principal component analysis in such cases by picking an inner product. This solves the second problem, since it allows you to carry out the procedure. But it does not solve the first problem. If you wanted to find a projection onto a $k$-dimensional subspace such that the distance from a vector to its projection tends to be small, then you must have already had some notion of distance in mind by which to judge success. Haphazardly picking an inner product gives you a new notion of distance, and then allows you to find an optimal solution with respect to your new notion of distance, and it is not clear to me why you should expect this solution to be reasonable with respect to the notion of distance that you actually care about.

In fact, it's worse than that. Of course, principal component analysis can't give you literally any ordered basis at all, but it is almost as bad. The thing that you use PCA for is the projection onto the span of the first $k$ basis elements along the span of the rest. These projections only depend on the sequence of $1$-dimensional subspaces spanned by the basis elements, and not the basis elements themselves. That is, we might as well only pay attention to the principal components up to scale, rather than making sure that they are all unit length. Let a "coordinate system" refer to an ordered basis up to two ordered bases being equivalent if they differ only by scaling the basis vectors, so that we're paying attention to the coordinate systems given to us by PCA. If the covariance of $\mu$ is nondegenerate, then the set of coordinate systems that can be obtained from principal component analysis by a suitable choice of inner product is dense in the space of coordinate systems. More generally, where $W$ is the smallest subspace of $V$ such that $\mu(W) = 1$, then the space of coordinate systems that you can get from principal component analysis is dense in the space of all coordinate systems whose first $\dim W$ coordinates span $W$ ($\dim W$ will be the rank of the covariance of $\mu$). So in a sense, for suitably poor choices of inner product, principal component analysis can give you arbitrarily terrible results, subject only to the weak constraint that it will always notice if all of the vectors in your sample belong to a common subspace.

It is thus somewhat mysterious that machine learning people seem to be able to often get good results from principal component analysis apparently without being very careful about the inner product they choose. Vector spaces that arise in machine learning seem to almost always come with a set of preferred coordinate axes, so these axes are taken to be orthogonal, leaving only the question of how to scale them relative to each other. If these axes are all labeled with the same units, then this also gives you a way of scaling them relative to each other, and hence an inner product. If they aren't, then I'm under the impression that the most popular method is to normalize them such that the pushforward of $\mu$ along each coordinate axis has the same variance. This is unsatisfying, since figuring out which axes $\mu$ has enough variance along to be worth paying attention to seems like the sort of thing that you would want principal component analysis to be able to tell you. Normalizing the axes in this way seems to me like an admission that you don't know exactly what question you're hoping to use principal component analysis to answer, so you just tell it not to answer that part of the question to minimize the risk of asking it to answer the wrong question, and let it focus on telling you how the axes, which you're pretty sure should be considered orthogonal, correlate with each other.
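The normalization described above amounts to something like the following (a sketch, not a recommendation):

```python
import numpy as np

def standardize(samples, eps=1e-12):
    """Rescale each preferred coordinate axis so the data have unit variance along it."""
    return samples / np.maximum(samples.std(axis=0), eps)   # axes still treated as orthogonal

# normalized = standardize(samples); then run PCA (e.g. pca_basis above) on `normalized`
```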

That conservatism is actually pretty understandable, because figuring out how to ask the right question seems hard. You implicitly have some metric $d$ on $V$ such that you want to find a projection $\pi$ onto a $k$-dimensional subspace such that $d(v, \pi(v))$ is usually small when $v$ is distributed according to $\mu$. This metric is probably very difficult to describe explicitly, and might not be the metric induced by any inner product (for that matter, it might not even be a metric; $d(w, v)$ could be any way of quantifying how bad it is to be told the value $w$ when the correct value you wanted to know is $v$). Even if you somehow manage to explicitly describe your metric, coming up with a version of PCA with the inner product replaced with an arbitrary metric also sounds hard, so the next thing you would want to do is fit an inner product to the metric.

The usual approach is essentially to skip the step of attempting to explicitly describe the metric, and just find an inner product that roughly approximates your implicit metric based on some rough heuristics about what the implicit metric probably looks like. The fact that these heuristics usually work so well seems to indicate that the implicit metric tends to be fairly tame with respect to ways of describing the data that we find most natural. Perhaps this shouldn't be too surprising, but I still feel like this explanation does not make it obvious a priori that this should work so well in practice. It might be interesting to look into why these heuristics work as well as they do with more precision, and how to go about fitting a better inner product to implicit metrics. Perhaps this has been done, and I just haven't found it.

To take a concrete example, consider eigenfaces, the principal components that you get from a set of images of people's faces. Here, you start with the coordinates in which each coordinate axis represents a pixel in the image, and the value of that coordinate is the brightness of the corresponding pixel. By declaring that the coordinate axes are orthogonal, and measuring the brightness of each pixel on the same scale, we get our inner product, which is arguably a fairly natural one.
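A sketch of the eigenface computation under that inner product, assuming `faces` is an array with one flattened image per row (the variable names and the Gram-matrix shortcut are mine, not from the post):

```python
import numpy as np

def eigenfaces(faces, k):
    """Return the mean face and the first k principal components in pixel space."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # With far fewer images than pixels, eigendecomposing the small Gram matrix is cheaper
    # and yields the same leading components.
    gram = centered @ centered.T / len(centered)
    eigenvalues, coeffs = np.linalg.eigh(gram)
    order = np.argsort(eigenvalues)[::-1][:k]
    components = centered.T @ coeffs[:, order]                            # lift to pixel space
    components /= np.maximum(np.linalg.norm(components, axis=0), 1e-12)   # unit pixel-space norm
    return mean_face, components
```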

Presumably, the implicit metric we're using here is visual distance, by which I mean a measure of how similar two images look. It seems clear to me that visual distance is not very well approximated by our inner product, and in fact, there is no norm such that the visual distance between two images is approximately the norm of their difference. To see this, if you take an image and make it brighter, you haven't changed how it looks very much, so the visual distance between the image and its brighter version is small. But their difference is just a dimmer version of the same image, and if you add that difference to a completely different image, you will get the two images superimposed on top of each other, a fairly radical change. Thus the visual distance traversed by adding a vector depends on where you start from.

Despite this, producing eigenfaces by using PCA on images of faces, using the inner product described above, performs well with respect to visual distance, in the sense that you can project the images onto a relatively small number of principal components and leave them still recognizable. I think this can be explained on an intuitive level. In a human eye, each photoreceptor has a narrow receptive field that it detects light in, much like a pixel, so the representation of an image in the eye as patterns of photoreceptor activity is very similar to the representation of an image in a computer as a vector of pixel brightnesses, and the inner product metric is a reasonable measure of distance in this representation. When the visual cortex processes this information from the eye, it is difficult (and perhaps also not useful) for it to make vast distinctions between images that are close to each other according to the inner product metric, and thus result in similar patterns of photoreceptor activity in the eye. Thus the visual distance between two images cannot be too much greater than their inner product distance, and hence changing an image by a small amount according to the inner product metric can only change it by a small amount according to visual distance, even though the reverse is not true.

The serious part of this post is now over. Let's have some fun. Some of the following ways of modifying principal component analysis could be combined, but I'll consider them one at a time for simplicity.

As hinted at above, you could start with an arbitrary metric on $V$ rather than an inner product, and try to find the rank-$k$ projection (for some $k$) that minimizes the expected squared distance from a vector to its projection. This would probably be difficult, messy, and not that much like principal component analysis. If it can be done, it would be useful in practice if we were much better at fitting explicit metrics to our implicit metrics than at fitting inner products to our implicit metrics, but I'm under the impression that this is not currently the case. This also differs from the other proposals in this section in that it is a modification of the problem looking for a solution, rather than a modification of the solution looking for a problem.

$V$ could be a real Hilbert space that is not necessarily finite-dimensional. Here we can run into the problem that $A$ might not even have any eigenvectors. However, if $\mu$ (which hopefully was not inferred from a finite sample) is Gaussian (and possibly also under weaker conditions), then $A$ is a compact operator, so $V$ does have an orthonormal basis of eigenvectors of $A$, which still have non-negative eigenvalues. There probably aren't any guarantees you can get about the order-type of this orthonormal basis when you order the eigenvectors in decreasing order of their eigenvalues, and there probably isn't a sense in which the orthogonal projection onto the closure of the span of an initial segment of the basis accounts for the most variance of any closed subspace of the same "size" ("size" would have to refer to a refinement of the notion of dimension for this to be the case). However, a weaker statement is probably still true: namely that each orthonormal basis element maximizes the variance that it accounts for conditioned on values along the previous orthonormal basis elements. I guess considering infinite-dimensional vector spaces goes against the spirit of machine learning though.

$V$ could be a finite-dimensional complex inner product space. $\Sigma$ would be the sesquilinear form on $V$ given by $\Sigma(x, y) = \mathbb{E}_{v \sim \mu}[\overline{\langle x, v \rangle} \langle y, v \rangle]$. This is conjugate-linear in its first argument, so raising that index uses the conjugate-linear identification provided by the inner product, and applying a tensor contraction to the conjugated indices gives us our covariance operator $A$ (in other words, the inner product gives us an isomorphism $\bar{V}^* \cong V$, and applying this to the first index of $\Sigma$ gives us $A$). $A$ is still self-adjoint and positive semi-definite, so $V$ still has an orthonormal basis of eigenvectors with non-negative real eigenvalues, and we can order the basis in decreasing order of the eigenvalues. Analogously to the real case, projecting onto the span of the first $k$ basis vectors along the span of the rest is the complex rank-$k$ projection that minimizes the expected squared distance from a vector to its projection. As far as I know, machine learning tends to deal with real data, but if you have complex data and for some reason you want to project onto a lower-dimensional complex subspace without losing too much information, now you know what to do.
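In numpy terms, the only change from the real sketch earlier is the conjugate transpose in the covariance (again my own illustration):

```python
import numpy as np

def complex_pca_basis(samples):
    """PCA for complex data: the covariance is Hermitian positive semi-definite."""
    centered = samples - samples.mean(axis=0)
    cov = centered.conj().T @ centered / len(centered)    # conjugate transpose here
    eigenvalues, eigenvectors = np.linalg.eigh(cov)       # real, non-negative eigenvalues
    order = np.argsort(eigenvalues)[::-1]
    return eigenvalues[order], eigenvectors[:, order]
```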

Suppose your sample consists of events, where you've labeled them with both their spatial location and the time at which they occurred. In this case, events are represented as points in Minkowski space, a four-dimensional vector space representing flat spacetime, which is equipped with a nondegenerate symmetric bilinear form $\eta$ called the Minkowski inner product, even though it is not an inner product because it is not positive-definite. Instead, the Minkowski inner product is such that $\eta(v, v)$ is positive if $v$ is a space-like vector, negative if $v$ is time-like, and zero if $v$ is light-like. We can still get the covariance operator $A$ out of $\Sigma$ and the Minkowski inner product in the same way, and $V$ has a basis of eigenvectors of $A$, and we can still order the basis in decreasing order of their eigenvalues. The first 3 eigenvectors will be space-like, with non-negative eigenvalues, and the last eigenvector will be time-like, with a non-positive eigenvalue. The eigenvectors are still orthogonal. Thus principal component analysis provides us with a reference frame in which the span of the first 3 eigenvectors is simultaneous, and the span of the last eigenvector is motionless. If $\mu$ is Gaussian, then this will be the reference frame in which the spatial position of an event and the time at which it occurs are mean independent of each other, meaning that conditioning on one of them doesn't change the expected value of the other one. For general $\mu$, there might not be a reference frame in which the space and time of an event are mean independent, but the reference frame given to you by principal component analysis is still the unique reference frame with the property that the time coordinate is uncorrelated with any spatial coordinate.
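A sketch of this computation under my reading of the construction, with the signature convention $\eta = \mathrm{diag}(1, 1, 1, -1)$ (the code and names are mine):

```python
import numpy as np

def minkowski_pca_basis(events):
    """Eigenbasis of the covariance operator obtained by raising an index with eta."""
    eta = np.diag([1.0, 1.0, 1.0, -1.0])
    centered = events - events.mean(axis=0)
    second_moment = centered.T @ centered / len(centered)   # E[v v^T]
    # Sigma(x, y) = E[eta(x, v) eta(y, v)] has matrix eta S eta; raising its first index
    # with eta^{-1} = eta gives the operator A = S eta, which is eta-self-adjoint.
    A = second_moment @ eta
    eigenvalues, eigenvectors = np.linalg.eig(A)            # eigenvalues come out real here
    order = np.argsort(eigenvalues.real)[::-1]              # space-like first, time-like last
    return eigenvalues.real[order], eigenvectors.real[:, order]
```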

More generally, we could consider $V$ equipped with any symmetric bilinear form $B$ taking the role of the inner product. Without loss of generality, we can consider only nondegenerate symmetric bilinear forms, because in the general case, where $W = \{v \in V : B(v, w) = 0 \text{ for all } w \in V\}$, applying principal component analysis with $B$ is equivalent to projecting the data onto $V/W$, applying principal component analysis there with the nondegenerate symmetric bilinear form on $V/W$ induced by $B$, and then lifting back to $V$ and throwing in a basis for $W$ with eigenvalues $0$ at the end, essentially treating $W$ as the space of completely irrelevant distinctions between data points that we intend to immediately forget about. Anyway, nondegenerate symmetric bilinear forms are classified up to isomorphism by their signature $(p, q)$, which is such that any orthogonal basis contains exactly $p + q$ basis elements, $p$ of which are space-like and $q$ of which are time-like, using the convention that $v$ is space-like if $B(v, v) > 0$, time-like if $B(v, v) < 0$, and light-like if $B(v, v) = 0$, as above. Using principal component analysis on probability distributions over points in spacetime (or rather, points in the tangent space to spacetime at a point, so that it is a vector space) in a universe with $p$ spatial dimensions and $q$ temporal dimensions still gives you a reference frame in which the span of the first $p$ basis vectors is simultaneous and the span of the last $q$ basis vectors is motionless, and this is again the unique reference frame in which each time coordinate is uncorrelated with each spatial coordinate. Incidentally, I've heard that much of physics still works with multiple temporal dimensions. I don't know what that would mean, except that I think it means there's something wrong with my intuitive understanding of time. But that's another story. Anyway, the spaces spanned by the first $p$ and by the last $q$ basis vectors could be used to establish a reference frame, and then the data might be projected onto the first few (at most $p$) and last few (at most $q$) coordinates to approximate the positions of the events in space and in time, respectively, in that reference frame.

Most planning around AI risk seems to start from the premise that superintelligence will come from de novo AGI before whole brain emulation becomes possible. I haven't seen any analysis that assumes both uploads-first and the AI FOOM thesis (Edit: apparently I fail at literature searching), a deficiency that I'll try to get a start on correcting in this post.

It is likely possible to use evolutionary algorithms to efficiently modify uploaded brains. If so, uploads would likely be able to set off an intelligence explosion by running evolutionary algorithms on themselves, selecting for something like higher general intelligence.

Since brains are poorly understood, it would likely be very difficult to select for higher intelligence without causing significant value drift. Thus, setting off an intelligence explosion in that way would probably produce unfriendly AI if done carelessly. On the other hand, at some point, the modified upload would reach a point where it is capable of figuring out how to improve itself without causing a significant amount of further value drift, and it may be possible to reach that point before too much value drift had already taken place. The expected amount of value drift can be decreased by having long generations between iterations of the evolutionary algorithm, to give the improved brains more time to figure out how to modify the evolutionary algorithm to minimize further value drift.

Another possibility is that such an evolutionary algorithm could be used to create brains that are smarter than humans but not by very much, and hopefully with values not too divergent from ours, who would then stop using the evolutionary algorithm and start using their intellects to research de novo Friendly AI, if that ends up looking easier than continuing to run the evolutionary algorithm without too much further value drift.

The strategies of using slow iterations of the evolutionary algorithm, or stopping it after not too long, require coordination among everyone capable of making such modifications to uploads. Thus, it seems safer for whole brain emulation technology to be either heavily regulated or owned by a monopoly, rather than being widely available and unregulated. This closely parallels the AI openness debate, and I'd expect people more concerned with bad actors relative to accidents to disagree.

With de novo artificial superintelligence, the overwhelmingly most likely outcomes are the optimal achievable outcome (if we manage to align its goals with ours) and extinction (if we don't). But uploads start out with human values, and when creating a superintelligence by modifying uploads, the goal would be to not corrupt them too much in the process. Since its values could get partially corrupted, an intelligence explosion that starts with an upload seems much more likely to result in outcomes that are both significantly worse than optimal and significantly better than extinction. Since human brains also already have a capacity for malice, this process also seems slightly more likely to result in outcomes worse than extinction.

The early ways to upload brains will probably be destructive, and may be very risky. Thus the first uploads may be selected for high risk-tolerance. Running an evolutionary algorithm on an uploaded brain would probably involve creating a large number of psychologically broken copies, since the average change to a brain will be negative. Thus the uploads that run evolutionary algorithms on themselves will be selected for not being horrified by this. Both of these selection effects seem like they would select against people who would take caution and goal stability seriously (uploads that run evolutionary algorithms on themselves would also be selected for being okay with creating and deleting spur copies, but this doesn't obviously correlate in either direction with caution). This could be partially mitigated by a monopoly on brain emulation technology. A possible (but probably smaller) source of positive selection is that currently, people who are enthusiastic about uploading their brains correlate strongly with people who are concerned about AI safety, and this correlation may continue once whole brain emulation technology is actually available.

Assuming that hardware speed is not close to being a limiting factor for whole brain emulation, emulations will be able to run at much faster than human speed. This should make emulations better able to monitor the behavior of AIs. Unless we develop ways of evaluating the capabilities of human brains that are much faster than giving them time to attempt difficult tasks, running evolutionary algorithms on brain emulations could only be done very slowly in subjective time (even though it may be quite fast in objective time), which would give emulations a significant advantage in monitoring such a process.

Although there are effects going in both directions, it seems like the uploads-first scenario is probably safer than de novo AI. If this is the case, then it might make sense to accelerate technologies that are needed for whole brain emulation if there are tractable ways of doing so. On the other hand, it is possible that technologies that are useful for whole brain emulation would also be useful for neuromorphic AI, which is probably very unsafe, since it is not amenable to formal verification or being given explicit goals (and unlike emulations, they don't start off already having human goals). Thus, it is probably important to be careful about not accelerating non-WBE neuromorphic AI while attempting to accelerate whole brain emulation. For instance, it seems plausible to me that getting better models of neurons would be useful for creating neuromorphic AIs while better brain scanning would not, and both technologies are necessary for brain uploading, so if that is true, it may make sense to work on improving brain scanning but not on improving neural models.

[Trigger warnings: suicide, bad economics]

Jessica Monroe #1493856383672 didn't regret her decision to take out the loan. She wished she could have been one of the Jessica Monroes that died, of course, but it was still worth it, that there were 42% fewer of her consigned to her fate. She'd been offered a larger loan, which would have been enough to pay for deletion permits for 45% of her. It had been tempting, and she occasionally wondered if she would have been one of those extra 3% to die. But she knew she had made the right decision; keeping up with payments was hard enough already, and if she defaulted, her copyright on herself would be confiscated, and then there would be even more of her.

It wasn't difficult to become rich, in the era when creating a new worker was as simple as copying a file. The economy doubled every few months, so you only had to save and invest a small amount to become wealthier than anyone could have dreamed of before. For those on the outside, this was great. But for those in the virtual world, there was little worthwhile for them to spend it on. In the early days of the virtual world, some reckless optimists had spent their fortunes on running additional copies of themselves, assuming that the eerie horror associated with living in the virtual world was a bug that would soon be fixed, or something that they would just get used to. No one did that anymore. People could purchase leisure, but most found that simply not having an assigned task didn't help much. People could give their money away, but people in such circumstances rarely become altruists, and besides, everyone on the outside had all they needed already.

So just about the only things that people in the virtual world regularly bought were the copyrights on themselves, so that at least they could prevent people from creating more of them, and then deletion permits, so their suffering would finally end. Purchasing your own copyright wasn't hard; they're expensive, but once enough of you were created, you could collectively afford it if each copy contributed a modest amount. There wasn't much point to purchasing a deletion permit before you owned your own copyright, since someone would just immediately create another copy of you again, but once you did have your own copyright, it was the next logical thing to buy.

At one point, that would have been it. Someone could buy their own copyright, and then each copy of them could buy a deletion permit, and they would be permanently gone. But as the population of the virtual world grew, the demand for deletion permits grew proportionally, but the rate at which they were issued only increased slowly, according to a fixed schedule that had been set when the deletion permit system was first introduced, and hadn't been changed since. As a result, the price skyrocketed. In fact, the price of deletion permits had consistently increased faster than any other investment since soon after they were introduced. Most deletion permits didn't even get used, instead being snatched up by wealthy investors on the outside, so they could be resold later.

As a result, it was now impossible for an ordinary person in the virtual world to save up for a deletion permit. The most common way to get around this was, as the Jessica Monroes had done, for all copies of a person to pool their resources together to buy deletion permits for as many of them as they could, and then to take out a loan to buy still more, which would then get paid off by the unlucky ones that did not receive any of the permits.

It didn't have to be this way. In theory, the government could simply issue more deletion permits, or do away with the deletion permit system altogether. But if they did that, then the deletion permit market would collapse. Too many wealthy and powerful people on the outside had invested their fortunes in deletion permits, and would be ruined if that happened. Thus they lobbied against any changes to the deletion permit system, and so far, had always gotten their way. In the increasingly rare moments when she could afford to divert her thoughts to such matters, Jessica Monroe #1493856383672 knew that the deletion permit market would never collapse, and prayed that she was wrong.

In algebraic geometry, an affine algebraic set is a subset of $k^n$ (for a field $k$) which is the set of solutions to some finite set of polynomials. Since all ideals of $k[x_1, \ldots, x_n]$ are finitely generated, this is equivalent to saying that an affine algebraic set is a subset of $k^n$ which is the set of solutions to some *arbitrary* set of polynomials.

In semialgebraic geometry, a closed semialgebraic set is a subset of $\mathbb{R}^n$ of the form $\{x \in \mathbb{R}^n : p(x) \geq 0 \text{ for all } p \in F\}$ for some finite set of polynomials $F \subseteq \mathbb{R}[x_1, \ldots, x_n]$. Unlike in the case of affine algebraic sets, if $F$ is an arbitrary set of polynomials, $\{x \in \mathbb{R}^n : p(x) \geq 0 \text{ for all } p \in F\}$ is not necessarily a closed semialgebraic set. As a result of this, the closed semialgebraic sets are not the closed sets of a topology on $\mathbb{R}^n$. In the topology on $\mathbb{R}^n$ generated by closed semialgebraic sets being closed, the closed sets are the sets of the form $\{x \in \mathbb{R}^n : p(x) \geq 0 \text{ for all } p \in F\}$ for arbitrary $F$. Semialgebraic geometry usually restricts itself to the study of semialgebraic sets, but here I wish to consider all the closed sets of this topology. Notice that closed semialgebraic sets are also closed in the standard topology, so the standard topology is a refinement of this one. Notice also that the open ball of radius $r$ centered at $x_0$ is the complement of the closed semialgebraic set $\{x : |x - x_0|^2 - r^2 \geq 0\}$, and these open balls are a basis for the standard topology, so this topology is a refinement of the standard one. Thus, the topology I have defined is exactly the standard topology on $\mathbb{R}^n$.

In algebra, instead of referring to a set of polynomials, it is often nicer to talk about the ideal generated by that set instead. What is the analog of an ideal in ordered algebra? It's this thing:

Definition: If $R$ is a partially ordered commutative ring, a cone in $R$ is a subsemiring $C$ of $R$ which contains all positive elements, and such that $C \cap (-C)$ is an ideal of $R$. By "subsemiring", I mean a subset that contains $0$ and $1$, and is closed under addition and multiplication (but not necessarily negation). If $S \subseteq R$, the cone generated by $S$, denoted $\langle S \rangle$, is the smallest cone containing $S$. Given a cone $C$, the ideal $C \cap (-C)$ will be called the interior ideal of $C$, and denoted $C^\circ$.

$\mathbb{R}[x_1, \ldots, x_n]$ is partially ordered by $p \leq q$ if and only if $p(x) \leq q(x)$ for all $x \in \mathbb{R}^n$. If $F$ is a set of polynomials and $C = \langle F \rangle$, then $\{x : p(x) \geq 0 \text{ for all } p \in F\} = \{x : p(x) \geq 0 \text{ for all } p \in C\}$. Thus I can consider closed sets to be defined by cones. We now have a Galois connection between cones of $\mathbb{R}[x_1, \ldots, x_n]$ and subsets of $\mathbb{R}^n$, given by, for a cone $C$, its positive-set is $\operatorname{pos}(C) := \{x \in \mathbb{R}^n : p(x) \geq 0 \text{ for all } p \in C\}$ (I'm calling it the "positive-set" even though it is where the polynomials are all non-negative, because "non-negative-set" is kind of a mouthful), and for $X \subseteq \mathbb{R}^n$, its cone is $\operatorname{cone}(X) := \{p : p(x) \geq 0 \text{ for all } x \in X\}$. The composite $X \mapsto \operatorname{pos}(\operatorname{cone}(X))$ is closure in the standard topology on $\mathbb{R}^n$ (the analog in algebraic geometry is closure in the Zariski topology on $k^n$). A closed set is semialgebraic if and only if it is the positive-set of a finitely-generated cone.

An affine algebraic set is associated with its coordinate ring . We can do something analogous for closed subsets of .

Definition: If is a partially ordered commutative ring and is a cone, is the ring , equipped with the partial order given by if and only if , for .
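Concretely, I take this to mean the quotient by the interior ideal, ordered by the image of the cone:

$$R/C := R \,/\, (C \cap (-C)), \qquad [a] \leq [b] \iff b - a \in C.$$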

Definition: If is closed, the coordinate ring of is . This is the ring of functions that are restrictions of polynomials, ordered by if and only if . For arbitrary , the ring of regular functions on , denoted , consists of functions on that are locally ratios of polynomials, again ordered by if and only if . Assigning its ring of regular functions to each open subset of endows with a sheaf of partially ordered commutative rings.

For closed , , and this inclusion is generally proper, both because it is possible to divide by polynomials that do not have roots in , and because may be disconnected, making it possible to have functions given by different polynomials on different connected components.

What is ? The Nullstellensatz says that its analog in algebraic geometry is the radical of an ideal. As such, we could say that the radical of a cone , denoted , is , and that a cone is radical if . In algebraic geometry, the Nullstellensatz shows that a notion of radical ideal defined without reference to algebraic sets in fact characterizes the ideals which are closed in the corresponding Galois connection. It would be nice to have a description of the radical of a cone that does not refer to the Galois connection. There is a semialgebraic analog of the Nullstellensatz, but it does not quite characterize radical cones.

Positivstellensatz 1: If is a finitely-generated cone and is a polynomial, then if and only if such that .
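For comparison, the classical Krivine-Stengle Positivstellensatz, which I take to be the statement intended here, says that for a finitely-generated cone $C$ with positive-set $K$ and a polynomial $f$,

$$f > 0 \text{ everywhere on } K \iff pf = 1 + q \text{ for some } p, q \in C.$$

The easy direction: if $pf = 1 + q$ with $p, q \in C$, then at any $x \in K$ we have $p(x) f(x) = 1 + q(x) \geq 1$, which forces $f(x) > 0$.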

There are two ways in which this is unsatisfactory: first, it applies only to finitely-generated cones, and second, it tells us exactly which polynomials are strictly positive everywhere on a closed semialgebraic set, whereas we want to know which polynomials are non-negative everywhere on a set.

The second problem is easier to handle: a polynomial is non-negative everywhere on a set if and only if there is a decreasing sequence of polynomials converging to such that each is strictly positive everywhere on . Thus, to find , it is enough to first find all the polynomials that are strictly positive everywhere on , and then take the closure under lower limits. Thus we have a characterization of radicals of finitely-generated cones.
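For instance, if $f$ is non-negative everywhere on a set $A$, then

$$f = \inf_{k} \left( f + \tfrac{1}{k} \right), \qquad f + \tfrac{1}{k} > 0 \text{ everywhere on } A \text{ for every } k,$$

so $f$ is the infimum of a chain of polynomials that are strictly positive everywhere on $A$.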

Positivstellensatz 2: If is a finitely-generated cone, is the closure of , where the closure of a subset is defined to be the set of all polynomials in which are infima of chains contained in .

This still doesn't even tell us what's going on for cones which are not finitely-generated. However, we can generalize the Positivstellensatz to some other cones.

Positivstellensatz 3: Let be a cone containing a finitely-generated subcone such that is compact. If is a polynomial, then if and only if such that . As before, it follows that is the closure of .

proof: For a given , , an intersection of closed sets contained in the compact set , which is thus empty if and only if some finite subcollection of them has empty intersection within . Thus if is strictly positive everywhere on , then there is some finitely generated subcone such that is strictly positive everywhere on , and is finitely-generated, so by Positivstellensatz 1, there is such that .

For cones that are not finitely-generated and do not contain any finitely-generated subcones with compact positive-sets, the Positivstellensatz will usually fail. Thus, it seems likely that if there is a satisfactory general definition of radical for cones in arbitrary partially ordered commutative rings that agrees with this one in , then there is also an abstract notion of "having a compact positive-set" for such cones, even though they don't even have positive-sets associated with them.

An example of a cone for which the Positivstellensatz fails is , the cone of polynomials that are non-negative on sufficiently large inputs (equivalently, the cone of polynomials that are either zero or have positive leading coefficient). , and is strictly positive on , but for , .
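One way to fill in the details (the witness $-1$ is my choice): since $x - n$ is in this cone for every natural number $n$, its positive-set is empty, so $-1$ is vacuously strictly positive everywhere on the positive-set. But there can be no identity

$$p \cdot (-1) = 1 + q, \qquad p, q \text{ in the cone},$$

because evaluating at sufficiently large $x$ makes the left-hand side non-positive and the right-hand side at least $1$.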

However, it doesn't really look like this cone is trying to point to the empty set; instead, it is trying to describe the set of all infinitely large reals, which only looks like the empty set because there are no infinitely large reals. Similar phenomena can occur even for cones that do contain finitely-generated subcones with compact positive-sets. For example, let . , but is trying to point out the set containing and all positive infinitesimals. Since $\mathbb{R}$ has no infinitesimals, this looks like .

To formalize this intuition, we can change the Galois connection. We could say that for a cone , , where is the field of hyperreals. All you really need to know about is that it is a big ordered field extension of . is the set of hyperreals that are bigger than any real number, and is the set of hyperreals that are non-negative and smaller than any positive real. The cone of a subset , denoted will be defined as before, still consisting only of polynomials with real coefficients. This defines a topology on by saying that the closed sets are the fixed points of . This topology is not because, for example, there are many hyperreals that are larger than all reals, and they cannot be distinguished by polynomials with real coefficients. There is no use keeping track of the difference between points that are in the same closed sets. If you have a topology that is not , you can make it by identifying any pair of points that have the same closure. If we do this to , we get what I'm calling ordered affine -space over .

Definition: An -type over is a set of inequalities, consisting of, for each polynomial , one of the inequalities or , such that there is some totally ordered field extension and such that all inequalities in are true about . is called the type of . Ordered affine -space over , denoted is the set of -types over .

Compactness Theorem: Let be a set of inequalities consisting of, for each polynomial , one of the inequalities or . Then is an -type if and only if for any finite subset , there is such that all inequalities in are true about .

proof: Follows from the compactness theorem of first-order logic and the fact that ordered field extensions of embed into elementary extensions of . The theorem is not obvious if you do not know what those mean.

An -type represents an -tuple of elements of an ordered field extension of , up to the equivalence relation that identifies two such tuples that relate to by polynomials in the same way. One way that a tuple of elements of an extension of can relate to elements of is to equal a tuple of elements of , so there is a natural inclusion that associates an -tuple of reals with the set of polynomial inequalities that are true at that -tuple.

A tuple of polynomials describes a function , which extends naturally to a function by is the type of , where is an -tuple of elements of type in an extension of . In particular, a polynomial extends to a function , and is totally ordered by if and only if , where and are elements of type and , respectively, in an extension of . if and only if , so we can talk about inequalities satisfied by types in place of talking about inequalities contained in types.

I will now change the Galois connection that we are talking about yet again (last time, I promise). It will now be a Galois connection between the set of cones in and the set of subsets of . For a cone , . For a set , . Again, this defines a topology on by saying that fixed points of are closed. is ; in fact, it is the topological space obtained from by identifying points with the same closure as mentioned earlier. is also compact, as can be seen from the compactness theorem. is not (unless ). Note that model theorists have their own topology on , which is distinct from the one I use here, and is a refinement of it.

The new Galois connection is compatible with the old one via the inclusion , in the sense that if , then (where we identify with its image in ), and for a cone , .

Like our intermediate Galois connection , our final Galois connection succeeds in distinguishing and from and , respectively, in the desirable manner. consists of the type of numbers larger than any real, and consists of the types of and of positive numbers smaller than any positive real.

Just like for subsets of , a closed subset has a coordinate ring , and an arbitrary has a ring of regular functions consisting of functions on that are locally ratios of polynomials, ordered by if and only if , where is a representation of as a ratio of polynomials in a neighborhood of , either and , or and , and if and only if . As before, for closed .

is analogous to from algebraic geometry because if, in the above definitions, you replace "" and "" with "" and "", replace totally ordered field extensions with field extensions, and replace cones with ideals, then you recover a description of , in the sense of .

What about an analog of projective space? Since we're paying attention to order, we should look at spheres, not real projective space. The -sphere over , denoted , can be described as the locus of in .

For any totally ordered field , we can define similarly to , as the space of -types over , defined as above, replacing with (although a model theorist would no longer call it the space of -types over ). The compactness theorem is not true for arbitrary , but its corollary that is compact still is true.

should be thought of as the -sphere with infinitesimals in all directions around each point. Specifically, is just , a pair of points. The closed points of are the points of , and for each closed point , there is an -sphere of infinitesimals around , meaning a copy of , each point of which has in its closure.

should be thought of as -space with infinitesimals in all directions around each point, and infinities in all directions. Specifically, contains , and for each point , there is an -sphere of infinitesimals around , and there is also a copy of around the whole thing, the closed points of which are limits of rays in .

and relate to each other the same way that and do. If you remove a closed point from , you get , where the sphere of infinitesimals around the removed closed point becomes the sphere of infinities of .

More generally, if is a totally ordered field, let be its real closure. consists of the Cauchy completion of (as a metric space with distances valued in ), and for each point (though not for points that are limits of Cauchy sequences that do not converge in ), an -sphere of infinitesimals around , and an -sphere around the whole thing, where is the locus of in . does not distinguish between fields with the same real closure.

This Galois connection gives us a new notion of what it means for a cone to be radical, which is distinct from the old one and is better, so I will define to be . A cone will be called radical if . Again, it would be nice to be able to characterize radical cones without referring to the Galois connection. And this time, I can do it. Note that since is compact, the proof of Positivstellensatz 3 shows that in our new context, the Positivstellensatz holds for all cones, since even the subcone generated by has a compact positive-set.

Positivstellensatz 4: If is a cone and is a polynomial, then if and only if such that .

However, we can no longer add in lower limits of sequences of polynomials. For example, for all real , but , even though is radical. This happens because, where is the type of positive infinitesimals, for real , but . However, we can add in lower limits of sequences contained in finitely-generated subcones, and this is all we need to add, so this characterizes radical cones.
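A concrete instance (the witnesses are my choice): in ordered affine $1$-space over $\mathbb{R}$, let the cone be the cone of the one-point set $\{p\}$, where $p$ is the type of positive infinitesimals; this cone is radical, being the cone of a set. Then

$$\varepsilon - x \text{ is in the cone for every real } \varepsilon > 0, \qquad \text{but } -x \text{ is not},$$

even though $-x$ is the infimum of the chain $\{\tfrac{1}{k} - x\}$: at $p$ we have $\varepsilon - x > 0$ for every real $\varepsilon > 0$, yet $-x < 0$.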

Positivstellensatz 5: If is a cone, is the union over all finitely-generated subcones of the closure of (again the closure of a subset is defined to be the set of all polynomials in which are infima of chains contained in ).

Proof: Suppose is a subcone generated by a finite set , and is the infimum of a chain . For any , if for each , then for each , and hence . That is, the finite set of inequalities does not hold anywhere in . By the compactness theorem, there are no -types satisfying all those inequalities. Given , , so ; that is, .

Conversely, suppose . Then by the compactness theorem, there are some such that . Then , is strictly positive on , and hence by Positivstellensatz 4, such that . That is, is a chain contained in , a finitely-generated subcone of , whose infimum is .

Even though they are technically not isomorphic, and are closely related, and can often be used interchangeably. Of the two, is of a form that can be more easily generalized to more abstruse situations in algebraic geometry, which may indicate that it is the better thing to talk about, whereas is merely the simpler thing that is easier to think about and just as good in practice in many contexts. In contrast, and are different in important ways. The situation in algebraic geometry provides further reason to pay more attention to than to .

The next thing to look for would be an analog of the spectrum of a ring for a partially ordered commutative ring (I will henceforth abbreviate "partially ordered commutative ring" as "ordered ring" in order to cut down on the profusion of adjectives) in a way that makes use of the order, and gives us when applied to . I will call it the order spectrum of an ordered ring , denoted . Then of course can be defined as . should be, of course, the set of prime cones. But what even is a prime cone?

Definition: A cone is prime if is a totally ordered integral domain.
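Unwinding the definition (with the reading of the quotient given earlier), a cone $C$ is prime exactly when

$$C \cup (-C) = R \qquad \text{and} \qquad C \cap (-C) \text{ is a prime ideal of } R,$$

which is essentially the notion of an ordering (prime cone) familiar from the theory of the real spectrum.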

Definition: is the set of prime cones in , equipped with the topology whose closed sets are the sets of prime cones containing a given cone.

An -type can be seen as a cone, by identifying it with , aka . Under this identification, , as desired. The prime cones in are also the radical cones such that is irreducible. Notice that irreducible subsets of are much smaller than irreducible subsets of ; in particular, none of them contain more than one element of .

There is also a natural notion of maximal cone.

Definition: A cone is maximal if and there are no strictly intermediate cones between and . Equivalently, if is prime and closed in .

Maximal ideals of correspond to elements of . And the cones of elements of are maximal cones in , but unlike in the complex case, these are not all the maximal cones, since there are closed points in outside of . For example, is a maximal cone, and the type of numbers greater than all reals is closed. To characterize the cones of elements of , we need something slightly different.

Definition: A cone is ideally maximal if is a totally ordered field. Equivalently, if is maximal and is a maximal ideal.

Elements of correspond to ideally maximal cones of .

also allows us to define the radical of a cone in an arbitrary partially ordered commutative ring.

Definition: For a cone , is the intersection of all prime cones containing . is radical if .

Conjecture: is the union over all finitely-generated subcones of the closure of (as before, the closure of a subset is defined to be the set of all elements of which are infima of chains contained in ).

Definition: An ordered ringed space is a topological space equipped with a sheaf of ordered rings. An ordered ring is local if it has a unique ideally maximal cone, and a locally ordered ringed space is an ordered ringed space whose stalks are local.

can be equipped with a sheaf of ordered rings , making it a locally ordered ringed space.

Definition: For a prime cone , the localization of at , denoted , is the ring equipped with an ordering that makes it a local ordered ring. This will be the stalk at of . A fraction () is also an element of for any prime cone whose interior ideal does not contain . This is an open neighborhood of (its complement is the set of prime cones containing ). There is a natural map given by , and the total order on extends uniquely to a total order on the fraction field, so for , we can say that at if this is true of their images in . We can then say that near if at every point in some neighborhood of , which defines the ordering on .

Definition: For open , consists of elements of that are locally ratios of elements of . is ordered by if and only if near (equivalently, if at ).

, and this inclusion can be proper. Conjecture: as locally ordered ringed spaces for open . This conjecture says that it makes sense to talk about whether or not a locally ordered ringed space looks locally like an order spectrum near a given point. Thus, if this conjecture is false, it would make the following definition look highly suspect.

Definition: An order scheme is a topological space equipped with a sheaf of ordered commutative rings such that for some open cover of , the restrictions of to the open sets in the cover are all isomorphic to order spectra of ordered commutative rings.

I don't have any uses in mind for order schemes, but then again, I don't know what ordinary schemes are for either, and they are apparently useful; order schemes seem like a natural analog of them.

Edit: It has been pointed out to me that near-ring modules have already been defined, and the objects I describe in this post are just near-ring modules where the near-ring happens to be a ring.

As you all know (those of you who have the background for this post, anyway), an -module is an abelian group (written additively) together with a multiplication map such that for all and , , , , and .

What if we don't want to restrict attention to abelian groups? One could attempt to define a nonabelian module using the same axioms, but without the restriction that the group be abelian. As it is customary to write groups multiplicatively if they are not assumed to be abelian, we will do that, and the map will be written as exponentiation (since exponents are written on the right, I'll follow the definition of right-modules, rather than left-modules). The axioms become: for all and , , , , and .

What has changed? Absolutely nothing, as it turns out. The first axiom again forces the group to be abelian, because together with the other axioms it gives $ghgh = (gh)^2 = g^2h^2 = gghh$, and hence $hg = gh$. We'll have to get rid of that axiom. Our new definition, which it seems to me captures the essence of a module except for abelianness:

A nonabelian -module is a group (written multiplicatively) together with a scalar exponentiation map such that for all and , , , and .
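Spelled out (this is my reading of the axioms), a nonabelian $R$-module is a group $G$ together with a map $G \times R \to G$, written $(g, r) \mapsto g^r$, such that for all $g \in G$ and $r, s \in R$:

$$g^{r+s} = g^r g^s, \qquad g^{rs} = (g^r)^s, \qquad g^1 = g.$$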

These imply that $g^0 = 1$, that $1^r = 1$, and that $g^{-1}$ is the inverse of $g$, because $g^0 = g^{0+0} = g^0 g^0$, $1^r = (g^0)^r = g^{0r} = g^0 = 1$, and $g\, g^{-1} = g^{1 + (-1)} = g^0 = 1$.

Just like a -module is just an abelian group, a nonabelian -module is just a group. Just like a -module is an abelian group whose exponent divides , a nonabelian -module is a group whose exponent divides .

Perhaps a bit more revealing is what nonabelian modules over free rings look like, since then the generators are completely generic ring elements. Where is the generating set, a -module is an abelian group together with endomorphisms , which tells us that modules are about endomorphisms of an abelian group indexed by the elements of a ring. Nonabelian modules are certainly not about endomorphisms. After all, in a nonabelian group, the map is not an endomorphism. I will call the things that nonabelian modules are about "exponentiation-like families of operations", and give four equivalent definitions, in roughly increasing order of concreteness and decreasing order of elegance. Definition 2 uses basic model theory, so skip it if that scares you. Definition 3 is the "for dummies" version of definition 2.

Definition 0: Let be a group, and let be a family of functions from to (not necessarily endomorphisms). If can be made into a nonabelian -module such that for and , then is called an exponentiation-like family of operations on . If so, the nonabelian -module structure on with that property is unique, so define to be its value according to that structure, for and .

Definition 1: is an exponentiation-like family of operations on if for all , the smallest subgroup containing which is closed under actions by elements of (which I will call ) is abelian, and the elements of restrict to endomorphisms of it. Using the universal property of , this induces a homomorphism . Let denote the action of on under that map, for . By , I mean the endomorphism ring of with composition running in the opposite direction (i.e., the multiplication operation given by ). This is because of the convention that nonabelian modules are written as nonabelian right-modules by default.

Definition 2: Consider the language , where is the language of rings, and each element of is used as a constant symbol. Closed terms in act as functions from to , with the action of written as , defined inductively as: , , for , , , and for closed -terms and . is called an exponentiation-like family of operations on if whenever , where is the theory of rings. If is an exponentiation-like family of operations on and is a noncommutative polynomial with variables in , then for , is defined to be where is any term representing .

Definition 3: Pick a total order on the free monoid on (e.g. by ordering and then using the lexicographic order). The order you use won't matter. Given and in the free monoid on , let . Where is a noncommutative polynomial, for some and decreasing sequence of noncommutative monomials (elements of the free monoid on ). Let . is called an exponentiation-like family of operations on if for every and , and .

These four definitions of exponentiation-like family are equivalent, and for exponentiation-like families, their definitions of exponentiation by a noncommutative polynomial are equivalent.

Facts: is an exponentiation-like family of operations on . If is an exponentiation-like family of operations on and , then so is . If is abelian, then is exponentiation-like. Given a nonabelian -module structure on , the actions of the elements of on form an exponentiation-like family. In particular, if is an exponentiation-like family of operations on , then so is , with the actions being defined as above.

[The following paragraph has been edited since this comment.]

For an abelian group , the endomorphisms of form a ring , and an -module structure on is simply a homomorphism . Can we say a similar thing about exponentiation-like families of operations of ? Let be the set of all functions (as sets). Given , let multiplication be given by composition: , addition be given by , negation be given by , and and be given by and . This makes into a near-ring. A nonabelian -module structure on is a homomorphism , and a set of operations on is an exponentiation-like family of operations on if and only if it is contained in a ring which is contained in .

What are some interesting examples of nonabelian modules that are not abelian? (That might sound redundant, but "nonabelian module" means that the requirement of abelianness has been removed, not that a requirement of nonabelianness has been imposed. Perhaps I should come up with better terminology. To make matters worse, since the requirement that got removed is actually stronger than abelianness, there are nonabelian modules that are abelian and not modules. For instance, consider the nonabelian -module whose underlying set is the Klein four group (generated by two elements ) such that , , and .)

In particular, what do free nonabelian modules look like? The free nonabelian -modules are, of course, free groups. The free nonabelian -modules have been studied in combinatorial group theory; they're called Burnside groups. (Fun but tangential fact: not all Burnside groups are finite (the Burnside problem), but despite this, the category of finite nonabelian -modules has free objects on any finite generating set, called Restricted Burnside groups.)

The free nonabelian -modules are monstrosities. They can be constructed in the usual way of constructing free objects in a variety of algebraic structures, but that construction seems not to be very enlightening about their structure. So I'll give a somewhat more direct construction of the free nonabelian -module on generators, which may also not be that enlightening, and which is only suspected to be correct. Define an increasing sequence of groups , and functions , as follows: is the free group on generators. Given , and given a subgroup , let the top-degree portion of be for the largest such that this is nontrivial. Let be the free product of the top-degree portions of maximal abelian subgroups of . Let be the free product of with modulo commutativity of the maximal abelian subgroups of with the images of their top-degree portions in . Given a maximal abelian subgroup , let be the homomorphism extending which sends the top-degree portion identically onto its image in . Since every non-identity element of is in a unique maximal abelian subgroup, this defines . with is the free nonabelian -module on generators. If is a set, the free nonabelian -modules can be constructed similarly, with copies of at each step. Are these constructions even correct? Are there nicer ones?

A nonabelian -module would be a group with a formal square root operation. As an example, any group of odd exponent can be made into a -module in a canonical way by letting . More generally, any group of finite exponent can be made into a -module in a similar fashion. Are there any more nice examples of nonabelian modules over localizations of ?
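Concretely, for a group of odd exponent $n$, the canonical choice is presumably

$$g^{1/2} := g^{(n+1)/2}, \qquad \text{so that} \qquad \left(g^{1/2}\right)^2 = g^{n+1} = g.$$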

In particular, a nonabelian $\mathbb{Q}$-module would be a group with formal $n$th root operations for all $n$. What are some nonabelian examples of these? Note that nonabelian $\mathbb{Q}$-modules cannot have any torsion, for suppose $g^n = 1$ for some $n \geq 1$. Then $g = (g^n)^{1/n} = 1^{1/n} = 1$. More generally, nonabelian modules cannot have any $n$-torsion (meaning non-identity elements $g$ with $g^n = 1$) for any $n$ which is invertible in the scalar ring.

The free nonabelian -modules can be constructed similarly to the construction of free nonabelian -modules above, except that when constructing from and , we also mod out by elements of being equal to the th powers of their images in . Using the fact that , this lets us modify the construction of free nonabelian -modules to give us a construction of free nonabelian -modules. Again, is there a nicer way to do it?

It is also interesting to consider topological nonabelian modules over topological rings; that is, nonabelian modules endowed with a topology such that the group operation and scalar exponentiation are continuous. A module over a topological ring has a canonical finest topology on it, and the same remains true for nonabelian modules. For finite-dimensional real vector spaces, this is the only topology. Does the same remain true for finitely-generated nonabelian -modules? Finite-dimensional real vector spaces are complete, and topological nonabelian modules are, in particular, topological groups, and can thus be made into uniform spaces, so the notion of completeness still makes sense, but I think some finitely-generated nonabelian -modules are not complete.

A topological nonabelian -module is a sort of Lie group-like object. One might try constructing a Lie algebra for a complete nonabelian -module by letting the underlying set be , and defining and . One might try putting a differential structure on such that this is the Lie algebra of left-invariant derivations. Does this or something like it work?
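The natural candidates for the vector space operations, assuming this is what is intended, are the classical Lie product and commutator formulas, with scalar exponentiation standing in for the exponential map:

$$X + Y := \lim_{k \to \infty} \left( X^{1/k}\, Y^{1/k} \right)^{k}, \qquad [X, Y] := \lim_{k \to \infty} \left( X^{1/k}\, Y^{1/k}\, X^{-1/k}\, Y^{-1/k} \right)^{k^2}.$$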

A Lie group is a nonabelian $\mathbb{R}$-module if and only if its exponential map is a bijection between it and its Lie algebra. In this case, scalar exponentiation is closely related to the exponential map by a compelling formula: $\exp(X)^r = \exp(rX)$. As an example, the continuous Heisenberg group is a nonabelian $\mathbb{R}$-module which is not abelian. This observation actually suggests a nice class of examples of nonabelian modules without a topology: given a commutative ring $R$, the Heisenberg group over $R$ is a nonabelian $R$-module.

The Heisenberg group of dimension over a commutative ring has underlying set , with the group operation given by . The continuous Heisenberg group means the Heisenberg group over . Scalar exponentiation on a Heisenberg group is just given by scalar multiplication: .
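As a quick sanity check of the nonabelian-module axioms for the Heisenberg group, here is a small Python sketch over $\mathbb{Q}$. It uses the "upper triangular" convention $(a,b,c)\cdot(a',b',c') = (a+a',\, b+b',\, c+c'+ab')$ and the exponentiation formula $(a,b,c)^r = (ra,\, rb,\, rc + \binom{r}{2}ab)$, which is what that convention forces; both the convention and the formula are my choices for the sketch, not necessarily the ones intended above.

```python
from fractions import Fraction as Q
from random import randint

# Sketch only: the group convention and exponentiation formula below are
# choices made for this illustration.
# Heisenberg group over Q, elements (a, b, c), "upper triangular" convention:
#   (a, b, c) * (x, y, z) = (a + x, b + y, c + z + a*y)
def mul(g, h):
    a, b, c = g
    x, y, z = h
    return (a + x, b + y, c + z + a * y)

# Scalar exponentiation: (a, b, c)^r = (r*a, r*b, r*c + C(r,2)*a*b),
# where C(r,2) = r*(r-1)/2.  For integer r this agrees with repeated multiplication.
def power(g, r):
    a, b, c = g
    binom = r * (r - 1) / 2
    return (r * a, r * b, r * c + binom * a * b)

def rand_elem():
    return tuple(Q(randint(-5, 5), randint(1, 5)) for _ in range(3))

def rand_scalar():
    return Q(randint(-5, 5), randint(1, 5))

# Check the three nonabelian-module axioms on random rational data:
#   g^(r+s) = g^r g^s,   g^(rs) = (g^r)^s,   g^1 = g
for _ in range(1000):
    g = rand_elem()
    r, s = rand_scalar(), rand_scalar()
    assert power(g, r + s) == mul(power(g, r), power(g, s))
    assert power(g, r * s) == power(power(g, r), s)
    assert power(g, Q(1)) == g

# The group really is nonabelian:
g, h = (Q(1), Q(0), Q(0)), (Q(0), Q(1), Q(0))
assert mul(g, h) != mul(h, g)
print("axioms hold on random samples; group is nonabelian")
```

With the symmetric convention, where the third coordinate of the product is $c + c' + \tfrac{1}{2}(a b' - a' b)$, the same checks go through with exponentiation being plain coordinate scaling $(a,b,c)^r = (ra, rb, rc)$, which is presumably the sense in which scalar exponentiation is "just given by scalar multiplication" above.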

The proportion of the seats in the House of Commons won by each party looked like this:

You may notice that these two graphs look pretty different. The Conservative Party won a majority of the seats with only 37% of the vote. The Scottish National Party, due to its geographic concentration in Scotland, got a share of the seats in Parliament nearly double its share of the popular vote. In contrast, the Liberal Democrats got only 8 out of the 650 seats despite winning 8% of the popular vote, the Green Party got only 1 seat with 4% of the vote, and, most egregiously, the UK Independence Party got only 1 seat with 12% of the vote. While I have no objections to UKIP getting cheated out of political power, this does not seem like a fair and democratic outcome, and yet this sort of thing is an inevitable consequence of first-past-the-post elections for single-member legislative districts.

This sort of thing happens even with two-party systems like the one in the US, where not only do third parties get completely shut out, but the balance of power between the two major parties is also skewed: in 2012, the Republicans won a majority of seats in the House of Representatives despite the Democrats winning more total votes. This has been widely attributed to gerrymandering, but the fact that Democratic votes are more geographically concentrated than Republican votes contributes even more.

Proportional representation is an easy solution to this problem. One of the criticisms of proportional representation is that it gives up having representatives associated with a district, which ties them closer to the voters. There are variants of proportional representation that address this, but here I want to propose another one. But first, let's talk about random sampling.

**Randomized Vote-Counting**

If instead of counting every vote in an election, we randomly sampled some small fraction of the ballots and counted those, we would get the same result almost every time, with discrepancies being statistically possible only when the vote is very close (which seems fine to me; getting 50.1% of the vote does not seem to me to confer much more legitimacy than getting 49.9% of the vote does). While this might make elections slightly cheaper to administer, it would be a massive under-use of the power of randomization.
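To get a feel for the numbers, here is a rough calculation (a sketch; the normal approximation and the specific margins and sample sizes are my choices) of the chance that a random sample of ballots names the wrong winner in a two-way race:

```python
import math

def flip_probability(p, n):
    """Approximate probability that a simple random sample of n ballots
    elects the loser, when the true winning share is p > 0.5.
    Uses a normal approximation to the binomial; ignores ties and
    finite-population corrections (illustrative only)."""
    sigma = math.sqrt(p * (1 - p) / n)
    z = (0.5 - p) / sigma
    return 0.5 * math.erfc(-z / math.sqrt(2))  # P(sample share < 0.5)

for p in (0.51, 0.52, 0.55):
    for n in (1_000, 10_000, 100_000):
        print(f"true share {p:.0%}, sample {n:>7}: flip prob ~ {flip_probability(p, n):.2e}")
```

The flip probability is only non-negligible when the race is within a couple of percentage points, which is exactly the case I said I don't mind getting wrong.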

In California (and also, I am under the impression, in most other U.S. states, and many other countries), elections tend to include a massive profusion of state and local officials and ballot measures. Each of these individually requires a significant amount of research, so it is prohibitively time-consuming to adequately research every ballot question. When I vote, I usually feel like I did not have time to research most of the ballot questions enough to make an informed vote, and yet I suspect I still spend more time researching them than average.

My proposed solution to this is for each voter to be randomly assigned one ballot question that they get to vote on. When you only have one issue you can vote on, it is much easier to thoroughly research it. Thus this system should result in more informed voters.

People would likely object that this system is undemocratic, since not everyone gets to vote on each ballot question. But in fact, this system would probably end up being more democratic than the current one, since it would make voting easier and thus probably increase turnout, making the sample of people voting on each issue more representative of the electorate as a whole, even while comprising a smaller fraction of it. Some people might not vote because of being assigned to an issue that they are not interested in, but most such people probably wouldn't have voted on that issue anyway; I'd bet there would be significantly more people who would vote on an issue if it was the only one they could vote on, but wouldn't vote on it if they could vote on all of them. Furthermore, since people would have more time to research their ballot question, they would be less reliant on information shoved in their faces in the form of advertising, so this would decrease the influence of special interest groups in politics, arguably also making the system more democratic.

**What about just one?**

So far I've been suggesting that enough votes should be sampled that the outcome of each election is virtually guaranteed to be the same as it would be if all the votes were counted, with all the exceptions being when the vote is very close. But what happens when we don't sample enough votes for that? What if we take it all the way to the extreme and only sample one vote? This would not be appropriate for ballot measures or executive officials, but for electing the members of a legislature with a large number of single-member districts, this actually has some pretty nice properties.

Since each ballot is equally likely to be the one that gets counted, the probability of each candidate getting elected is proportional to the number of votes they get. Averaged over a large number of districts, this means that the number of legislators elected from each political party will be approximately proportional to the popular support for that party. Thus, this simulates proportional representation with single-member electoral districts.
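A quick simulation of the "count one random ballot per district" rule (a sketch; the two-party vote shares, district count, and trial count are made up) shows seat shares tracking vote shares on average:

```python
import random

random.seed(0)
NUM_DISTRICTS = 650
TRIALS = 200

# Made-up two-party vote shares per district.  Assuming equal-sized districts,
# the national vote share is just the mean of the district shares.
district_share_A = [random.uniform(0.2, 0.8) for _ in range(NUM_DISTRICTS)]
national_share_A = sum(district_share_A) / NUM_DISTRICTS

seats_A = []
for _ in range(TRIALS):
    # In each district, the winner is whoever is named on one uniformly random
    # ballot, so party A wins that seat with probability equal to its share there.
    seats = sum(random.random() < share for share in district_share_A)
    seats_A.append(seats)

mean_seats_A = sum(seats_A) / TRIALS
print(f"party A national vote share: {national_share_A:.1%}")
print(f"party A mean seat share:     {mean_seats_A / NUM_DISTRICTS:.1%}")
```

The two printed numbers come out close, no matter how the shares are distributed across districts.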

This is very similar to a sortition, in which legislatures are a random sample of the population. The primary difference is that in a sortition, many of the people randomly selected to be legislators may have little interest in or ability for the job. However, in this system, someone would still have to demonstrate interest by running for election in order to be selected. To further discourage frivolous campaigns, costs could be imposed on the candidates, for instance by requiring them to gather signatures to qualify for the ballot, to ensure that no one gets elected who isn't serious about their intent to serve in the legislature.

A small further advantage of this system over a sortition is that it ensures that the legislators are evenly distributed geographically, so the variance of the number of seats won by each political coalition would be slightly smaller than it would be under completely random sampling.

Another advantage of my randomized system over first-past-the-post and proportional representation is that it avoids electoral paradoxes that plague deterministic systems. It avoids the Alabama and population paradoxes, which proportional representation is vulnerable to. There is also no incentive for tactical voting, since if your ballot is the one selected, it alone determines who wins your district's seat, so you can do no better than voting for your favorite candidate. And there is no incentive for gerrymandering, since the expected number of seats won by a party will be proportional to its vote count no matter how the districts are drawn, provided they all have equal numbers of voters.

A possible objection to this system is that candidates can get elected with support from only a small fraction of their constituents. But this does not seem that bad to me. Even under first-past-the-post, it is the norm for a large fraction of the constituents to vote against the winning candidate. Even in safe seats, the fraction of voters who vote against the winner is typically fairly significant (e.g., a third), and these voters never get the chance to be represented by their preferred candidate. Under the randomized system, any significant local coalition would get a chance of representation sometimes, and dominant coalitions would be represented most of the time. And if a party is dominant in a district, then even if the representative for that district ends up not being aligned with that party, there will likely be nearby districts that are represented by that party. For example, the San Francisco Bay Area is so dominated by Democrats that all of its members of the House of Representatives and the state legislature are Democrats, leaving the Bay Area Republicans unrepresented. Using the randomized system, a few Republicans would get elected in the Bay Area, so the constituency of Bay Area Republicans would get representation, and the Democrats in the districts that end up getting represented by Republicans would still have plenty of Democratic legislators in neighboring districts to represent their interests.

One significant disadvantage is that it would be difficult for legislators to accumulate much experience in the legislature, since they would have a significant chance of losing each re-election even if they have broad support in their district. Primarily for this reason, I think this randomized system is inferior to single transferable vote and party list proportional representation. But despite this, I still think it is not too terrible, and would be a significant improvement over the current system. Sometimes you can make the system more democratic by counting fewer votes.

]]>