
/b/ - Boson Technology

Also known as Boson /g/


File:genius_comes_17.jpg (768.16 KB,1064x1500)


Today I begin a programming project to construct real numbers and various operations on them. I don't mean the wimpy limited precision floating point numbers that you've probably used if you've ever written a program in your life. Nor do I mean arbitrary precision numbers which can have as many digits as you like (until your computer runs out of memory) but still have irrecoverable rounding error because the digits end at some point. I mean exact representations of numbers that can be queried to any precision (which in practice may be limited by time and memory constraints).

This has been done many times before and probably much better than I will manage, so I don't expect to create anything really novel or useful. This is primarily for my own edification, to learn more about numbers and the computable/constructive reals. In this thread I will blog about what I'm doing and discuss the philosophy of numbers with anyone who's interested.


File:EhsOdjsUwAAMJdA.jpg (166.68 KB,1307x1085)

Wish you the best of luck OP. I'll be keen on reading updates to this thread when they come. Maybe I'll learn a thing or two about programming during it too.


File:02-fig1.png (16.96 KB,355x164)

First I need to decide on a representation. There are many ways of representing real numbers. Some well-known representations are Dedekind cuts and Cauchy sequences. Dedekind cuts represent a real number by saying which rational numbers (fractions) are smaller than it and which are equal or greater. Cauchy sequences represent a real number with a sequence of rational numbers that converge to it. You can look up the exact definitions on Wikipedia, which include the all-important details of how to formulate the definitions without referencing the real numbers and creating a circular definition. Another representation that everyone knows is to represent a real number as an infinite string of decimal digits.

Dedekind cuts are a very simple and natural-feeling representation, but I haven't yet been able to figure out how to make addition work with them in a way I can actually implement on a computer. Maybe I could make it work if I read a bit more about what other people have done.

Cauchy sequences in their full generality feel a bit too complicated for my taste. It's possible the complexity at the beginning makes the implementation of stuff later on easier, though. But I'm going to try something else first.

Infinite decimals are nice, and I want whatever representation I use to be convertible into decimal form (rounded at some arbitrary point). You couldn't store an infinite decimal in memory, of course, but you could imagine a function which takes the position of a digit as input, computes the digit, and outputs the digit. A big problem with this is the 0.999... = 1.000... issue. If you are representing a number very close to 1, computing whether the digit in the ones place is 0 or 1 could take a very long time, since you have to compute the number to a high enough precision to tell whether it is larger or smaller than 1. If the number happens to be exactly one, it might even cause our program to get into an infinite loop.
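To make the issue concrete, here's a small Python sketch (an illustration only, not part of the project; the names are made up): it extracts the ones digit from an approximation oracle by shrinking the interval until it no longer straddles an integer boundary, and counts how long that takes for a number close to 1.

```python
from fractions import Fraction

def ones_digit(oracle):
    """oracle(n) returns a rational within 1/n of the number.
    Returns (digit, number of doubling steps needed)."""
    n, steps = 1, 0
    while True:
        n, steps = n * 2, steps + 1
        q = oracle(n)
        lo, hi = q - Fraction(1, n), q + Fraction(1, n)
        if lo // 1 == hi // 1:  # whole interval inside one integer band
            return int(lo // 1), steps

x = Fraction(999999, 1000000)  # very close to, but less than, 1
digit, steps = ones_digit(lambda n: x)
print(digit, steps)  # digit 0, but only once the interval shrinks below 1/1000000
```

If the number were exactly 1, the interval would straddle the boundary forever and the loop would never terminate, which is the infinite-loop hazard described above.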

What I'm going to try first is something more general than the decimal representation (and with the problem I've described avoided, I hope) and more specific than the Cauchy representation.


File:coq.png (12.6 KB,256x256)

Thanks. I'm going to be using an interactive theorem prover called Coq, which allows you to write programs and prove theorems about those programs. It's important to verify that these things actually satisfy the properties of real numbers.

I'm hoping to get to the point where I can construct complex numbers and prove the fundamental theorem of algebra. That's the theorem that every polynomial of degree n [for example, x^3 - 7x + 6 has degree 3 because the highest power is x^3] can be factored into n linear factors [in this case, (x-1)*(x-2)*(x+3)], each of which indicates a solution to the equation when the polynomial is set equal to zero [x^3 - 7x + 6 = 0 has solutions x=1, x=2, and x=-3]. It's a very useful theorem, but the proof isn't trivial. Ideally I'll not only have a proof but a constructive proof, that is, a verified program that finds all the solutions.

But how far I get ultimately depends on how fast this goes and when I get tired of it.


>It's a very useful theorem, but the proof isn't trivial.
I believe it, but I'm a bit confused how one would even go about constructing a procedure to find all solutions to a polynomial, since there's stuff like >4th degree polynomials having no general formula for their solutions.


File:01_Irrational_300RV.jpg (54.78 KB,600x394)

>What I'm going to try first is something more general than the decimal representation (and with the problem I've describe avoided, I hope) and more specific than the Cauchy representation.

My initial attempt will go like this: A real number will be represented by a function that outputs fractional approximations to the number given the denominator of the fraction. Specifically, given an integer n, it will output an integer m such that m/n is less than 1/n away from the number. For example, if the number is pi, and you called the function with the integer 7, both 21 and 22 would be acceptable outputs. If you called it with 100, it would output either 314 or 315.

Given this representation, it would be easy to extract the decimal digits of a number; simply call the function with the desired power of 10 as an argument. But you would never be sure the last digit was correct, since it could be rounded up or down. This is necessary to avoid the 0.999... = 1.000... related hangup I mentioned. If the number is very close to 1, the function doesn't have to compute it to a very high precision; it could simply give you the approximation 1000/1000 for any number between 0.999 and 1.001, because it wouldn't have to know which side of 1 the number was on.
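As a throwaway illustration (plain Python, not the Coq development; the float-based helper is just for demonstration), the representation looks like this:

```python
from fractions import Fraction
from math import floor, pi

# A real number as a function: given denominator n, return an integer m
# with |m/n - x| < 1/n.  Rounding a known high-precision value to the
# nearest numerator always satisfies this, since then |m/n - x| <= 1/(2n).
def real_from_float(x):
    return lambda n: floor(x * n + 0.5)

approx_pi = real_from_float(pi)
print(approx_pi(7))    # 22 -> 22/7 is within 1/7 of pi
print(approx_pi(100))  # 314 -> 314/100 is within 1/100 of pi
```

Note both 21 and 22 would have been acceptable at denominator 7; the function just has to return one of them.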


There's no formula for the solutions using only the tools of addition, subtraction, multiplication, division, integer powers, and nth roots. But that doesn't mean you can't find the solutions numerically to whatever precision you desire.


File:fta-sketch.png (28.96 KB,953x967)

One proof goes like this: Draw two complex planes; in one we'll plot inputs to the polynomial, and in the other we'll plot the outputs. If you move your point in the input plane around a sufficiently big circle, it can be shown that the point in the output plane goes around the origin n times, where n is the degree of the polynomial. The number of times a loop goes around a point is called its winding number.

If you split the loop in the input plane into two smaller loops, you get two smaller loops in the output plane. If the dividing line between the two output subloops passes through the origin, that means you've found an input that gives an output of zero. Then you can use polynomial long division to find an (n-1)-degree equation with the remaining solutions. But usually there will not be a solution precisely on the dividing line. In that case, the winding numbers of both subloops should add up to the original winding number. If you further divide the loops into smaller and smaller pieces, focusing on the subloops that have nonzero winding number, you find smaller and smaller input loops whose corresponding output loops make smaller and smaller loops around the origin. As the loops become smaller, they approach points in the input plane which map to the origin in the output plane. These are your solutions.

Of course, making this all rigorous, especially the idea of winding numbers, is a bit of work.
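For intuition, the winding number in the sketch above can be checked numerically. This Python snippet (an illustration with made-up helper names, not a rigorous computation) tracks the argument of the output as the input walks a circle:

```python
import cmath

# Walk a circle of the given radius in the input plane, apply f, and
# count how many times the output path winds around the origin by
# accumulating the (small) change in argument between samples.
def winding_number(f, radius, steps=10000):
    total = 0.0
    prev = f(radius)  # starting point at angle 0
    for k in range(1, steps + 1):
        z = f(radius * cmath.exp(2j * cmath.pi * k / steps))
        total += cmath.phase(z / prev)  # angle increment, in (-pi, pi]
        prev = z
    return round(total / (2 * cmath.pi))

p = lambda x: x**3 - 7*x + 6  # roots 1, 2, -3
print(winding_number(p, 10.0))  # 3: the circle encloses all three roots
```

A circle of radius 1.5 encloses only the root at 1, and the same computation then gives winding number 1, matching the subdivision argument above.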


File:coqide-q.png (58.33 KB,1920x1054)

Finally got Coq updated to the latest version (8.13.2).

The Coq standard library comes with implementations of arbitrary-size integers and rationals, which I'll be using instead of re-implementing since it's the reals I'm interested in learning about. It also comes with an implementation of constructive reals (marked experimental), but using that would defeat the point of this exercise. Possibly of interest is the standard library's abstract interface for constructive reals; at some point I'd like to see if I can get my implementation to implement the interface. But I'll ignore it for now since I don't want to influence my first design attempt too much.


File:valid-invalid.png (25.33 KB,1330x622)

To implement >>6899 we could consider defining the reals as follows:

Definition R := Z -> Z.

"Z" is the type of arbitrary-size integers (imported indirectly via QArith), so this is saying the type "R" is the type of functions from integers to integers.

But not all functions that take integers as input and produce integers as output will be valid for our purpose. As illustrated in the pic, a function that maps

1 => 1 [0/1 < x < 2/1]
2 => 2 [1/2 < x < 3/2]
3 => 2 [1/3 < x < 3/3]
4 => 3 [2/4 < x < 4/4]

could be a valid function since all these ranges overlap. But a function that maps

1 => 1 [0/1 < x < 2/1]
2 => 2 [1/2 < x < 3/2]
3 => 2 [1/3 < x < 3/3]
4 => 6 [5/4 < x < 7/4]

would not be valid, because it is impossible for our number to be both greater than 5/4 and less than 3/3.
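A finite version of this consistency check is easy to state in Python (an illustrative sketch; the real condition quantifies over all pairs of inputs):

```python
from fractions import Fraction

# An output m at denominator n pins the number into the open interval
# ((m-1)/n, (m+1)/n); the table is consistent when every lower bound
# stays strictly below every upper bound.
def consistent(table):
    lowers = [Fraction(m - 1, n) for n, m in table.items()]
    uppers = [Fraction(m + 1, n) for n, m in table.items()]
    return max(lowers) < min(uppers)

good = {1: 1, 2: 2, 3: 2, 4: 3}
bad  = {1: 1, 2: 2, 3: 2, 4: 6}
print(consistent(good))  # True: all four intervals overlap
print(consistent(bad))   # False: the lower bound 5/4 exceeds the upper bound 3/3
```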

Coq has the capacity to define subset types. A member of a subset type is a member of the original type along with a proof that it satisfies some property. The proof part is just there for type checking purposes, and gets removed if you ask to turn a Coq function into an executable program.

We need all our lower bounds to be lower than our upper bounds, so we can define our type as follows:

Require Import QArith.

Definition Rfun := positive -> Z.

Definition is_valid_Rfun f :=
forall x y, (f x - 1) # x < (f y + 1) # y.

Definition R := {f | is_valid_Rfun f}.

I've also changed the input type from "Z" to "positive". This simplifies the definition of the validity condition. If x was negative, [f(x)-1]/x would become the upper bound and [f(x)+1]/x the lower bound, and I'd have to add logic to handle that. The "positive" type is used by Coq as one of the arguments for the constructor "#" of the rational type "Q", so it's also convenient for this reason. And I don't have to worry about what to do if the function is called with value 0.


To find the sum of two numbers at precision 1/n, we'll need to ask for the addends (numbers to be added) at a higher precision. You might think we need precision 1/(2n) because the maximum possible error on the sum is 1/(2n) + 1/(2n) = 1/n. But suppose we want a sum with precision 1/10 and so request the addends with precision 1/20, and the rational approximations we get back are 6/20 and 1/20. When we add them together, we get 7/20, which has to be rounded to either 6/20 = 3/10 or 8/20 = 4/10, introducing another 1/20 of error.

Instead, we'll need to request the addends at precision 1/(4n). Each addend has an error less than 1/(4n), and then we introduce an additional error of at most 2/(4n) when we round the numerator to the nearest multiple of 4 so that the fraction can be reduced from denominator 4n to n.

So the addition of two of our functions can be defined as

Definition Rfun_plus (f g : Rfun) : Rfun :=
fun x => ((f (4 * x)%positive + g (4 * x)%positive + 2) / 4)%Z.

Here, the ": Rfun" parts are type annotations. Coq can infer types in many scenarios, so I'll only add them when they're not obvious, either to the reader or to Coq. The "%positive" and "%Z" are scope keys. They tell Coq that symbols like "+", "-", "*", and "/" inside the "%Z" are operations on numbers of the "Z" type, that is, integers. And inside the "%positive", they're operations on numbers of the "positive" type, positive integers.
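Here's the same 1/(4n) trick in plain Python (hypothetical helper names, with Fractions standing in for the Coq rationals), to check the arithmetic:

```python
from fractions import Fraction

# Each oracle, given denominator k, returns an integer m with |m/k - x| < 1/k.
def add(f, g):
    # query both addends at denominator 4n, then (+2, //4) rounds the
    # summed numerator to the nearest multiple of 4, reducing 4n -> n
    return lambda n: (f(4 * n) + g(4 * n) + 2) // 4

third = lambda k: round(Fraction(k, 3))  # oracle for 1/3
sixth = lambda k: round(Fraction(k, 6))  # oracle for 1/6
half = add(third, sixth)                 # should approximate 1/2

print(half(10))  # 5, and 5/10 = 1/2 exactly
print(abs(Fraction(half(7), 7) - Fraction(1, 2)) < Fraction(1, 7))  # True
```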

We're not done with addition yet, since we've only defined an operation on the type "Rfun". To make this into an operation on "R" (functions with proof of validity as defined in >>6913), we'll need to write a function that takes proofs the input functions are valid and returns a proof the output function is valid. In other words, a proof of the implication "if the input is valid, then the output is valid." This is just one example of how a logical idea (implication) can be understood in terms of a computational idea (functions). There are many others, and this relationship between logic and programming, called the Curry–Howard correspondence, is fundamental to how Coq works.



File:lessthanorequal.png (21.45 KB,1365x486)

>In other words, a proof of the implication "if the input is valid, then the output is valid."

Before I start proving this, I want to define a few other operations on the raw "Rfun" type. To prove that one of our functions is valid, we said we needed to prove that all the lower bounds were lower than all the upper bounds (>>6913). We can generalize this idea to define a few other things. If we want to prove that one number is less than or equal to a second number, we need to prove that the lower bounds of the first number are all lower than the upper bounds of the second number. To prove that two numbers are equal, we need to prove that the lower bounds of both numbers are lower than the upper bounds of both numbers.

We can define "less than or equal" like this:

Definition Rfun_le (f g : Rfun) : Prop :=
forall x y, (f x - 1) # x < (g y + 1) # y.

Given this definition, we can define two numbers to be equal if each is less than or equal to the other. And we can restate our definition of validity in terms of "less than or equal"; a function makes a valid number if it is less than or equal to itself.

Definition Rfun_eq (f g : Rfun) : Prop :=
Rfun_le f g /\ Rfun_le g f.

Definition is_valid_Rfun (f : Rfun) : Prop :=
Rfun_le f f.

I've explicitly indicated the type of Rfun_le, Rfun_eq, and is_valid_Rfun to emphasize the difference between booleans (type "bool" in Coq) and propositions (type "Prop"). For a program to tell the difference between two numbers that are equal and two numbers that are merely very, very close, it would have to compute both numbers to infinite precision. So we will not attempt to write a function that takes in two numbers and delivers a true or false answer -- type "bool" -- telling whether they are exactly equal. But we can in some cases show that two numbers are exactly equal by a mathematical proof. What we have done is define the type of proof that will be required to demonstrate equality. If a proof of this type exists for two given numbers, the proposition that they are equal is true. If it can be shown that no proof of this type can exist, the proposition is false. In Coq and other type-theory-based systems, we represent propositions as types whose members are proofs. "Prop" is a higher-order type whose members are types of proofs.


I think I'll add the type annotation to Rfun and R too. These also define types, so their type is a higher-order type of types. But instead of being in "Prop", they are in a different type of types, called "Set", which obeys slightly different rules. There are several reasons for the design decision in Coq to keep "Set" and "Prop" separate, but one of them is so that proof objects, whose types are in "Prop", can be removed from Coq functions when they are turned into programs executable outside Coq.

Definition Rfun : Set :=
positive -> Z.

Definition R : Set :=
{f | is_valid_Rfun f}.

Also, let's stick all the code so far in a public place:


I should give due credit to some inspirations. The method of representing real numbers described in >>6899 is similar to the "Eudoxus reals" of Schanuel in that we are considering functions from integers to integers, but the tests of validity and equality are different. And the definition of "less than or equal" >>6917 and some of the definitions that follow from it are basically the same as Conway's definition for surreal numbers.


File:uu.png (1.15 MB,1280x720)

Since "less than or equal" doesn't output a real number, we don't have to prove anything to define it on "R" instead of on "Rfun". We can just use "proj1_sig" to extract a member of "Rfun" from a member of the subtype "R".

Definition Rle (x y : R) : Prop :=
Rfun_le (proj1_sig x) (proj1_sig y).

Since the definition of equality doesn't depend directly on the implementation, I decided to get rid of the definition on "Rfun" and just define it on "R" directly.

Definition Req (x y : R) : Prop :=
Rle x y /\ Rle y x.

We can also define "greater than or equal" as "less than or equal" flipped around:

Definition Rge (x y : R) : Prop :=
Rle y x.

You might think we would define "less than" as the negation of "greater than or equal". But instead we will define it like this:

Definition Rfun_lt (f g : Rfun) : Prop :=
exists x y, (f x + 1) # x <= (g y - 1) # y.

Definition Rlt (x y : R) : Prop :=
Rfun_lt (proj1_sig x) (proj1_sig y).

The reason to choose this definition has to do with the nature of existence proofs. To prove the negation of A >= B, we would need to assume that every lower bound of B is lower than every upper bound of A, and derive a contradiction from that assumption. From this information, we could reasonably conclude that there is some lower bound of B which is greater than or equal to some upper bound of A. But it wouldn't tell us how to find them.

Mathematics has many examples of things that can be proven to "exist" even though nobody can show you one of them. One famous example is the Banach–Tarski paradox, which says that there's a way to decompose a ball (the interior of a sphere) into five pieces which can be moved and rotated so as to reassemble them into two identical copies of the original ball.

Existence proofs in Coq are not like this, at least not by default. To prove that something exists with a given property, you have to write a program that will actually find it, along with a proof of the property for that object. You can add axioms to Coq to let you reason more like a typical mathematician, but then if you try to run your proof as a program, it can't finish because it needs a proof of the axiom, and you don't have one.

By defining "less than" as requiring a direct existence proof, we require a proof of A < B to explicitly tell us to what precision we need to calculate A and B to find an upper bound on A less than or equal to a lower bound on B. (A is strictly less than its upper bounds, and B is strictly greater than its lower bounds.)

Some reasonable questions:

1. Why should we care what information the proof objects contain when they will be erased upon creating an executable program outside Coq?

2. If the proof doesn't tell us how to find the bounds on A and B showing A < B, can't we just find out by calculating the numbers at increasingly greater precision? We know it will eventually terminate since the bounds have to exist.

I think I have a partial answer to (1), which I'll talk about in a later post after I've completely tested my thoughts. But I'm still trying to figure these questions out.


>But you would never be sure the last digit was correct, since it could be rounded up or down.
Correction: more than just the last digit could be incorrect. For example, 0.9991 could be rounded to 1.000.


>But suppose we want a sum with precision 1/10 and so request the addends with precision 1/20, and the rational approximations we get back are 6/20 and 1/20. When we add them together, we get 7/20, it has to be rounded to either 6/20 = 3/10 or 8/20 = 4/10, introducing another 1/20 of error.

Thinking on this some more, it seems wasteful, so I've changed the implementation a little. Now "Rfun" is

Definition Rfun : Set :=
positive -> Q.

and the idea is that if you request the number with precision 1/n, any denominator is acceptable as long as the result is within 1/n of the correct value. The implementations of <=, < and + have been updated accordingly:

Definition Rfun_le (x y : Rfun) : Prop :=
forall tolx toly, x tolx - (1 # tolx) < y toly + (1 # toly).

Definition Rfun_lt (x y : Rfun) : Prop :=
exists tolx toly, x tolx + (1 # tolx) <= y toly + (1 # toly).

Definition Rfun_plus (x y : Rfun) : Rfun :=
fun tol => Qred (x (2 * tol)%positive + y (2 * tol)%positive).

The reason for the old behavior was to make computing decimals easier, so I've written a function "RQapprox_w_den" that emulates the old behavior, returning a fraction p/q within 1/q of the true value. I've also added an "RQapprox" function that at the moment just calls the "Rfun" type function in the number to return an approximation within 1/n of the true value. This is intended to create a layer of abstraction so functions can call "RQapprox" to get approximations and won't have to be updated if the underlying implementation of "R" is changed again.

Require Import Qround.

Definition RQapprox_w_den (den : positive) (x : R) : Q :=
Qfloor (RQapprox (2 * den) x * (Zpos den # 1) + (1 # 2)) # den.

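The rounding in "RQapprox_w_den" can be mirrored in Python (a sketch with hypothetical names; the oracle below fakes pi from a fixed rational):

```python
from fractions import Fraction
from math import floor

def approx_with_den(oracle, den):
    # oracle(n) returns a rational within 1/n of x; query at 1/(2*den),
    # then round the scaled value to the nearest integer numerator,
    # adding at most another 1/(2*den) of error -- total error < 1/den
    q = oracle(2 * den)
    p = floor(q * den + Fraction(1, 2))
    return Fraction(p, den)

PI = Fraction(3141592653589793, 10**15)  # stand-in "true value"
pi_oracle = lambda n: Fraction(round(PI * n), n)

print(approx_with_den(pi_oracle, 100))  # 157/50, i.e. 314/100
```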


>Definition Rfun_le (x y : Rfun) : Prop :=
> forall tolx toly, x tolx - (1 # tolx) < y toly + (1 # toly).

oops, should be

Definition Rfun_lt (x y : Rfun) : Prop :=
exists tolx toly, x tolx + (1 # tolx) <= y toly - (1 # toly).


File:open-implies-infinitesimal….png (18.73 KB,1195x326)

I've noticed a problem with this representation >>6899 which also impacts the slightly revised version in >>6935. Consider the function which, when asked to calculate a real number with precision 1/n, outputs the rational number 1/n. So for any positive integer n, this number is supposed to be greater than 0/n and less than 2/n. That makes the number greater than 0, but for any positive rational number p/q, the number is less than 2/(2q) and therefore less than p/q. So this implies an infinitesimal number. But there's nothing in the validity condition that forbids this function.

(If we go by our definition of "Rfun_lt" instead of the intended relationship of the number with its bounds, this number is not greater than the number represented by a function that always returns 0, which would be the most obvious way to define 0 in this system. But it is greater than what we would probably define to be its additive inverse, a function that when asked to calculate to precision 1/n returns -1/n.)
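The offending function is easy to poke at numerically (a Python sketch, with Fractions in place of the Coq rationals):

```python
from fractions import Fraction

# The oracle t -> 1/t claims the number lies in (1/t - 1/t, 1/t + 1/t)
# = (0, 2/t) for every t.
inf = lambda t: Fraction(1, t)

# It passes the overlap/validity check on any finite sample ...
valid = all(inf(t1) - Fraction(1, t1) < inf(t2) + Fraction(1, t2)
            for t1 in range(1, 50) for t2 in range(1, 50))
print(valid)  # True

# ... yet for any positive p/q, querying at tolerance 1/(2q) gives the
# upper bound 1/(2q) + 1/(2q) = 1/q <= p/q, so the number sits below
# every positive rational: an infinitesimal.
print(all(inf(2 * q) + Fraction(1, 2 * q) <= Fraction(p, q)
          for p, q in [(1, 10), (1, 1000), (3, 7)]))  # True
```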

There's nothing inherently wrong with infinitesimals, but we're trying to model the reals, and the reals don't include infinitesimals. If a number system includes infinitesimals, that makes it something other than the reals.

Fortunately, this is pretty easy to fix at this point. We just need to change the meaning of calculating to precision 1/n to mean calculating with error less than or equal to 1/n. Given this change, "<" must be changed to "<=" and vice versa in the definitions of "Rfun_le" and "Rfun_lt":

Definition Rfun_le (x y : Rfun) : Prop :=
forall tolx toly, x tolx - (1 # tolx) <= y toly + (1 # toly).

Definition Rfun_lt (x y : Rfun) : Prop :=
exists tolx toly, x tolx + (1 # tolx) < y toly - (1 # toly).


Note that although there's nothing wrong with infinitesimals, there is something wrong with equality not being transitive, that is, having numbers such that a = b and b = c but a <> c as mentioned in the parenthetical remarks. So even if you wanted infinitesimals, you'd need to fix things. On that note I should correct >>6921. This was not the definition of "less than or equal" that Conway used for the surreals. His definition was that the first number was less than all the second number's upper bounds, and the first number's lower bounds were all less than the second number (if we take x < y to mean not y <= x). Not sure why I got that mixed up in my head.


Working on a function to decide whether one real is less than or greater than another given a proof that they're not equal (so we don't get stuck forever testing at greater and greater precision). The fact that a proof of "less than" gives me two possibly different levels of precision rather than one is annoying me. Thinking about changing the validity condition so that the intervals the number is supposed to be in are required to be nested; that is, when you raise the precision, the lower bounds can only get higher, and the upper bounds can only get lower.


>Working on a function to decide whether one real is less than or greater than another given a proof that they're not equal (so we don't get stuck forever testing at greater and greater precision).

While thinking about this, I thought it might be easier to do comparison to zero first, which was my original goal anyway. (I'm thinking forward to division, where dividing by zero is disallowed but dividing by positive and negative numbers close to zero are both allowed and produce drastically different results.)

So I added a conversion from the rational type "Q" to the real type "R".

Definition Q2Rfun (x : Q) : Rfun :=
fun tol => x.

Theorem Q2Rfun_valid : forall x, is_valid_Rfun (Q2Rfun x).
intros x tol1 tol2.
apply Qplus_le_r.

Definition Q2R (x : Q) : R :=
exist is_valid_Rfun (Q2Rfun x) (Q2Rfun_valid x).

This is our first proof that a particular "Rfun" function is valid.


Some progress towards this. I've proven that if x is nonzero, then it either has a negative upper bound or a positive lower bound.

Theorem Rneq0_exists_witness :
forall x, Rneq x (Q2R 0) -> exists t,
RQapprox t x + (1 # t) < 0 \/ 0 < RQapprox t x - (1 # t).

Proof is in the repo. Usually Coq proof scripts aren't very informative to read unless you're stepping through them, anyway.

So even in code where the proof object has been thrown away (it contains the number we would have to feed the function to find one of these bounds), it should be possible (although not terribly efficient) to test every positive number until we find one that generates either a negative upper bound or a positive lower bound.

There's a module called ConstructiveEpsilon where they've worked out how to recover numbers from thrown-away proof objects in this manner. The non-obvious part (at least to me) is how to convince Coq that the function terminates, since Coq doesn't let you write functions that aren't proven to terminate.
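The search described above looks like this in Python (an unverified sketch with hypothetical names; Coq additionally demands the termination argument that ConstructiveEpsilon provides):

```python
from fractions import Fraction
from itertools import count

# Given an oracle for a number known to be nonzero, try t = 1, 2, 3, ...
# until some approximation interval excludes zero.  Termination is
# guaranteed by the (erased) existence proof, not by anything visible here.
def sign_of_nonzero(oracle):
    for t in count(1):
        q = oracle(t)  # |q - x| <= 1/t
        if q + Fraction(1, t) < 0:
            return -1  # negative upper bound: x < 0
        if q - Fraction(1, t) > 0:
            return 1   # positive lower bound: x > 0

print(sign_of_nonzero(lambda t: Fraction(1, 100)))   # 1
print(sign_of_nonzero(lambda t: Fraction(-1, 100)))  # -1
```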



I've also proven that the approximations generated for a real number x are within 1/n of x as defined by the "less than or equal to" function defined on R.

Theorem RQapprox_spec_l :
forall t x, Rle (Q2R (RQapprox t x - (1 # t))) x.

Theorem RQapprox_spec_u :
forall t x, Rle x (Q2R (RQapprox t x + (1 # t))).

That's pretty ugly; it ought to look like:

Theorem RQapprox_spec_l :
forall t x, Q2R (RQapprox t x - (1 # t)) <= x.

Theorem RQapprox_spec_u :
forall t x, x <= Q2R (RQapprox t x + (1 # t)).

Or even:

Theorem RQapprox_spec :
forall t x, Q2R (RQapprox t x - (1 # t)) <= x <= Q2R (RQapprox t x + (1 # t)).

But to do that, I need to tell Coq that "<=" means the function "Rle". I haven't done that yet.


Comparison to zero is done.

Regarding >>6925
>1. Why should we care what information the proof objects contain when they will be erased upon creating an executable program outside Coq?

In my previous testing with ConstructiveEpsilon, it would work when I used a normal constructive proof:

Require Import ConstructiveEpsilon.
Require Import Arith.
Require Import Classical.
Require Extraction.

Definition k := 0.

Lemma lol1 : exists n, k = n.
exists k.

Definition z1 := proj1_sig (constructive_indefinite_ground_description_nat (fun n => k = n) (Nat.eq_dec k) lol1).

Eval compute in z1.

When I had it compute z1, it successfully found 0.

But when I invoked the axiom of the excluded middle, the computation within Coq didn't work:

Lemma lol2 : exists n, k = n.
destruct (classic (exists n, k = n)) as [H|H].
- trivial.
- contradict H.
exists k.

Definition z2 := proj1_sig (constructive_indefinite_ground_description_nat (fun n => k = n) (Nat.eq_dec k) lol2).

Eval compute in z2.

Computing z2 didn't get hung in an infinite loop, but it spat out a program instead of a number. I'll show the output in the next post if there's room.


The output of z2:

= let (a, _) :=
    (fix linear_search (m : nat) (b : before_witness (fun n : nat => 0 = n) m) {struct b} : {n : nat | 0 = n} :=
       match
         match m as n return ({0 = n} + {0 = n -> False}) with
         | 0 => left eq_refl
         | S m0 => right (O_S m0)
         end
       with
       | left yes => exist (fun n : nat => 0 = n) m yes
       | right no =>
           linear_search (S m)
             (match b with
              | stop _ _ p =>
                  fun not_p : 0 = m -> False =>
                  match not_p p return (before_witness (fun n : nat => 0 = n) (S m)) with
                  end
              | next _ _ b0 => fun _ : 0 = m -> False => b0
              end no)
       end) 0
      (let (n, p) :=
         match classic (exists n : nat, 0 = n) with
         | or_introl H => H
         | or_intror H =>
             match H (ex_intro (fun n : nat => 0 = n) 0 eq_refl) return (exists n : nat, 0 = n) with
             end
         end in
       (fix O_witness (n0 : nat) :
          before_witness (fun n1 : nat => 0 = n1) n0 -> before_witness (fun n1 : nat => 0 = n1) 0 :=
          match n0 as n1
            return (before_witness (fun n2 : nat => 0 = n2) n1 -> before_witness (fun n2 : nat => 0 = n2) 0)
          with
          | 0 => fun b : before_witness (fun n1 : nat => 0 = n1) 0 => b
          | S n1 =>
              fun b : before_witness (fun n2 : nat => 0 = n2) (S n1) =>
              O_witness n1 (next (fun n2 : nat => 0 = n2) n1 b)
          end) n (stop (fun n0 : nat => 0 = n0) n p)) in
  a
  : nat


On the other hand, the extracted programs from

Recursive Extraction z1.
Recursive Extraction z2.

were the same other than the change in variable name.

So I expected when I had comparison with zero done, I would be able to get results in Coq when I used a constructive proof that the number wasn't equal to zero, and that I wouldn't get a satisfactory result otherwise. But my testing so far gives me different results.

Lemma Rneq10 : Rneq (Q2R 1) (Q2R 0).
apply Q2R_neq.

Eval compute in Rpositive_bool (Q2R 1) Rneq10.

This constructive proof gives me true, as expected.

Lemma Rneqn10 : Rneq (Q2R (-1)) (Q2R 0).
apply Q2R_neq.

Eval compute in Rpositive_bool (Q2R (-1)) Rneqn10.

This constructive proof gives me false, as expected.

Require Import Classical.

Lemma Rneq10 : Rneq (Q2R 1) (Q2R 0).
destruct (classic (Rneq (Q2R 1) (Q2R 0))) as [H|H].
- trivial.
- contradict H.
apply Q2R_neq.

Eval compute in Rpositive_bool (Q2R 1) Rneq10.

This non-constructive proof using the axiom of the excluded middle still gives me the correct answer of true.

Axiom Rneq10 : Rneq (Q2R 1) (Q2R 0).

Eval compute in Rpositive_bool (Q2R 1) Rneq10.

Even assuming 1 <> 0 as an axiom works.

Axiom Rneq00 : Rneq (Q2R 0) (Q2R 0).

Eval compute in Rpositive_bool (Q2R 0) Rneq00.

The bogus axiom 0 <> 0 gives me the bogus result true.

I'm still working out what's going on.


Oh, I see what's happening. Because I'm using numbers made with "Q2R", the conversion-from-rationals function, all the rational approximations are the same, so Coq can compute the rational approximation whose sign I'm testing without having to know what precision it's calculating to. I'll need to make some numbers another way to test with.


Addendum: If all of the rational approximations returned are positive, Coq can sometimes figure out that they're positive without knowing the precision. This works for testing, though:

Definition alt1fun : Rfun :=
fun t => 1 - (1 # t).

Lemma alt1fun_valid : is_valid_Rfun alt1fun.
intros t1 t2.
unfold alt1fun.
rewrite <- (Qplus_0_l 1) at 2.
apply Qplus_le_l.

Definition alt1 :=
exist is_valid_Rfun alt1fun alt1fun_valid.

I had to go through and change "Qed" to "Defined" in the existence proofs because otherwise Coq throws away the value in existence proofs and just keeps track of the fact that they're true. But once that was fixed, Rpositive_bool gave the correct answer of true when given a constructive proof without axioms:

Lemma Rneq_alt1_0 : Rneq alt1 (Q2R 0).
exists 4 % positive, 4 % positive.

Eval compute in Rpositive_bool alt1 Rneq_alt1_0.

But it didn't find the answer when given a proof using the axiom of excluded middle:

Require Import Classical.

Lemma Rneq_alt1_0_indirect : Rneq alt1 (Q2R 0).
destruct (classic (Rneq alt1 (Q2R 0))) as [H|H].
- trivial.
- contradict H.
exists 4 % positive, 4 % positive.

Eval compute in Rneq0_witness alt1 Rneq_alt1_0_indirect.


Even though it's not possible to go from "not less than or equal" to "greater than" without using the axiom of excluded middle, it can be shown pretty easily that the current definition of "less than or equal" is equivalent to "not greater than".

Theorem Rfun_le_not_lt : forall x y, Rfun_le x y <-> ~ Rfun_lt y x.
intros x y.
split.
- intros H1 [tx [ty H2]].
specialize (H1 ty tx).
contradict H2.
apply Qle_not_lt, H1.
- intros H tx ty.
apply Qnot_lt_le.
contradict H.
exists ty, tx.
apply H.

So I'm thinking about redefining "less than or equal" as "not greater than".


>So I'm thinking about redefining "less than or equal" as "not greater than".
After trying to rewrite a few theorems, it looks like it's more trouble than it's worth. The rewrite is easy but it adds needless steps to the proofs. The proof of "RQapprox_spec_l" was what made up my mind; a proof of the form a <= b -> b <= c -> a <= c would have to be replaced with a proof of the form ~ b < a -> b <= c -> ~ c < a.


Made some changes to type "R" to simplify things and make it easier to update the underlying implementation if need be.

"R" is now an inductive type instead of piggybacking off the subset type. An inductive type is specified by telling what type of objects you need to use to construct the type. In general, there can be more than one constructor, but for the current implementation of "R" there is only one constructor "Rmake". It takes as arguments a function for getting rational approximations of the number, and a proof that the function satisfies the validity condition.

Inductive R : Set :=
| Rmake (x : Rfun) (p : is_valid_Rfun x) : R.

The function "RQapprox", as before, can be used to get the rational approximations. I've changed it so the real number comes before the precision, because I thought the old order was more confusing, plus this makes it easier to get the function instead of the result of the function, if that's what you want. The theorem "RQapprox_spec" confirms that a function "RQapprox" satisfies the validity condition, so that proofs that need to know this don't have to reach into the "R" object to get the validity proof inside. The theorems formerly known as "RQapprox_spec_l" and "RQapprox_spec_r", which confirmed the intended lower and upper bounds on the number given the approximation, have been renamed to "RQapprox_lower_bound" and "RQapprox_upper_bound".

Everything is set up now to work through the interface of the functions "RQapprox", "Rmake", and "is_valid_Rfun" and the theorem "RQapprox_spec". If the implementation of "R" changes in the future, which it might, these functions can be updated to invoke the new interface while keeping their old behavior.


File:[MoyaiSubs] Mewkledreamy -….jpg (223.39 KB,1920x1080)

I don't understand any of this


There's a lot of technical stuff for sure. But I hope the general idea of what I'm doing is comprehensible. Do you understand what I mean when I say that I'm representing real numbers in the computer by programs that calculate an approximate value for the number, and that you can ask the program for as precise an approximation as you want?


So it's kind of like symbolic computation?


It's similar but there are differences.

Symbolic computation might take an expression like sqrt(2) * sqrt(2) and apply some rule to figure out it's exactly 2. And some expressions it wouldn't be able to do anything with, like 3 * sqrt(2) + 5.

The goal of this thing is to find the approximate value of something like sqrt(2) * sqrt(2) or 3 * sqrt(2) + 5. You could do that with ordinary floating point numbers, but there would be a rounding error at each step, and the accuracy of the answer at the end is dependent on how many steps are involved in the calculation. With this thing, the user decides what accuracy is needed for the final result, and the program figures out how to get it.

If it was trying to do 3 * sqrt(2) + 5 for example, then it would tell the addition program what accuracy you need, and the addition program would have to figure out how accurately 3 * sqrt(2) and 5 need to be computed. And this continues down the chain; the multiplication program, given an accuracy target from the addition program, would have to figure out how accurately 3 and sqrt(2) need to be computed. And so on.
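Here's a little Python sketch of that backward accuracy propagation (all the names here are made up for illustration; the actual project is Coq code). A "real" is a function from a requested absolute error bound to a rational approximation, and each operation splits its error budget among its inputs:

```python
from fractions import Fraction

# A "real" is a function: eps -> a Fraction within eps of the number.
def of_int(n):
    return lambda eps: Fraction(n)

def sqrt2(eps):
    # Bisect x^2 = 2 on [1, 2] until the interval is narrower than eps;
    # the midpoint of the final interval is then within eps of sqrt(2).
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo >= eps:
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def add(x, y):
    # To get x + y within eps, request each input within eps/2.
    return lambda eps: x(eps / 2) + y(eps / 2)

def scale(k, x):
    # To get k*x within eps (k a nonzero rational), request x within eps/|k|.
    k = Fraction(k)
    return lambda eps: k * x(eps / abs(k))

# 3 * sqrt(2) + 5, requested to within 1/1000:
approx = add(scale(3, sqrt2), of_int(5))(Fraction(1, 1000))
```

The addition asks each input for half the error budget, and the scaling by 3 divides its budget by 3, so sqrt(2) ends up being requested to within 1/6000, just as described above.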


I rewrote the comparison function, and now it works for comparison of any two numbers, not just comparison to zero.

I also moved a lot of proof details out of existence proofs into separate lemmas because the existence proofs need to use "Defined" (see >>6973) meaning Coq keeps the details of the proof in memory. The only details I need it to keep are how to calculate the number that's been proven to satisfy some property, so the details of verifying it satisfies the property can go into a proof using "Qed".

The new comparison function plus proofs ends up being a bit longer, so I moved it to a separate file. I'm considering trimming it down to just what it needs to compare with zero while keeping the other improvements. But first I want to work a bit on other stuff.


I realized a problem with my model. Consider the operation of squaring a number. Suppose we need the estimate to be within 0.01 of the correct result. If the number we're trying to square is 2, calculating it with an accuracy of 0.001 would work; 1.999^2 and 2.001^2 are both less than 0.01 away from 4. But if we're trying to square 2000, that's not enough accuracy; 1999.999^2 and 2000.001^2 are each about 4 away from the correct result 4000000. We can make it work by changing the accuracy to 0.000001; 1999.999999^2 and 2000.000001^2 are both within 0.01 of 4000000. Those of you who know a little calculus will recognize that the slope of y = x^2 is 2x. The larger the number I want to square, the greater the slope of the function, and the more accuracy I need to achieve my target accuracy on the square of the number.

The problem here is that we first have to know what the number is before knowing with what accuracy to calculate it.

One way to handle this would be to first request the number at some arbitrary accuracy, then use the first estimate of the number to calculate the accuracy we need. The problem with this solution is that it causes an exponential blowup in the number of function calls. Consider calculating (3*pi^2)^4. To calculate (3*pi^2)^4, it calculates 3*pi^2 twice, at two different levels of accuracy. Each calculation of 3*pi^2 calculates pi^2 twice, so we have to calculate pi^2 four times. And each calculation of pi^2 calculates pi twice, so we have to calculate pi eight times.

The easiest approach I've thought of so far is to have each real number come with an upper bound on its absolute value.
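To make the bound idea concrete, here's a Python sketch (hypothetical names, not the Coq code): if each number carries an upper bound M on its absolute value, squaring can compute the input accuracy it needs without first estimating the input:

```python
from fractions import Fraction

def square(x, bound):
    # bound is an a-priori upper bound M on |x|, supplied with the number
    # as suggested above.
    def approx(eps):
        # With |x| <= M and input error d <= 1:
        # |x'^2 - x^2| = |x' - x| * |x' + x| <= d * (2*M + 1),
        # so d = min(1, eps / (2*M + 1)) guarantees output error <= eps.
        d = min(Fraction(1), eps / (2 * bound + 1))
        v = x(d)
        return v * v
    return approx

# Squaring 2000 to within 0.01 drives the input request down to 1/400100:
x2000 = lambda eps: Fraction(2000)          # exact constant, ignores eps
result = square(x2000, bound=Fraction(2000))(Fraction(1, 100))
```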


>One way to handle this would be to first request the number at some arbitrary accuracy, then use the first estimate of the number to calculate the accuracy we need. The problem with this solution is that it causes an exponential blowup in the number of function calls.

>The easiest approach I've thought of so far is to have each real number come with an upper bound on its absolute value.

I thought of what might be a simpler way to do it. I could have each real number come with a suggested initial accuracy for computing the first estimate (and perhaps with a precalculated value at that accuracy). Then the convention would be that calculating the output of a function at the initial accuracy only requires calculating the inputs at the initial accuracy.


>The problem with this solution is that it causes an exponential blowup in the number of function calls. Consider calculating (3*pi^2)^4. To calculate (3*pi^2)^4, it calculates 3*pi^2 twice, at two different levels of accuracy. Each calculation of 3*pi^2 calculates pi^2 twice, so we have to calculate pi^2 four times. And each calculation of pi^2 calculates pi twice, so we have to calculate pi eight times.

Thinking about this some more, I don't think this kind of behavior can be avoided. For example, if we're calculating sin(x) via the Taylor series x - x^3/3! + x^5/5! - x^7/7! + ..., it will call the function to calculate an approximation to x with each term. This will have to be solved with memoization, which means that functions remember their result from the last time they were called with a given input instead of computing it all over again.
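A memoization wrapper along these lines is easy to sketch in Python (made-up names; the real implementation is Coq). Since the approximation functions are keyed on the requested accuracy, a plain dictionary cache works:

```python
from fractions import Fraction

def memoize(x):
    # Wrap an approximation function so that repeated requests at the same
    # accuracy reuse the stored result instead of recomputing it.
    cache = {}
    def approx(eps):
        if eps not in cache:
            cache[eps] = x(eps)
        return cache[eps]
    return approx

calls = []
def slow_pi(eps):
    calls.append(eps)           # record every real computation
    return Fraction(355, 113)   # stand-in approximation of pi

pi = memoize(slow_pi)
a = pi(Fraction(1, 100))
b = pi(Fraction(1, 100))        # second request is served from the cache
```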



I've decided to change the implementation of a real number to the following:

(1) a sequence of lower and upper bounds for the number, implemented as a stream.
(2) a function which when given a target accuracy, picks a pair of bounds out of the sequence that are sufficiently close to each other
(3) proofs that the bounds of (1) are nonempty and nested, and that the function (2) works

Streams in Coq are a co-inductive type, something I'm not very familiar with and am having to learn about. The gist is that co-inductive types can represent infinite data structures. Streams represent an infinite sequence of values, which are only actually calculated when needed. Once an entry in the sequence has been calculated, it doesn't need to be calculated again the next time someone asks for it. So they can be used to implement memoization.
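A rough Python analogue of such a memoized stream (illustrative only; Coq's Streams get this behavior from lazy evaluation rather than an explicit cache):

```python
from fractions import Fraction

class Stream:
    # Memoized infinite sequence: entries are computed on demand and cached,
    # so each entry is forced at most once, as with a Coq Stream.
    def __init__(self, gen):
        self._gen = gen
        self._cache = []
    def __getitem__(self, n):
        while len(self._cache) <= n:
            self._cache.append(next(self._gen))
        return self._cache[n]

def halving_bounds():
    # Nested interval bounds [1 - 1/2^n, 1] converging to 1.
    n = 0
    while True:
        yield (1 - Fraction(1, 2 ** n), Fraction(1))
        n += 1

bounds = Stream(halving_bounds())
```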

Here's where Streams are in Coq's standard library:

In the middle of rewriting what has to be rewritten now. In good news, I've got my first co-inductive proof working:

Theorem Q2R_all_nonempty :
forall x, QCI.all_nonempty (Q2R_stream x).
intro x.
cofix IH.
split; [|exact IH].
unfold QCI.nonempty.
apply Qle_refl.


Looking at this again, this is too complex. That's only one of three proof obligations just to implement a constant. The implementation details of memoization don't need to be exposed to the function performing the calculation, and they shouldn't be relevant to the proof that the function works. Starting another pass at the redesign.


I found a nice approach. I'm adding a calculated error to the estimates generated by the function, then letting the target accuracy be an arbitrary rational number and requiring target accuracy * error <= 1. Then the target accuracy can be set to zero which gives you the bounds on the number.


Yay! Addition has been defined.

Compute R.Qapprox_w_den (R.plus (R.ofQ 3) (R.ofQ 5)) 100.

= 8.00
: Q


Defining addition took a while because I was thinking about how best to prove the validity conditions for the result.

To review and clarify how my implementation of real numbers works right now:
I'm currently representing a real number as a structure with three parts:

(1) A function which takes a target accuracy as input and delivers a rational estimate of the number and an (inclusive) error bound on the estimate.

(2) A proof that the error bounds on the function are consistent; that is, that for any two target accuracies, the resulting error bounds
estimate(target1) - error(target1) <= x <= estimate(target1) + error(target1)
estimate(target2) - error(target2) <= x <= estimate(target2) + error(target2)
overlap. It is sufficient to prove that for any input target accuracies target1, target2,
estimate(target1) - error(target1) <= estimate(target2) + error(target2)
since this automatically implies that
estimate(target2) - error(target2) <= estimate(target1) + error(target1)
estimate(target1) - error(target1) <= estimate(target1) + error(target1)
estimate(target2) - error(target2) <= estimate(target2) + error(target2)
will also hold, by swapping or equating target1 and target2.

(3) A proof that the errors on the output achieve the target accuracy, as defined by
target * error(target) <= 1.
For positive values of the target accuracy, this means the error must be less than or equal to 1/target, a number that we can make as small as we want but will never be zero. If the target accuracy is zero or negative, any non-negative error is acceptable.

For part (1), the addition function delivers an estimate of the result (at accuracy = target) by adding the estimates of the inputs (at accuracy = 2 * target). The error on the estimate is obtained by adding the errors on the input estimates.

For the output of the addition function, part (3) is fairly simple to prove. When asked to calculate the result with accuracy = target, it in turn requests the inputs with accuracy = 2*target. Since the inputs are of the real number type, they satisfy part (3) of its definition, meaning the addition function can rely on the errors on the input achieving their target accuracy:
2 * target * error(input1) <= 1
2 * target * error(input2) <= 1
Adding these inequalities and dividing by 2, we get:
target * (error(input1) + error(input2)) <= 1
target * error(output) <= 1
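That scheme can be sketched in a few lines of Python (names invented for illustration; the project itself is Coq): a number maps a target accuracy to an (estimate, error) pair satisfying target * error <= 1, and addition requests its inputs at twice the target.

```python
from fractions import Fraction

# A number is a function: target -> (estimate, error) with target * error <= 1.
def of_q(q):
    q = Fraction(q)
    return lambda target: (q, Fraction(0))   # exact: error 0 meets any target

def plus(x, y):
    def approx(target):
        # Request both inputs at accuracy 2*target; their guarantees
        # 2t*ex <= 1 and 2t*ey <= 1 add up to t*(ex + ey) <= 1.
        xe, xerr = x(2 * target)
        ye, yerr = y(2 * target)
        return xe + ye, xerr + yerr
    return approx

est, err = plus(of_q(3), of_q(5))(Fraction(100))
```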


File:errors and overlap.png (7.04 KB,869x665)

Part (2) is the one I spent a lot of time thinking about. It's fairly easy to prove for addition, but I wanted to prove it in a way that would also be easy for more complicated functions.

To prove that the error bounds overlap, we first show that the errors are computed correctly. That is, that if x and y are both within a given error range
x0 - dx <= x <= x0 + dx
y0 - dy <= y <= y0 + dy
the result of adding x and y should be within the error range we compute:
(x0 + y0) - (dx + dy) <= x + y <= (x0 + y0) + (dx + dy)

The next step is fairly generic, so let's first imagine we're trying to prove part (2) for a function of one variable. See pic related. When we evaluate the function with two different target accuracies, it requests its input with two different target accuracies. These two requests may result in different estimates of the input number. But since the input is of the real number type, it satisfies part (2) of its definition, meaning the error ranges on the two estimates overlap. Since the error ranges overlap, there is a point within both ranges. We'll use the midpoint between the larger lower bound and the smaller upper bound. Since the point is within both ranges, if we evaluate the function on this point, the output will be within the error ranges for both output estimates, provided the errors were calculated correctly. Thus the error ranges for the output estimates overlap, and part (2) is satisfied. This argument easily generalizes to functions of multiple real numbers.
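The "point within both ranges" step is just interval arithmetic, as in this tiny sketch (hypothetical helper, not from the Coq code):

```python
from fractions import Fraction

def common_point(lo1, hi1, lo2, hi2):
    # For overlapping intervals [lo1, hi1] and [lo2, hi2], the midpoint of
    # [max(lo1, lo2), min(hi1, hi2)] lies inside both.
    lo, hi = max(lo1, lo2), min(hi1, hi2)
    assert lo <= hi, "intervals must overlap"
    return (lo + hi) / 2

p = common_point(Fraction(1), Fraction(3), Fraction(2), Fraction(5))
```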


File:setoid hell.jpg (50.79 KB,681x291)

We can do a bit better, though. Although I haven't gotten to it yet, I'll soon want to prove that adding equal numbers produces equal output. That is, if
a = c
b = d
a + b = c + d.

The notion of equal real numbers deserves a little explanation. Our real number type is based on ways of calculating approximations for a number. For any number, there are many ways of calculating it. For example, approximate values of pi can be calculated by the well-known but highly inefficient sum
4 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11 + 4/13 - 4/15 + ...
or the more practical sum
(16/5 - 16/(3*5^3) + 16/(5*5^5) - 16/(7*5^7) + ...) - (4/239 - 4/(3*239^3) + 4/(5*239^5) - 4/(7*239^7) + ...)
or various other methods. So we will have multiple different representations of the same number.

Coq's standard library has a definition of equality for which something like
a = c -> b = d -> a + b = c + d
can be proven trivially. We could try to prove this type of equality for different ways of calculating the same number, but this notion of equality is too strong for our purposes. The reason is that Coq's equality implies that all properties of equal objects are the same. But that isn't true for our representations of real numbers, because different ways of approximating a number will yield different estimates of the number.

Since we can't use Coq's definition of equality, we have to define our own. What we are constructing, a type plus a notion of equivalence on that type, is called a setoid. (Or rather, it will be a setoid once I prove reflexivity, symmetry, and transitivity.) The definition I have now is that two representations of a number are equal if the error ranges on the estimates from one always overlap the error ranges on the estimates from the other. Since we're using our own definition of equality, we'll need our own proof of
a = c -> b = d -> a + b = c + d
and corresponding proofs for all the other functions that take real numbers as inputs.

This proof isn't finished yet, but looking ahead to having to prove this, I've generalized the proof of part (2) for the output of addition a little. I'm letting the two overlapping estimates for the inputs come from two different representations of a number, provided those representations are equal. Having shown that the two output ranges overlap, we can easily show that the two output numbers are equal.


I made the generic part of this proof into a separate theorem. I couldn't figure out a way to state the theorem for functions of n real numbers that wasn't more trouble than it was worth, so I wrote separate theorems for functions of 1 and 2 real numbers. If I find myself needing to use the theorem for functions of 3 or 4 real numbers, I'll revisit this.


Transitivity of the "less than" ("lt") and "less than or equal" ("le") relations is now proven.

That is, if a < b and b < c, a < c.
If a <= b and b <= c, a <= c.
If a < b and b <= c, a < c.
If a <= b and b < c, a < c.

"Equals" ("eqv") proven an equivalence relation, meaning:

a == a
If a == b, b == a.
If a == b and b == c, then a == c.

And addition proven to be compatible with it:

If a == b and c == d, then a + c == b + d

Also proved that "equals" is equivalent to the negation of "neq", which may be a bad name since "neq", requiring a constructive existence proof, is not equivalent to the negation of "eqv". I think some people use the word "apart" for this reason.


So Coq has a very powerful system for declaring notations so that you can write x + y instead of plus x y everywhere. But apparently Coq doesn't allow you to import notations from a module without importing the entire fucking module with all of the namespace pollution that entails. You can declare a submodule within a source file and then declare the notations outside the module so that they can be used without polluting the namespace, but then you can't use your own notations. I looked at how their standard library solves the problem, and the answer was to declare all the notations twice, once inside, then again outside.


I guess that's what I'm going to have to do...


Notations for less than etc. and addition are now implemented. Also proved several other minor theorems.


Always a good feeling to delete code.

$ git commit -am "Simplify."
[master e5c1198] Simplify.
 1 file changed, 67 insertions(+), 133 deletions(-)

And it still computes, even inside Coq, without getting hung up on opaque stuff. Reworking an old example >>6973 to the changes in the code since then:

Require Import Coq.QArith.Qminmax.

Definition alt1fun : RE.estimator :=
  fun t => let err := (/ Qmax t 1)%Q in
    RE.make (1 - err) err.

Lemma alt1_consistent : RE.consistent alt1fun.
  intros t1 t2.
  unfold alt1fun, RE.min, RE.max.
  rewrite <- (Qplus_0_l 1) at 3.
  apply Qplus_le_l.
  apply Qle_shift_div_r.
  - apply Q.max_lt_iff.
  - rewrite Qmult_0_l.

Lemma alt1_meets_target : RE.meets_target alt1fun.
  intro t.
  apply Qle_shift_div_r.
  - apply Q.max_lt_iff.
  - rewrite Qmult_1_l.
    apply Q.max_le_iff.
    apply Qle_refl.

Definition alt1 :=
  R.make alt1fun alt1_consistent alt1_meets_target.

Lemma Rapart_alt1_0 : alt1 =/= (R.ofQ 0).
  exists 0%Q, 3%Q.

Compute Rlt_bool alt1 (R.ofQ 0) Rapart_alt1_0.

     = false
     : bool


I proved that real numbers that always return the same approximate values for a given target accuracy are equal. That made it easy to prove addition commutative,
Theorem plus_comm : forall x y, x + y == y + x.
since it follows from the commutativity of rational number addition. Associativity will require something extra because in x + (y + z), x gets computed at the same target accuracy as y + z, which means that y and z are computed at twice the target accuracy of x.


Associativity done. Wrote a function to compute numbers at a higher target accuracy, and proved that the output of the function was equal to the input. Then used that to fix the hangup with associativity.


4chanX is broken :DDDD


Fixed, thanks for bringing it to my attention. For the future I made >>7155 which will be a better place to talk about breakages.


I'm making another minor change to the specification of the real number type. Instead of requiring errors on the returned estimates to satisfy target_accuracy * error_bound <= 1 as explained in >>7045, I'm changing it to strictly less than one, target_accuracy * error_bound < 1. The reason is that I'm seeing many situations where I want an error smaller than a given value.

One simple example is finding the decimal approximation to a real number. Originally, before I distinguished between target accuracies (propagated backward) and error bounds (propagated forward), I wanted the decimal approximations to either be the correct answer rounded up or rounded down (although you wouldn't know which in any particular case). That is, if I computed a number to 6 decimal places and got the answer 0.463200, it would tell me that the actual number was strictly larger than 0.463199 and strictly smaller than 0.463201. To achieve this, we need to estimate the number with error strictly less than 0.0000005. That's because an estimate that would be rounded to 0.463200 could be anywhere between 0.4631995 (inclusive according to the usual rule) and 0.4632005 (exclusive according to the usual rule). If the estimate is 0.4631995 and the error is 0.0000005 or greater, 0.463199 would be a possible value of the number.
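The half-ulp requirement can be sketched like this (made-up names; x here is assumed to have a strict error guarantee, per the new convention):

```python
from fractions import Fraction
from math import floor

def decimal_approx(x, n):
    # x: function eps -> estimate with |x(eps) - value| < eps (strict).
    # Requesting error strictly below half the place value of the last
    # printed digit makes the result the true value rounded up or down.
    eps = Fraction(5, 10 ** (n + 1))
    est = x(eps)
    # Round the estimate to n decimal places (half rounds up).
    return Fraction(floor(est * 10 ** n + Fraction(1, 2)), 10 ** n)

third = lambda eps: Fraction(1, 3)   # exact access to 1/3, for the demo
d = decimal_approx(third, 6)
```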

Another recent example is the proof I'm working on of the fact that you can add a number to both sides of an inequality. That is, if x < y, then x + z < y + z.

The proof goes as follows. If x < y, then by the definition I've chosen for "less than", you can get the functions for calculating x and y to provide rational estimates x1, y2 with rational error bounds dx1, dy2 on the estimates respectively such that x1 + dx1 < y2 - dy2. Let's call the difference between these two numbers eps = (y2 - dy2) - (x1 + dx1). Let us request estimates of x + z and y + z each with errors smaller than eps/3. Then the calculation of x + z will request estimates x3, z3 of x and z with error bounds dx3, dz3 smaller than eps/6, and will return x3 + z3 as the estimate of the sum with error bound dx3 + dz3. Similarly, the calculation of y + z will request estimates y3, z3 with error bounds dy3, dz3 smaller than eps/6, and will return an estimate y3 + z3 and an error bound dy3 + dz3. We have

   (x3 + z3)  + (dx3 + dz3)
 < (x3 + z3)  - (dx3 + dz3) + 2eps/3  [since (dx3 + dz3) < eps/3]
 = (x3 - dx3) + (z3 - dz3) + 2eps/3
<= (x1 + dx1) + (z3 - dz3) + 2eps/3  [the type requires error bounds on different estimates x to be consistent]
 = [(y2 - dy2) - eps] + (z3 - dz3) + 2eps/3
 = (y2 - dy2) + (z3 - dz3) - eps/3
<= (y2 + dy3) + (z3 - dz3) - eps/3    [the type requires error bounds on different estimates y to be consistent]
 < (y2 - dy3) + (z3 - dz3)            [since dy3 < eps/6]
 = (y2 + z3)  - (dy3 + dz3)

which satisfies the definition of "less than" in x + z < y + z.

Note that the error bounds still have to use less than or equal; that is, an error bound dx on an estimate x1 is saying that x1 - dx <= x <= x1 + dx. Otherwise, you could define a function claiming that 0 < x < 2*eps for every positive rational number eps. But the real numbers aren't supposed to have infinitesimals. Changing the required relationship between target accuracies and error bounds to use "less than" does not create this problem, though. As long as "less than or equal" is used for the error bounds, any set of error intervals satisfies one of the following:

(1) All error intervals include zero, in which case the number being represented is zero.
(2) At least one error interval has a lower bound greater than zero, in which case there is a positive rational less than or equal to the number.
(3) At least one error interval has an upper bound less than zero, in which case there is a negative rational greater than or equal to the number.

In no case is the number an infinitesimal. Conditions (2) and (3) cannot happen simultaneously because the definition of the type disallows inconsistent bounds on the number.

There might be situations I haven't seen yet in which I need the error bounds of estimates to be less than or equal to some target value, rather than strictly less than. But in that case we can just request an error bound strictly less than our target, and it will automatically be less than or equal to our target. This is simpler than what I have to do under the current definition, which is arbitrarily picking some number between 0 and the number I need the error bounds to be strictly less than (for example, by dividing the number by 2), and requesting error bounds less than or equal to that.


Correction: the last three lines should contain y3, not y2:

   (x3 + z3)  + (dx3 + dz3)
 < (x3 + z3)  - (dx3 + dz3) + 2eps/3  [since (dx3 + dz3) < eps/3]
 = (x3 - dx3) + (z3 - dz3) + 2eps/3
<= (x1 + dx1) + (z3 - dz3) + 2eps/3  [the type requires error bounds on different estimates x to be consistent]
 = [(y2 - dy2) - eps] + (z3 - dz3) + 2eps/3
 = (y2 - dy2) + (z3 - dz3) - eps/3
<= (y3 + dy3) + (z3 - dz3) - eps/3    [the type requires error bounds on different estimates y to be consistent]
 < (y3 - dy3) + (z3 - dz3)            [since dy3 < eps/6]
 = (y3 + z3)  - (dy3 + dz3)


File:leningrad.jpg (78.29 KB,768x479)

>2. If the proof doesn't tell us how to find the bounds on A and B showing A < B, can't we just find out by calculating the numbers at increasingly greater precision? We know it will eventually terminate since the bounds have to exist.

This question was bugging me. The general question is this. Suppose we have a statement P(n) about natural numbers and an algorithm for computing whether P(n) is true for any given number (we say that P is "decidable"). Suppose also that although we haven't found an n such that P(n) is true, we have proven that the assumption that P(n) is false for all n leads to a contradiction. Then we should be able to find an n such that P(n) is true by simply testing the natural numbers one by one until we find one that works. Since we now have an algorithm for computing an n such that P(n) is true, it seems like this ought to count as a constructive proof that there exists an n such that P(n). The question is whether this can be proven in Coq or the intuitionistic logic it is based on, or if it would need to be added as an axiom in order to be used.
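The unbounded search itself is trivial to write down (here with a toy decidable predicate I picked for illustration); the subtlety is purely logical, since constructively you can't conclude it terminates from a mere contradiction:

```python
def unbounded_search(P):
    # Test P(0), P(1), ... in order; terminates iff some n satisfies P.
    # Classically, refuting "P(n) is false for all n" guarantees termination,
    # but concluding that constructively is exactly Markov's principle.
    n = 0
    while not P(n):
        n += 1
    return n

# Example decidable predicate: the first n whose square exceeds 2000.
n = unbounded_search(lambda k: k * k > 2000)
```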

After some searching, I found a thread on Reddit asking the question:

So apparently this is called "Markov's principle":

It is known to be independent of (cannot be proved true or false in) Martin-Löf type theory:
And also of CIC, the logic Coq is based on:

As an aside, the Reddit thread had an interesting explanation of an algorithm which solves NP-hard problems in polynomial time assuming P = NP:
>You enumerate proofs and algorithms. If there is any provably correct algorithm, then the pair of it and its proof will come up in finite time, and easily be checkable as correct. Then just run that polytime algorithm. Finite time plus polytime is still polytime.


Additive inverses and subtraction are now implemented. Additive inverses have been proven to be additive inverses, that is, x + (-x) == 0. Also proved some more theorems about adding to both sides of inequalities.


While planning out how to implement multiplication, I started thinking about >>7028 again. I had a plan to deal with the exponential blowup in the number of levels of accuracy to be computed by putting an optimizing layer between each number object and whatever was using the number. When you requested the number at a given accuracy, it would change the request to one of several preset levels of accuracy, and it would record the result so that the next time that accuracy was requested, it wouldn't have to be recalculated. That would fix the exponential blowup problem, but it was unsatisfying because it was still recalculating the same thing over and over again at many different levels of accuracy. I wanted to find a way to avoid this.

An alternate approach to what I'd been doing so far, at which I made an aborted attempt as described in >>7038, is representing numbers as a sequence that gets more and more accurate. To calculate a number to a desired accuracy, you just keep calculating the items in the sequence until the error level falls low enough. It's possible to get an exponential blowup with this approach too, in the form of exponentially many items to calculate before you get to the desired accuracy, but this can be ameliorated with an optimizing layer that skips items in the sequence when it's not converging fast enough. And there's still the problem of wasteful recalculation.

Then it occurred to me: What if we represent the numbers as infinite sums? Consumers of numbers, rather than receiving better and better approximations and doing a full recalculation based on the full number, would receive small corrections to the original number, and would make small corrections in their calculations. Although we haven't avoided repeatedly recalculating the numbers, with this approach, each recalculation only does new work. It seems much less wasteful. It's also closer to the ways we usually represent and calculate numbers, such as infinite decimals and Taylor series.

After planning out how to do addition, subtraction, and multiplication in this new representation, I've started rewriting the code. Some of the code won't change much; I'm keeping more or less the same definitions of "less than", "equals", and "apart". Some parts, like determining whether a number is positive or negative (given that it's apart from zero), have become much simpler. Right now, I'm a little more than halfway through the rewrite. I'm re-implementing addition next.


Finally got addition defined again. That was more work than I expected. (I could have done it more easily by just adding termwise, but I wanted to interleave terms, taking them from the series with the greater contribution to the remaining error first, to avoid calculating one of the numbers to far more precision than I needed. That turned out to be a bit complicated to prove convergent.)
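The interleaving idea, sketched in Python (my own toy version, not the Coq definition): merge two series term by term, always consuming from the series whose next term is larger in magnitude, using that term as a stand-in for its remaining-error contribution (valid here because both series have decreasing terms):

```python
from fractions import Fraction

def geometric(r):
    # Terms of the geometric series for 1/(1 - r): 1, r, r^2, ...
    r = Fraction(r)
    t = Fraction(1)
    while True:
        yield t
        t *= r

def add_interleaved(xs, ys, n):
    # Sum n terms of xs + ys, always taking the next term from whichever
    # series currently contributes more to the remaining error.
    x, y = next(xs), next(ys)
    total = Fraction(0)
    for _ in range(n):
        if abs(x) >= abs(y):
            total += x
            x = next(xs)
        else:
            total += y
            y = next(ys)
    return total

# 1/(1 - 1/2) + 1/(1 - 1/10) = 2 + 10/9; the fast-converging series for
# 10/9 is only consulted when its terms still matter.
s = add_interleaved(geometric(Fraction(1, 2)), geometric(Fraction(1, 10)), 12)
```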
