
/b/ - Boson Technology

Also known as Boson /g/


File:genius_comes_17.jpg (768.16 KB,1064x1500)

 No.6894

Today I begin a programming project to construct real numbers and various operations on them. I don't mean the wimpy limited precision floating point numbers that you've probably used if you've ever written a program in your life. Nor do I mean arbitrary precision numbers which can have as many digits as you like (until your computer runs out of memory) but still have irrecoverable rounding error because the digits end at some point. I mean exact representations of numbers that can be queried to any precision (which in practice may be limited by time and memory constraints).

This has been done many times before and probably much better than I will manage, so I don't expect to create anything really novel or useful. This is primarily for my own edification, to learn more about numbers and the computable/constructive reals. In this thread I will blog about what I'm doing and discuss the philosophy of numbers with anyone who's interested.
130 posts and 36 image replies omitted.

 No.9020

>>9019
it is trivial, but how is that a problem?
The same kind of argument is used for the semidecidability of the halting problem.

 No.9034

What wording would you use to describe a property which, being true for some natural number k, must be true for all subsequent natural numbers? The concept seems to come up a lot in this.

 No.9035

>>9034
Isn't that the principle of induction, or something related to it?

 No.9037

>>9035
That's a way of proving things. I'm talking about describing something already known. To give a specific example, one of the requirements chosen for x to represent a real number is that for any rational number eps > 0 there be a k such that width(x[k]) [the width of the kth interval being used to define x] is less than eps. But it's not just one k for which this is true; we can prove that since the widths of the intervals are decreasing, if it's true for some k, it's true for all k that come after.

Ideas for words to describe this property: increasingly true, increasing, monotone, monotonic, isotone, isotonic, weakening, upward, true upward, onward, true onward, expanding, improving, permanent, once-always

 No.9444

Haven't done much with the code lately, but I've been working out how to do arithmetic with infinite (signed-digit) decimals.

   . 5   2   5   2   5
   ---------------------------
   ±10 ±20 ±30 ±40 ±50 ±60 ±70
                           ._
           .   .   .   .   510
     0  42 510 232 500 222 490
   . 0  42  90 132 180 222 270
   ---------------------------
 × . 7   8   7   8   7   8
   ---------------------------
.6|     42  48  42  48  42  48
 6|         42  48  42  48  42
 6|             42  48  42  48
 6|                 42  48  42
 6|                     42  48
 6|                         42


(there seems to be a bug in the way the 2/22/22 thing interacts with code blocks)

 No.9460

I've started to work on the code a bit again. So far I've added a few more useful theorems, some of which I plan to use to prove important stuff about multiplication and inequalities.

>>8920
The following will be a common pattern, so it will also be convenient to prove as a lemma so we can repeat things less often:
¥ If a < b and c < d, then there is a natural number k such that a[k] < b[k] and c[k] < d[k]. [Recall that this is an abbreviation for max(a[k]) < min(b[k]) and max(c[k]) < min(d[k]).]
Suppose a < b and c < d. Then there is a k1 such that max(a[k1]) < min(b[k1]) and a k2 such that max(c[k2]) < min(d[k2]). Let k be the larger of k1 and k2. Then max(a[k]) ≤ max(a[k1]) < min(b[k1]) ≤ min(b[k]) and max(c[k]) ≤ max(c[k2]) < min(d[k2]) ≤ min(d[k]).

For example, this can be used to shorten the proof that < is transitive:
¥ If x < y and y < z, then x < z.
Suppose x < y and y < z. Then there is a k such that max(x[k]) < min(y[k]) and max(y[k]) < min(z[k]). Then max(x[k]) < min(y[k]) ≤ max(y[k]) < min(z[k]). Thus x < z.

>>8925
Although they follow obviously from the compatibility of < with =, for technical convenience I added some theorems about transitivity of < and =:
¥ If x < y and y = z, then x < z.
¥ If x = y and y < z, then x < z.
These are trivial consequences of the fact that x = y implies x ≤ y and the already proven theorems about transitivity of < and ≤.

>>8975
Another property I forgot: The additive inverse of 0 is 0.
¥ -0 = 0
Proof is by the usual automatic method.

It's also convenient to know
¥ If x = -y, then y = -x.
Suppose x = -y. Then y = -(-y) = -x.

>>8984
Another useful fact: A number is positive if and only if its additive inverse is negative. A number is negative if and only if its additive inverse is positive.
¥ x > 0 if and only if -x < 0.
Suppose x > 0. Then -x < -0 = 0. Suppose -x < 0. Then -x < -0, and x > 0.
¥ x < 0 if and only if -x > 0.
Suppose x < 0. Then -x > -0 = 0. Suppose -x > 0. Then -x > -0, and x < 0.

A number is greater than another if and only if their difference is positive:
¥ x < y if and only if y - x > 0
This follows from adding (-x) to both sides of the inequality.

>>8992
Another useful equation about multiplication:
¥ (-x) * (-y) = x * y.
Proof is by the usual automatic method.
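
To make the interval machinery above a bit more concrete, here is a rough Python sketch (just an illustration with made-up names like sqrt2 and less_than; the actual development is in Coq). A real number is modeled as a function from k to a pair of rational bounds, and x < y is tested by searching for a k that separates the intervals, as in the lemma above. The search only semi-decides the relation, so the sketch gives up after a fixed number of steps.

from fractions import Fraction as Q
from math import isqrt

def sqrt2(k):
    # nested rational bounds on sqrt(2), one more decimal digit per step
    scale = 10 ** k
    lo = isqrt(2 * scale * scale)      # floor(sqrt(2) * 10^k)
    return Q(lo, scale), Q(lo + 1, scale)

def one(k):
    return Q(1), Q(1)

def less_than(x, y, max_steps=100):
    # x < y holds iff some k has max(x[k]) < min(y[k]); all we can do is search
    for k in range(max_steps):
        if x(k)[1] < y(k)[0]:
            return k                   # witness index
    return None                        # unknown within the step budget

print(less_than(one, sqrt2))           # 1, since max(one[1]) = 1 < min(sqrt2[1]) = 1.4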

 No.9461

Now let's start to cover the relationship between multiplication and inequalities.

We start with the fact that multiplying two positive numbers results in a positive number.
¥ x > 0 and y > 0 imply x * y > 0
Suppose x > 0 and y > 0. Then there is a k such that x[k] > [0,0]Q and y[k] > [0,0]Q. The lower bound min((x*y)[k]) can be written as r*s, where r ∈ x[k] and s ∈ y[k]. Since r ≥ min(x[k]) > 0 and s ≥ min(y[k]) > 0, we have min((x*y)[k]) > 0. Therefore x*y > 0.

Multiplying a positive by a negative results in a negative.
¥ x > 0 and y < 0 imply x * y < 0
Suppose x > 0 and y < 0. Then -y > 0. Thus -(x*y) = x*(-y) > 0. It follows that x*y < 0.

Multiplying two negatives makes a positive.
¥ x < 0 and y < 0 imply x * y > 0
Suppose x < 0 and y < 0. Then -x > 0 and -y > 0. Therefore x*y = (-x)*(-y) > 0.

We can also go the opposite direction:
¥ If y > 0, then x * y > 0 implies x > 0.
Suppose y > 0 and x*y > 0. Then there is a k such that min(y[k]) > 0 and min((x*y)[k]) > 0. Because min(x[k]) ∈ x[k] and min(y[k]) ∈ y[k], we have min(x[k]) * min(y[k]) ∈ (x*y)[k]. Thus min(x[k]) * min(y[k]) ≥ min((x*y)[k]) > 0. Combined with min(y[k]) > 0, this implies min(x[k]) > 0. Therefore x > 0.

From these we can establish
¥ If z > 0, then x < y if and only if x * z < y * z.
1. Suppose z > 0 and x < y. Then y - x > 0, and y*z - x*z = (y-x)*z > 0. Therefore x*z < y*z.
2. Suppose z > 0 and x*z < y*z. Then (y-x)*z = y*z - x*z > 0, and y - x > 0. Therefore x < y.

A quick lemma, provable automatically:
¥ (x - y) * (-z) = y * z - x * z

If we multiply by a negative number, the inequalities are reversed:
¥ If z < 0, then x < y if and only if x * z > y * z.
Suppose z < 0. Note that -z > 0.
1. Suppose x < y. Then y - x > 0, and x*z - y*z = (y-x)*(-z) > 0. Therefore x*z > y*z.
2. Suppose x*z < y*z. Then (x-y)*(-z) = y*z - x*z > 0, and x - y > 0. Therefore x < y.

Because x ≤ y is defined as x > y being false, we have:
¥ If z > 0, then x ≤ y if and only if x * z ≤ y * z.
¥ If z < 0, then x ≤ y if and only if x * z ≥ y * z.

We can also prove
¥ If z <> 0, then x <> y if and only if x * z <> y * z.
Suppose z <> 0. Then z < 0 or z > 0.
If z < 0, then x <> y, defined as x < y or x > y, is equivalent to x*z > y*z or x*z < y*z, which is the same as x*z <> y*z.
If z > 0, then x <> y, defined as x < y or x > y, is equivalent to x*z < y*z or x*z > y*z, which is the same as x*z <> y*z.

Because x = y is defined as x <> y being false, we immediately have
¥ If z <> 0, then x = y if and only if x * z = y * z.

Because multiplication is commutative, everything we have proven about multiplication on the right also works on the left:
¥ x < 0 and y > 0 imply x * y < 0
¥ If x > 0, then x * y > 0 implies y > 0.
¥ If z > 0, then x < y if and only if z * x < z * y.
¥ If z < 0, then x < y if and only if z * x > z * y.
¥ If z > 0, then x ≤ y if and only if z * x ≤ z * y.
¥ If z < 0, then x ≤ y if and only if z * x ≥ z * y.
¥ If z <> 0, then x <> y if and only if z * x <> z * y.
¥ If z <> 0, then x = y if and only if z * x = z * y.
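
For anyone following along, here is one standard interval multiplication, consistent with how it is used in these proofs, as a Python sketch (made-up names, not the Coq code): the endpoints of (x*y)[k] are the smallest and largest products of endpoints of x[k] and y[k], which is why min((x*y)[k]) is always some r*s with r ∈ x[k] and s ∈ y[k].

from fractions import Fraction as Q

def mul(x, y):
    def z(k):
        (a, b), (c, d) = x(k), y(k)
        products = (a * c, a * d, b * c, b * d)
        return min(products), max(products)
    return z

# x > 0 and y > 0 imply x * y > 0: once both lower bounds are positive,
# every endpoint product is positive, so the product's lower bound is too.
x = lambda k: (Q(1, 2) - Q(1, k + 2), Q(1, 2) + Q(1, k + 2))   # converges to 1/2
y = lambda k: (Q(3) - Q(1, k + 1), Q(3) + Q(1, k + 1))         # converges to 3
print(mul(x, y)(10))                   # both bounds positive, bracketing 3/2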

 No.9462

File:65f2fdcb4e33a95257136bf053….jpg (308.31 KB,600x766)

What do you think would be a good way to represent signed digits on Kissu? The idea is that (using base ten as an example) instead of using the digits 0 through 9, you use the digits -9 through 9. For example, if a number has 6 in the tens place and -9 in the ones place, that's 6*10 + -9*1 = 51.

So I'm looking for a way to write the negative digits.

With a line over them is one option. You can do this with Unicode tricks, but support varies.
Do you see a line over the 3 or the 4 below? It's supposed to be over the 3, but some fonts might not render it correctly.
3̅4

In >>9444 I used code tags to make things monospaced and an underscore on the line above. This works, but has limitations, including the need to add a mostly blank line above the numbers.
_
34


Strikethrough with Unicode is another option, but again, support varies.
3̵4

There's also strikethrough with markup. And various other formatting options.
34
But they won't work (and shouldn't) inside code tags. The monospace font of code tags might be needed to line up numbers in a calculation.
[str]3[/str]4

There are a lot of altered versions of numbers in Unicode, and some of them seem to be better supported than the overline or strikethrough, but they seem to often mess up alignment, even inside code blocks.
3⑨4⑨4⑨4
3949494


I could try using letters, but then you have to do extra mental work to convert it to the number in your head, and that's annoying.

Any options I haven't thought of?

 No.9463

File:[MoyaiSubs] Mewkledreamy -….jpg (374.35 KB,1920x1080)

>>9462
>What do you think would be a good way to represent signed digits on Kissu?
Uhh... create new numbers from scratch?
Yes, I think that's the ideal solution. We will have to name and draw them, but I think math has needed this for a while.

 No.9464

>>9463
That could be fun, although we'd need Vern to install a special font in order to be able to type them. And it would be best if they bear some connection to their positive counterparts so that we don't have to memorize a whole new plus and times table just to calculate with them.

 No.9465

>>9464
Oh. Yeah, I guess you'd actually need something like that, huh. Isn't there some weird unicode thing that makes the text afterwards show backwards? I remember people doing it on desuarchive

 No.9466

File:Komatta_Kotoko_2.png (1.37 MB,1280x720)

>create new numbers
what the hell is going on here

 No.9467

>>9465
There's a thing that makes the characters go from right to left instead of left to right, but it doesn't change the characters themselves.

There are also scripts for writing ʇxǝʇ uʍop-ǝpᴉsdn or with various other transformations. They work by finding characters that look similar to a flipped version of your original character. That might be useful, but it wouldn't work for the number 8.

Let's try it anyway (using https://www.upsidedowntext.com/):
123456789
ƖᄅƐㄣϛ9ㄥ86

111111111133333333335555555555
ƖƖƖƖƖƖƖƖƖƖƐƐƐƐƐƐƐƐƐƐϛϛϛϛϛϛϛϛϛϛ

Some but not all of the characters break monospacing on my machine. And of course upside-down doesn't work well for 6, 8, and 9.

mirrored text using https://lingojam.com/MirrorYourText
123456789
ƖςƐμटმ٢8୧

111111111122222222223333333333444444444466666666667777777777
ƖƖƖƖƖƖƖƖƖƖςςςςςςςςςςƐƐƐƐƐƐƐƐƐƐμμμμμμμμμμმმმმმმმმმმ٢٢٢٢٢٢٢٢٢٢

Looks better. The choices for 5 and 9 still break monospacing for me a bit, and of course 8 still doesn't work. Still a lot of the numbers aren't very recognizable after transformation.

 No.9468

>>9466
Not really new numbers but new numerals.

Here's the problem. Suppose we want to add two real numbers starting with 0.333 and 0.666. If the numbers really go on like that forever, you could just add the digits pairwise to get 0.999... (needs some work to prove it's valid, but it is), which is the same as 1. But what if somewhere in the 0.666... number there's a 5 or a 7? If there's a 5 somewhere your answer has to be a little less than 1, meaning it has to start with 0.999, and if there's a 7 somewhere your answer has to be a little greater than 1, meaning it starts with 1.000. You don't know which to use until you find that digit. But since the numbers might add up to 1 exactly, you may never find that digit. That means you can never write down any digits of the answer.

If you add digits -9 through -1, this problem goes away. You can start with 1.000 in all three cases. If you need the answer to be slightly less than 1, you use a negative digit when needed. If you need the answer to be slightly bigger than 1, you can use a positive digit.
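
To illustrate the fix with a Python sketch (illustrative names; digits range over -9..9 in base ten as above): once you have committed to the digits 1.000, a later negative digit can still pull the value below 1 and a later positive digit can push it above 1, so no lookahead into the inputs is needed before emitting those leading digits.

from fractions import Fraction as Q

def signed_decimal(int_part, digits):
    # value of int_part followed by signed fractional digits in -9..9
    return int_part + sum(Q(d, 10 ** (i + 1)) for i, d in enumerate(digits))

print(signed_decimal(1, [0, 0, 0, -1]))   # 9999/10000, slightly below 1
print(signed_decimal(1, [0, 0, 0,  0]))   # 1, exactly 1
print(signed_decimal(1, [0, 0, 0,  1]))   # 10001/10000, slightly above 1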

 No.9469

>>9467
Hmm, that stuff looks kind of confusing on second thought.
I guess you could highlight the stuff

 No.9471

File:die nerds.png (1021.29 KB,989x754)


 No.9477

I changed some of the theorems in >>8966 since it turns out to be useful to prove the implication both ways:
¥ Assuming Markov, x ≤ y is false if and only if x > y is true.
¥ Assuming Markov, x ≥ y is false if and only if x < y is true.
¥ Assuming Markov, "w < x or y < z" is not false if and only if "w < x or y < z" is true.
¥ Assuming Markov, x = y if and only if x <> y is true.

The proof in the reverse direction (P -> not not P) is trivial:
Suppose P. If we assume not P, we have a contradiction. Therefore not not P.

 No.9480

>>9477
Also useful to show
¥ Assuming the law of the excluded middle, not (not P) if and only if P.
1. Suppose not (not P). Either P is true, in which case we are done, or P is false, contradicting not (not P).
2. Suppose P. If we assume not P, we have a contradiction. Therefore not not P.

 No.9481

>>9468
Interesting... I think I kind of get it.

 No.9482

Now back to multiplication and inequalities. But first, let's prove a useful lemma about apartness.
¥ x <> Q2R(r) if and only if there exists a k such that r ∈ x[k] is false.
1. Suppose x <> Q2R(r). Then there is a k such that max(x[k]) < r or a k such that min(x[k]) > r. Both contradict r ∈ x[k].
2. Suppose r ∈ x[k] is false. Either r < min(x[k]) or r ≥ min(x[k]). If r < min(x[k]), we have Q2R(r) < x, and we are done. If r ≥ min(x[k]), it cannot be the case that r ≤ max(x[k]). Therefore r > max(x[k]) and Q2R(r) > x.

¥ x * y <> 0 if and only if x <> 0 and y <> 0
1. Suppose x*y <> 0. Then there is a k such that 0 ∈ (x*y)[k] is false. It cannot be the case that 0 ∈ x[k] or 0 ∈ y[k], since either would make 0 a product of an element of x[k] with an element of y[k], and hence 0 ∈ (x*y)[k]. Therefore x <> 0 and y <> 0.
2. Suppose x <> 0 and y <> 0. Either x < 0 or x > 0; similarly, either y < 0 or y > 0. In all cases, x*y < 0 or x*y > 0. Therefore x*y <> 0.

Since equality is defined as the negation of apartness, we immediately have
¥ x * y = 0 if and only if it is not true that both x <> 0 and y <> 0

If we allow the use of some axioms, we can restate this in more familiar ways.

First, a quick lemma.
¥ not (P or Q) is equivalent to (not P) and (not Q)
1. Suppose it is not (P or Q). If we assume P, we can prove P or Q, which contradicts not (P or Q). Therefore not P. By a similar argument, we can conclude not Q. Therefore (not P) and (not Q).
2. Suppose not P and not Q. If we assume P or Q, we either have P, contradicting not P, or Q, contradicting not Q. Therefore not (P or Q).

¥ Assuming Markov's principle, x * y = 0 if and only if it is not false that x = 0 or y = 0.
Assuming Markov's principle, x <> 0 is equivalent to the negation of x = 0, and y <> 0 is equivalent to the negation of y = 0. So x * y = 0 is equivalent to not ((not x = 0) and (not y = 0)), which is equivalent to not (not (x = 0 or y = 0)).

As before, by "false" I mean that we can deduce a contradiction from the statement. By "not false" I mean that if we assume that we can deduce a contradiction from the statement, then we can deduce a contradiction. In intuitionistic logic, a statement being proven "not false" does not in general prove the statement true. We can prove a "not false" statement true by invoking the axiom of excluded middle (see >>9480), but that would make our proof non-constructive. As mentioned in >>8959, "It is not false that P or Q" can be thought of as a way of expressing in the setting of intuitionistic logic what we would call just "P or Q" in classical logic. If we've proven this, we know that either P or Q is true, but we don't necessarily have a constructive proof that would tell us which one is true.

Is there a way to show that no constructive proof exists? I was thinking about this in >>9015. I suspect that there's no way to prove it constructively, but I haven't proven that yet.

¥ Assuming the law of the excluded middle, x * y = 0 if and only if x = 0 or y = 0.
LEM implies Markov, and makes "it is not false that" equivalent to the statement being true.

 No.9486

A few more quick theorems before we move on to division.

¥ x * y > 0 if and only if (x > 0 and y > 0) or (x < 0 and y < 0)
1. Suppose x*y > 0. Then x*y <> 0. We have x <> 0, meaning x < 0 or x > 0, and y <> 0, meaning y < 0 or y > 0. We can eliminate the x < 0, y > 0 and x > 0, y < 0 cases because they imply x*y < 0. Therefore x > 0 and y > 0 or x < 0 and y < 0.
2. Suppose either x > 0 and y > 0 or x < 0 and y < 0. In either case we have x*y > 0.

Similarly
¥ x * y < 0 if and only if either x > 0 and y < 0 or x < 0 and y > 0.

¥ If x ≥ 0 and y ≥ 0, then x * y ≥ 0.
Suppose x ≥ 0 and y ≥ 0. Assume to the contrary that x*y < 0. Either x > 0 and y < 0, contradicting y ≥ 0, or x < 0 and y > 0, contradicting x ≥ 0. Therefore x*y ≥ 0.

¥ If y ≥ 0, then x * y > 0 implies x > 0.
Suppose y ≥ 0 and x * y > 0. Either x > 0 and y > 0, in which case we are done, or x < 0 and y < 0, contradicting y ≥ 0.

¥ If z ≥ 0, then x * z < y * z implies x < y.
Suppose z ≥ 0 and x * z < y * z. Then (y - x) * z = y * z - x * z > 0, and y - x > 0 by the previous theorem. Thus x < y.

Because multiplication is commutative, we also have
¥ If x ≥ 0, then x * y > 0 implies y > 0.
¥ If z ≥ 0, then z * x < z * y implies x < y.

 No.9490

>>8984
Some more theorems about addition:
¥ The sum of two positive numbers (x, y > 0) is a positive number.
¥ The sum of two negative numbers (x, y < 0) is a negative number.
¥ The sum of two numbers which are ≥ 0 is ≥ 0.
¥ The sum of two numbers which are ≤ 0 is ≤ 0.

These are immediate consequences of
¥ If w < x and y < z, then w + y < x + z.
¥ If w ≤ x and y ≤ z, then w + y ≤ x + z.
and the fact that 0 + 0 = 0.

 No.9491

>>8967
Another addition: some obvious consequences of
> r < s if and only if Q2R(r) < Q2R(s).
> r <> s if and only if Q2R(r) <> Q2R(s).
are
¥ 0 < 1
¥ 0 <> 1

 No.9492

Checking our progress by comparing what we've proven so far with the axioms of the reals according to baby Rudin.

Order axioms
> (i) If x ∈ S and y ∈ S then one and only one of the statements x < y, x = y, y < x is true.
The "one" part is proven here >>8947 (assuming LEM / using double-negated or).
The "only one" part is proven in >>8922 (note that by our definitions proving that x = y implies x ≤ y and x ≥ y is the same as proving it implies the falsehood of y < x and x < y)

> (ii) If x, y, z ∈ S, if x < y and y < z, then x < z.
Proven in >>8920

Field axioms
> (A1) If x ∈ F and y ∈ F, their sum x + y is in F.
We defined the function in >>8972 and all functions in Coq are total.

> (A2) Addition is commutative: x + y = y + x.
> (A3) Addition is associative: (x + y) + z = x + (y + z) for all x, y, z ∈ F.
> (A4) F contains an element 0 such that 0 + x = x for every x ∈ F.
Proven in >>8973

> (A5) To every x ∈ F corresponds an element -x ∈ F such that x + (-x) = 0.
Proven in >>8975

> (M1) If x ∈ F and y ∈ F, then their product xy is in F.
We defined the function in >>8991 and all functions in Coq are total.

> (M2) Multiplication is commutative: xy = yx for all x, y ∈ F.
> (M3) Multiplication is associative: (xy)z = x(yz) for all x, y, z ∈ F.
Proven in >>8992

> (M4) F contains an element 1 ≠ 0 such that 1x = x for every x ∈ F.
That 1 ≠ 0 is proven in >>9491 (note <> is symmetric per >>8922, and if we want to be picky, we can get the negation of 1 = 0 by noting that it is the double negation of 1 <> 0, and statements in intuitionistic logic imply their double negation).
That 1x = x is proven in >>8992.

> (M5) If x ∈ F and x ≠ 0 then there exists an element 1/x ∈ F such that x · (1/x) = 1.
Still needs proving.

> (D) The distributive law x (y + z) = xy + xz holds for all x, y, z ∈ F.
Proven in >>8992

Ordered field axioms
> (i) x + y < x + z if x, y, z ∈ F and y < z,
Proven in >>8984

> (ii) xy > 0 if x ∈ F, y ∈ F, x > 0, and y > 0.
Proven in >>9461

Least-upper-bound property
> Suppose S is an ordered set, E ⊂ S, and E is bounded above. Suppose there exists an α ∈ S with the following properties:
> (i) α is an upper bound of E.
> (ii) If γ < α then γ is not an upper bound of E.
> Then α is called the least upper bound of E [that there is at most one such α is clear from (ii)] or the supremum of E, and we write α = sup E.
> An ordered set S is said to have the least-upper-bound property if the following is true:
> If E ⊂ S, E is not empty, and E is bounded above, then sup E exists in S.
Still needs proving.

 No.9504

File:6f252db240ee46616a7616cb07….png (2.5 MB,1400x1980)

Division is multiplication by the reciprocal of a number, also called the multiplicative inverse. So we should first define the reciprocal.

The approach we have taken with all the other operations so far is to take the bounds on the inputs at a particular step number and ask, if we know the inputs are within this interval, what are the smallest and largest possible values for the output? That gives us the interval for the output at the same step number.

Let's try the same thing with the reciprocal now. If we know our value is between r and s inclusive (r ≤ x ≤ s), what can we say about the value of its reciprocal?

If r and s are both positive, then we have bounds 1/s ≤ 1/x ≤ 1/r.
If r and s are both negative, then we have bounds 1/s ≤ 1/x ≤ 1/r again.
But what about when r is negative and s is positive? Then there's no lower or upper limit on what 1/x can be.

Division by zero is undefined, so if a number is zero, we shouldn't be trying to take its reciprocal in the first place. But some of the intervals for a nonzero number could contain zero. On the other hand, the intervals for a nonzero number can't contain zero forever. Eventually there has to be a lower bound greater than zero or an upper bound less than zero.

We can fix this issue by searching for that first interval which does not contain zero and using it as a replacement for all the prior intervals which do contain zero. After this preprocessing, we can use the usual approach.

While writing the comparison function, we've already written a function >>8963 to perform this sort of search. It takes real numbers w, x, y, z and a proof of w < x or y < z, and finds the first k such that max(w[k]) < min(x[k]) or max(y[k]) < min(z[k]). By feeding it the numbers x, 0, 0, x and a proof of x <> 0 (defined as x < 0 or 0 < x), it is guaranteed to find a k such that max(x[k]) < 0 or 0 < min(x[k]), or in other words, 0 ∉ x[k]. Let's call this k value find_zeroless(x, p) where x is the number and p is the proof that x <> 0.

We'll adopt the practice used in Coq of denoting the reciprocal of x as /x to distinguish it from the division 1/x, which is to be defined in terms of /x as 1/x = 1 · /x. For division of rational numbers, Coq handles division by zero by simply assigning a bogus value of zero to the result. This won't work for real numbers because we need to explicitly use the proof that x <> 0, and there's no way to decide whether x <> 0 or x = 0 by an algorithm given x. So in order to compute the reciprocal of x, we'll need to pass both x and a proof p that x <> 0. Let's denote this as /x †p. Sometimes when writing about /x and find_zeroless, where there is no ambiguity, we will omit the proof argument for brevity, but it will still be there, at least until program extraction.

We define
¥ /[a, b]Q = [/b, /a]Q
¥ (/x †p)[k] = /(x[max{k, find_zeroless(x, p)}])

Our reciprocal operation on rational intervals behaves as we want provided 0 is not a member:
¥ r ∈ [a, b]Q implies /r ∈ /[a, b]Q provided 0 ∉ [a, b]Q.
Either 0 < a ≤ r ≤ b or a ≤ r ≤ b < 0. In either case /b ≤ /r ≤ /a.

And in the cases we use it, 0 will indeed not be a member:
¥ If k ≥ find_zeroless(x), then 0 ∉ x[k].
We already have 0 ∉ x[find_zeroless(x)]. Since x[k] ⊆ x[find_zeroless(x)], this implies 0 ∉ x[k].
In particular it follows that
¥ 0 ∉ x[max{k, find_zeroless(x)}].

Now let's prove that /x meets the conditions to be a member of our real number type.

First, nestedness:
>That each interval contains the endpoints of itself and all subsequent intervals. That is, if k2 ≥ k1, then min(x[k2]) ∈ x[k1] and max(x[k2]) ∈ x[k1].
Suppose k2 ≥ k1. Let k1' = max{k1, find_zeroless(x)} and k2' = max{k2, find_zeroless(x)}. We also have k2' ≥ k1', and thus min(x[k2']) ∈ x[k1'] and max(x[k2']) ∈ x[k1']. Since 0 ∉ x[k1'], it follows that
min((/x)[k2]) = /max(x[k2']) ∈ /(x[k1']) = (/x)[k1]
max((/x)[k2]) = /min(x[k2']) ∈ /(x[k1']) = (/x)[k1].

Some quick lemmas before proving the next part:
¥ If a ≤ b and 0 ∉ [a, b]Q, then min{a², b²} > 0.
Either 0 < a ≤ b or a ≤ b < 0. In either case a² > 0 and b² > 0. Therefore min{a², b²} > 0.

¥ If r, s ∈ [a, b]Q and 0 ∉ [a, b]Q, then r·s ≥ min{a², b²}.
Either 0 < a ≤ r, s, in which case r·s ≥ a·s ≥ a² ≥ min{a², b²}, or r, s ≤ b < 0, in which case r·s ≥ b·s ≥ b² ≥ min{a², b²}.

Now we can prove convergence:
>That for any positive rational number, there will eventually be an interval in the sequence whose width (maximum - minimum) is smaller than that number.
Suppose eps > 0. Let n = find_zeroless(x) and c = min{min(x[n])², max(x[n])²}. Since min(x[n]) ≤ max(x[n]) and 0 ∉ x[n], we have c > 0. Thus eps·c > 0, and there exists a k such that width(x[k]) < eps·c. Let k' = max{k, n} and [a, b]Q = x[k']. Since a, b ∈ x[n] and 0 ∉ x[n], we have a·b ≥ c > 0 and a, b ≠ 0. Therefore
width(/(x[k'])) = 1/a - 1/b = (b - a) / (a·b) = width(x[k']) / (a·b) ≤ width(x[k]) / c < eps.

Thus /x as we have defined it is a real number.
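
Here is a rough Python rendition of the reciprocal just defined (an illustration only; the name find_zeroless follows the text, the rest is made up). The step bound in the search stands in for the proof p that x <> 0, which is what actually guarantees termination.

from fractions import Fraction as Q

def find_zeroless(x, max_steps=1000):
    # first k with 0 not in x[k]; guaranteed to exist when x <> 0
    for k in range(max_steps):
        lo, hi = x(k)
        if hi < 0 or 0 < lo:
            return k
    raise ValueError("no zero-free interval found within the budget")

def recip(x):
    n = find_zeroless(x)
    def r(k):
        lo, hi = x(max(k, n))          # replace early intervals by x[n]
        return 1 / hi, 1 / lo          # /[a, b]Q = [/b, /a]Q
    return r

x = lambda k: (Q(2) - Q(1, k + 1), Q(2) + Q(1, k + 1))   # converges to 2
print(recip(x)(5))                     # bounds 6/13 and 6/11, around 1/2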

 No.9505

Proving equations involving division will require a modified version of the lemma in >>8983.
¥ If for every k ≥ n there exist rational numbers r ∈ x[k] and s ∈ y[k] such that r = s, then x = y.
Suppose such an n exists. Assume to the contrary that x <> y. Then either there is a k such that x[k] < y[k] or a k such that x[k] > y[k]. In either case, let k' = max{k, n}. Since k' ≥ n, there are rational numbers r and s such that r ∈ x[k'], s ∈ y[k'], and r = s. Since k' ≥ k, r ∈ x[k] and s ∈ y[k]. We have either r ≤ max(x[k]) < min(y[k]) ≤ s or s ≤ max(y[k]) < min(x[k]) ≤ r. Both contradict r = s.

We'll also need this:
¥ If x <> 0 as proven by p, k ≥ find_zeroless(x, p), and r ∈ x[max{k, find_zeroless(x, p)}], then /r ∈ (/x †p)[k].
Since 0 ∉ x[max{k, find_zeroless(x, p)}], r ∈ x[max{k, find_zeroless(x, p)}] implies /r ∈ /(x[max{k, find_zeroless(x, p)}]) = (/x †p)[k].

We can now show our definition of /x is compatible with the equality relation we have defined for real numbers.
¥ If x = y and p and q are proofs of x <> 0 and y <> 0 respectively, then /x †p = /y †q.
Let n = max{find_zeroless(x, p), find_zeroless(y, q)}, and suppose we are given a k ≥ n. This means k ≥ find_zeroless(x, p) and k ≥ find_zeroless(y, q). Since x = y, there exists an r such that r ∈ x[k] = x[max{k, find_zeroless(x, p)}] and r ∈ y[k] = y[max{k, find_zeroless(y, q)}]. Therefore /r ∈ (/x †p)[k] and /r ∈ (/y †q)[k]. By the preceding lemma, we conclude /x †p = /y †q.

The following lemma will be helpful in automatic proofs of equations involving division:
¥ If k ≥ find_zeroless(x), then min x[k] <> 0.
This follows from the similar lemma showing 0 ∉ x[k] and the fact that min x[k] ∈ x[k].

All of these equalities can be proven in automatic fashion.
¥ /(/x) = x
¥ Q2R(/r) = /(Q2R(r))
¥ /1 = 1
¥ /(-x) = -(/x)
¥ x · /x = 1
¥ /x · x = 1
¥ /(x · y) = (/x) · (/y)
In all of the above equations, any number whose reciprocal is taken must be apart from zero.

Some of the automatic proofs are a bit different from before, so let's demonstrate with the all-important
¥ x · /x = 1.
Let n = find_zeroless(x), and suppose we are given a k ≥ n. For the left hand side, we have min(x[max{k, n}]) ∈ x[max{k, n}], so /min(x[max{k, n}]) ∈ (/x)[k]. Combine this with min(x[k]) ∈ x[k], and we have min(x[k]) · /min(x[max{k, n}]) ∈ (x · /x)[k]. For the right hand side, we simply have 1Q ∈ 1[k]. It remains to show that these rational numbers are equal. Since x <> 0 and k ≥ find_zeroless(x), we have min(x[k]) <> 0. We can now assert the corresponding rational number theorem, min(x[k]) · / min(x[k]) = 1Q. Since k ≥ n, we can replace k with max{k, n} to get min(x[k]) · /min(x[max{k, n}]) = 1Q. Therefore x · /x = 1.

>>9492
> (M5) If x ∈ F and x ≠ 0 then there exists an element 1/x ∈ F such that x · (1/x) = 1.
Now proven.

 No.9511

More theorems about reciprocals. But first:
>>8984
>>9460
> x > 0 if and only if -x < 0.
> x < 0 if and only if -x > 0.
These can be usefully combined into
¥ x <> 0 if and only if -x <> 0.

Back to reciprocals.

¥ x > 0 if and only if /x > 0
We have x · /x = 1 > 0. Thus if x > 0, /x > 0, and if /x > 0, x > 0.

¥ x < 0 if and only if /x < 0
x < 0 ↔ -x > 0 ↔ /(-x) > 0 ↔ -(/x) > 0 ↔ /x < 0

¥ If x <> 0, then /x <> 0.
Follows from the definition of <> as < or >.

¥ If x > 0 and y > 0, then x < y if and only if /y < /x.
¥ If x < 0 and y < 0, then x < y if and only if /y < /x.
We can prove both of these the same way. If either x > 0 and y > 0 or x < 0 and y < 0, we have xy > 0 and /(xy) = (/x)(/y) > 0.
Then x < y ↔ x · y · /y < y · x · /x ↔ /y < /x.

¥ If x > 0 and y > 0, then x ≤ y if and only if /y ≤ /x.
¥ If x < 0 and y < 0, then x ≤ y if and only if /y ≤ /x.
These follow from the previous because ≤ is defined as not >.

¥ If x, y <> 0, then x <> y if and only if /x <> /y.
If x, y <> 0, then xy <> 0. Then x <> y ↔ x · y · /y <> y · x · /x ↔ /y <> /x ↔ /x <> /y.

¥ If x, y <> 0, then x = y if and only if /x = /y.
Follows from the previous because = is defined as not <>.

¥ If x, y <> 0, then /x < /y if and only if 0 < y < x, y < x < 0, or x < 0 < y.
1. Suppose /x < /y. Either x > 0 or x < 0, and either y > 0 or y < 0.
1a. If x > 0 and y > 0, we have 0 < y < x.
1b. If x > 0 and y < 0, we have /y < 0 < /x, contradicting /x < /y.
1c. If x < 0 and y > 0, we have x < 0 < y.
1d. If x < 0 and y < 0, we have y < x < 0.
2. Suppose 0 < y < x, y < x < 0, or x < 0 < y.
2a. If 0 < y < x, then /x < /y.
2b. If y < x < 0, then /x < /y.
2c. If x < 0 < y, then /x < 0 < /y.

 No.9512

File:609e5d497b09acd668d55da25d….jpg (366.09 KB,708x1000)

What would be a good way to make these numbers easily available for people here to play around with them? Things like doing addition / subtraction / multiplication / division and eventually other more complicated operations to arbitrarily many decimal points. (I'll need to do some work on converting these things to decimals first.)

I was thinking about using the program extraction feature to export the programs to some other language, then make a little website with a REPL on it. (For those unfamiliar, REPL means read-eval-print-loop; you type the thing you want to evaluate in some programming language and it prints the result.)

I could also do something similar without exporting from Coq by using this thing:
https://coq.vercel.app/
I'd probably have to include some examples on the page.

Another option is to use the program extraction feature to make the objects available in [insert your favorite language here], so that interested people could download something and play with it on their machine.

Are there any particular programming languages lots of people here are familiar with that would be worth targeting?

 No.9513

File:__cirno_touhou_drawn_by_yu….jpg (183.94 KB,400x1150)

Division is defined as multiplying by the reciprocal:
¥ x/y †p = x·(/y †p)
(The †p indicates a proof of y <> 0).

Since multiplication and the reciprocal are compatible with equality, so is division:
¥ If x1 = x2 and y1 = y2, and p1 and p2 prove y1 <> 0 and y2 <> 0 respectively, then x1 / y1 †p1 = x2 / y2 †p2.
Suppose x1 = x2 and y1 = y2. Then /y1 †p1 = /y2 †p2. Therefore x1 / y1 †p1 = x1 · (/y1 †p1) = x2 · (/y2 †p2) = x2 / y2 †p2.

A quick lemma to help with automated proofs:
¥ If r ∈ x[k] and s ∈ y[max{k, find_zeroless(y, p)}], then r/s ∈ (x/y †p)[k].
If s ∈ y[max{k, find_zeroless(y, p)}], we have /s ∈ (/y †p)[k]. Given this and r ∈ x[k], we have r/s = r·(/s) ∈ (x·(/y †p))[k] = (x/y †p)[k].

Division is the inverse of multiplication. A few ways of stating this:
¥ If y <> 0, (x · y) / y = x.
¥ If y <> 0, (x / y) · y = x.
These can be proven automatically.

Another way to say this:
¥ If y <> 0, x / y = z if and only if x = z · y.
1. Suppose x / y = z. Then x = (x / y) · y = z · y.
2. Suppose x = z · y. Then x / y = (z · y) / y = z.

You can divide both sides of an inequality by a positive number. Dividing both sides by a negative number reverses the direction of the inequality.
¥ If z > 0, then x < y if and only if x / z < y / z.
¥ If z > 0, then x ≤ y if and only if x / z ≤ y / z.
¥ If z < 0, then x < y if and only if x / z > y / z.
¥ If z < 0, then x ≤ y if and only if x / z ≥ y / z.
These follow trivially from the related theorems about multiplication, and the fact that the reciprocal of a positive number is positive, and the reciprocal of a negative number is negative.

Next is completeness.

 No.9514

File:1398634481174.jpg (400.15 KB,1478x2095)

>>9512
Hope you don't mind waiting a while, but I think I need a bit of time before I can digest and make a proper response to anything math.

 No.9515

File:[MoyaiSubs] Mewkledreamy M….jpg (310.57 KB,1920x1080)

>>9512
>What would be a good way to make these numbers easily available for people here to play around with them?
I really don't know anything about programming, but an interactive website like you mentioned is probably the best way. This stuff is way over my head and I'm a visual/interactive learning person.
Math... we must destroy it.

 No.9519

>>9514
No rush, it will probably be a bit before I work on this. I'm going to start proving stuff about completeness next, and I'll probably write the programs to convert to and from decimal representations as part of that. I think these theorems are going to be more interesting than proving elementary properties of the basic arithmetic operations.

 No.9526

File:1531025680408.png (1.92 MB,1898x833)

There are several versions of the completeness axiom for the real numbers. Probably the most commonly used one is the supremum property, also known as the least-upper-bound property or Dedekind completeness. I talked about this axiom in >>9492, and it's the only one on the list I haven't proven yet.

There's a fun paper called "Real Analysis in Reverse" which examines lots of different properties of the real numbers, and talks about which ones, taken together with the ordered field axioms (every axiom in >>9492 except the last one, or in other words, most of what you learned in high school algebra), imply the supremum property, and which ones don't.
https://arxiv.org/abs/1204.4483

The author suggests treating the paper as a series of puzzles:
>It’s been fun for me to write this article, and I have a concrete suggestion for how the reader may have the same kind of fun. Start by reading the second section and trying to decide on your own which of the properties are equivalent to completeness and which aren’t. If you’re stumped in some cases, and suspect but cannot prove that some particular properties aren’t equivalent to completeness, read the third section to see if any of the ordered fields discussed there provide a way to see the inequivalence. And if you’re still stumped, read the fourth section. You can treat the article as a collection of math puzzles, some (but not all) of which can be profitably contemplated in your head.

You may want to try them out for yourself. Here's the list of properties from the paper (you may want to read them in the paper itself, where the typesetting will be better):
>(1) The Dedekind Completeness Property: Suppose S is a nonempty subset of R that is bounded above. Then there exists a number c that is an upper bound of S such that every upper bound of S is greater than or equal to c.
>(2) The Archimedean Property: For every x ∈ R there exists n ∈ N with n > x. Equivalently, for every x ∈ R with x > 0 there exists n ∈ N with 1/n < x.
>(3) The Cut Property: Suppose A and B are nonempty disjoint subsets of R whose union is all of R, such that every element of A is less than every element of B. Then there exists a cutpoint c ∈ R such that every x < c is in A and every x > c is in B . (Or, if you prefer: Every x ∈ A is ≤ c, and every x ∈ B is ≥ c. It’s easy to check that the two versions are equivalent.) Since this property may be unfamiliar, we remark that the Cut Property follows immediately from Dedekind completeness (take c to be the least upper bound of A).
>(4) Topological Connectedness: Say S ⊆ R is open if for every x in S there exists ϵ > 0 so that every y with |y−x| < ϵ is also in S. Then there is no way to express R as a union of two disjoint nonempty open sets. That is, if R = A ∪ B with A, B nonempty and open, then A ∩ B is nonempty.
>(5) The Intermediate Value Property: If f is a continuous function from [a, b] to R, with f(a) < 0 and f(b) > 0, then there exists c in (a, b) with f(c) = 0.
>(6) The Bounded Value Property: If f is a continuous function from [a, b] to R, there exists B in R with f(x) ≤ B for all x in [a, b].
>(7) The Extreme Value Property: If f is a continuous function from [a, b] to R, there exists c in [a, b] with f(x) ≤ f(c) for all x in [a, b].
>(8) The Mean Value Property: Suppose f : [a, b] → R is continuous on [a, b] and differentiable on (a, b). Then there exists c in (a, b) such that f′(c) = (f(b)−f(a))/(b−a).
>(9) The Constant Value Property: Suppose f : [a, b] → R is continuous on [a, b] and differentiable on (a, b), with f′(x) = 0 for all x in (a, b). Then f(x) is constant on [a, b].
>(10) The Convergence of Bounded Monotone Sequences: Every monotone increasing (or decreasing) sequence in R that is bounded converges to some limit.
>(11) The Convergence of Cauchy Sequences: Every Cauchy sequence in R is convergent.
>(12) The Fixed Point Property for Closed Bounded Intervals: Suppose f is a continuous map from [a, b] ⊂ R to itself. Then there exists x in [a, b] such that f(x) = x.
>(13) The Contraction Map Property: Suppose f is a map from R to itself such that for some constant c < 1, |f(x)−f(y)| ≤ c|x−y| for all x, y. Then there exists x in R such that f(x) = x.
>(14) The Alternating Series Test: If a₁ ≥ a₂ ≥ a₃ ≥ ... and a_n → 0, then ∑_{n=1}^∞ (−1)ⁿ a_n converges.
>(15) The Absolute Convergence Property: If ∑_{n=1}^∞ |a_n| converges in R, then ∑_{n=1}^∞ a_n converges in R.
>(16) The Ratio Test: If | a_{n+1} / a_n | → L in R as n → ∞, with L < 1, then ∑_{n=1}^∞ a_n converges in R.
>(17) The Shrinking Interval Property: Suppose I₁ ⊇ I₂ ⊇ ... are bounded closed intervals in R with lengths decreasing to 0. Then the intersection of the I_n’s is nonempty.
>(18) The Nested Interval Property: Suppose I₁ ⊇ I₂ ⊇ ... are bounded closed intervals in R. Then the intersection of the I_n’s is nonempty.

Some of these imply the supremum property, and some don't. And sometimes two of the properties taken individually don't imply the supremum property, but do imply it when taken together.

The particular properties I'm planning to talk about over the next few posts are (1), (2), (3), (11), and (17).

 No.9532

File:anim-b.gif (402.21 KB,160x212)

>>9526
Let's start by proving the Archimedean property.

But first we'll need to be able to convert between various number types. Let's follow the pattern of Q2R in denoting the conversion functions. These already exist in the Coq standard library (although not by the name we're giving them here):
¥ N2Z converts natural numbers to integers
¥ Z2Q converts integers to rational numbers
¥ Z2N converts integers to natural numbers (The result for negative numbers is unspecified. As implemented, it's 0, but we should avoid relying on that.)
We can easily define the additional functions N2Q, N2R, and Z2R by combining these and our own Q2R.

With that done, we now prove the Archimedean property:
¥ For every real number x, there is a natural number n such that N2R(n) > x.
This proof is pretty simple. Because of the way we represent real numbers, we already have a series of rational upper bounds for x. So we can just take the first upper bound and use the first natural number greater than it. To be specific, let
n = Z2N(max{Qfloor(max(x[0])) + 1, 0})
where Qfloor is a function (from the standard library) which for a given rational number returns the greatest integer less than or equal to it. Note that max{Qfloor(max(x[0])) + 1, 0} ≥ 0. Then we have
N2R(n) = Z2R(N2Z(Z2N(max{Qfloor(max(x[0])) + 1, 0}))) = Z2R(max{Qfloor(max(x[0])) + 1, 0}) ≥ Z2R(Qfloor(max(x[0])) + 1) > Q2R(max(x[0])) ≥ x.

This has the important corollary:
¥ For every real number x, if x > 0, there is a natural number n such that /N2R(n) < x.
(Recall that we use /x to denote the reciprocal of x.)
Since x > 0, /x is defined and is also > 0. By the previous theorem, there is an n such that N2R(n) > /x. Since N2R(n) > /x > 0, we have /N2R(n) < //x = x.
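
Here is the same construction as a Python sketch (not the extracted program; exact rationals, made-up names):

from fractions import Fraction as Q
from math import floor

def archimedean_witness(x):
    # a natural number n with n > x, read off from x's first upper bound
    hi = x(0)[1]                       # max(x[0]), a rational upper bound on x
    return max(floor(hi) + 1, 0)       # mirrors Z2N(max{Qfloor(max(x[0])) + 1, 0})

x = lambda k: (Q(7, 2) - Q(1, k + 1), Q(7, 2) + Q(1, k + 1))   # converges to 3.5
print(archimedean_witness(x))          # 5, and indeed 5 > 3.5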

 No.9533

File:293185cb9050e4822079318cc7….jpg (101.12 KB,800x600)

>>9526
Next up:
>(17) The Shrinking Interval Property: Suppose I₁ ⊇ I₂ ⊇ ... are bounded closed intervals in R with lengths decreasing to 0. Then the intersection of the I_n’s is nonempty.

This is reminiscent of how we constructed our real numbers in the first place. The main difference is that in our construction of the real numbers, the endpoints of the intervals had to be rational numbers, but in this theorem, the endpoints are themselves real numbers.

Since we're trying to be constructive whenever possible, rather than just proving the existence of a number in the intersection, we'd like to have an algorithm to calculate it. Calculating the number, as usual, means computing a sequence of nested intervals with rational endpoints that converge to the right result, based on the sequences of intervals that represent the input numbers. The efficiency of the algorithm matters. We're going to be using it later on to compute infinite sums. I've personally gone through a few designs for this algorithm in my head, and I'll post my current design once it's implemented and proven.

You might be able to come up with something better than what I've got. How would you do it?

 No.9534

>>9533
Design 3 turned out not to work. I think design 4 will work though.

 No.9535

>>9533
Our input is a sequence of pairs of real numbers, representing upper and lower bounds. And we're representing the real numbers themselves as sequences of pairs of rational numbers, representing upper and lower bounds. So we have a sequence of pairs of sequences of pairs of rational numbers. And our output needs to be a sequence of pairs of rational numbers.

design 1
My first idea here was the simple idea of making a sequence where the kth output lower bound is the kth rational lower bound from the kth real lower bound, and the kth output upper bound is the kth rational upper bound from the kth real upper bound.

The problem is that while this gives us valid upper and lower bounds for the number, they don't necessarily converge. To see why, consider the sequence of real intervals

[0A, 0A], [0B, 0B], [0C, 0C], ...

where the 0A, 0B, 0C, ... are all different representations of the real number 0, with

0A = [-1/1, 1/1]Q, [-1/2, 1/2]Q, [-1/3, 1/3]Q, [-1/4, 1/4]Q, ...
0B = [-1/1, 1/1]Q, [-1/1, 1/1]Q, [-1/2, 1/2]Q, [-1/3, 1/3]Q, ...
0C = [-1/1, 1/1]Q, [-1/1, 1/1]Q, [-1/1, 1/1]Q, [-1/2, 1/2]Q, ...

Even though the interval sequences defining the real numbers converge to zero, and the sequence of real numbers converges to zero, if we merge them together with the simple recipe, we get the sequence

[-1/1, 1/1]Q, [-1/1, 1/1]Q, [-1/1, 1/1]Q, ...

which doesn't converge at all! So we'll have to do something a bit smarter.

 No.9540

>>9533
>>9535
design 2
An obvious modification to design 1 which ought to make it converge: Instead of just using the kth rational lower bound from the kth real lower bound, look at all the kth rational lower bounds from real lower bounds 0 through k and take the best (the largest). Similarly look at all the kth rational upper bounds from real upper bounds 0 through k and take the best (the smallest).

For the example from >>9535 that broke convergence for design 1, we get
¥ [-1/1, 1/1]Q, [-1/2, 1/2]Q, [-1/3, 1/3]Q, ...
converging to zero as desired.

It's a simple solution, but it feels like it might require us to do some needless work. Let's look at how it would perform in a concrete example. We'll try using nested real intervals to calculate tan⁻¹(√(1/2)).

One way to calculate √(1/2) is the simple method in >>8908. We can work out one decimal place at a time by squaring the candidate number and seeing if it comes out lower than 1/2. We get the following sequence:
¥ [0.7, 0.8]Q, [0.70, 0.71]Q, [0.707, 0.708]Q, [0.7071, 0.7072]Q, [0.70710, 0.70711]Q, ...

The arctangent function has the following power series, which converges when |x| ≤ 1:
¥ tan⁻¹(x) = x - x³/3 + x⁵/5 - x⁷/7 + x⁹/9 - x¹¹/11 + ...

A nice thing about this series is that it's an alternating series, meaning the terms we're adding are alternately positive and negative. This combined with the fact that the absolute value of the terms is decreasing means that each partial sum is a new bound on the number. So we immediately get the following nested interval representation for the case 0 ≤ x ≤ 1 (when -1 ≤ x ≤ 0, the intervals need to be in the other direction):
¥ tan⁻¹(x) = the number in the intersection of
¥ [x - x³/3, x],
¥ [x - x³/3, x - x³/3 + x⁵/5],
¥ [x - x³/3 + x⁵/5 - x⁷/7, x - x³/3 + x⁵/5],
¥ [x - x³/3 + x⁵/5 - x⁷/7, x - x³/3 + x⁵/5 - x⁷/7 + x⁹/9],
¥ [x - x³/3 + x⁵/5 - x⁷/7 + x⁹/9 - x¹¹/11, x - x³/3 + x⁵/5 - x⁷/7 + x⁹/9],
¥ ...

Note that to handle the case of irrational x, these need to be intervals with real number endpoints. So we'll have to use the algorithm we're writing to find it.

Let's start by working out the real numbers involved. With our existing algorithms for arithmetic operations we would get
¥ x = [0.7, 0.8]Q, [0.70, 0.71]Q, [0.707, 0.708]Q, [0.7071, 0.7072]Q, [0.70710, 0.70711]Q, ...
¥ x - x³/3 = [0.7 - 0.8³/3, 0.8 - 0.7³/3]Q, [0.70 - 0.71³/3, 0.71 - 0.70³/3]Q, [0.707 - 0.708³/3, 0.708 - 0.707³/3]Q, ...
¥ x - x³/3 + x⁵/5 = [0.7 - 0.8³/3 + 0.7⁵/5, 0.8 - 0.7³/3 + 0.8⁵/5]Q, [0.70 - 0.71³/3 + 0.70⁵/5, 0.71 - 0.70³/3 + 0.71⁵/5]Q, [0.707 - 0.708³/3 + 0.707⁵/5, 0.708 - 0.707³/3 + 0.708⁵/5]Q, ...
¥ x - x³/3 + x⁵/5 - x⁷/7 = [0.7 - 0.8³/3 + 0.7⁵/5 - 0.8⁷/7, 0.8 - 0.7³/3 + 0.8⁵/5 - 0.7⁷/7]Q, [0.70 - 0.71³/3 + 0.70⁵/5 - 0.71⁷/7, 0.71 - 0.70³/3 + 0.71⁵/5 - 0.70⁷/7]Q, [0.707 - 0.708³/3 + 0.707⁵/5 - 0.708⁷/7, 0.708 - 0.707³/3 + 0.708⁵/5 - 0.707⁷/7]Q, ...

Merging the bounds together with the design 2 algorithm, we get
¥ tan⁻¹(x) =
¥ [max{0.7 - 0.8³/3}, min{0.8}]Q,
¥ [max{0.70 - 0.71³/3, 0.70 - 0.71³/3}, min{0.71, 0.71 - 0.70³/3 + 0.71⁵/5}]Q,
¥ [max{0.707 - 0.708³/3, 0.707 - 0.708³/3, 0.707 - 0.708³/3 + 0.707⁵/5 - 0.708⁷/7}, min{0.708, 0.708 - 0.707³/3 + 0.708⁵/5, 0.708 - 0.707³/3 + 0.708⁵/5}]Q,
¥ ...

It's honestly not as bad as I thought it would be. On the one hand, we're clearly doing a lot of unnecessary comparisons. It's obvious in this case that the best bound out of each set that we're taking the max/min of will be the last bound. But I thought computing the extraneous bounds would require a lot of extra work, when in fact they can be computed with work we would have to do anyway. It's enough to make me consider implementing this simple algorithm instead of design 4. But before I do that, I'll write out design 4 so I can see if it's really that much more complex when I write out all the details.
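
For concreteness, here is design 2 as a Python sketch (illustrative only, with invented names; I is taken to be a function giving the m-th pair of real endpoints, and each real endpoint is a function from an index to its rational bounds):

from fractions import Fraction as Q

def nested_intersection(I):
    # n-th output: best lower bound among the first n+1 real lower bounds,
    # best upper bound among the first n+1 real upper bounds
    def z(n):
        lo = max(I(m)[0](n)[0] for m in range(n + 1))
        hi = min(I(m)[1](n)[1] for m in range(n + 1))
        return lo, hi
    return z

# The example from >>9535: every real interval is [0m, 0m], where 0m is a
# representation of zero whose first m+1 rational intervals are all [-1, 1].
# Design 1 stays stuck at [-1, 1]; design 2 still shrinks toward zero.
def zero_rep(m):
    return lambda k: (Q(-1, max(k - m + 1, 1)), Q(1, max(k - m + 1, 1)))

I = lambda m: (zero_rep(m), zero_rep(m))
print(nested_intersection(I)(0), nested_intersection(I)(5))   # [-1, 1] and [-1/6, 1/6]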

 No.9545

File:real number pair m is no l….png (35.94 KB,870x666)

design 3
Let m index the pairs of real number bounds, and let n index the pairs of rational number bounds that define each real number. At a particular m and n, you have four rational numbers. There are two real numbers, an upper bound and a lower bound, and at stage n each of those real numbers has a rational upper and lower bound.

The motivation: Notice that if we fix m and increase n, we get increasingly better bounds, but only up to a point. Unless the two real numbers are actually equal, eventually the lower bound on the upper bound becomes larger than the upper bound on the lower bound. This creates a limit on how close the upper bound on the upper bound and the lower bound on the lower bound can get to each other. If the next pair of real numbers (real number pair m+1) has upper bounds on the upper bound and lower bounds on the lower bound that are inside this gap, then the real number m is guaranteed to be no longer useful for computing better approximations to our target value. I've drawn a crude picture to try to explain what I mean.

The idea of design 3 was to increment n at each step, but only increment m if you encounter a situation like the one I just described. And at each such step, use the upper bound on the upper bound and the lower bound on the lower bound as rational bounds on the target value.

But I hadn't entirely thought this idea through, and after further thought I realized that the situation triggering an advance in m wasn't guaranteed to happen, even if real number pair m exhibited a gap preventing further progress. After some further thinking, I came to design 4, which I think will work.

design 4
This is pretty similar to the previous except with a different condition for triggering m to increment. This time, the trigger is when for real number pair m the gap between the lower bound on the upper bound and the upper bound on the lower bound is at least half the distance between the upper bound on the upper bound and lower bound on the lower bound. As before, we use the upper bound on the upper bound and the lower bound on the lower bound as bounds on the target value. A complication is that these bounds are not guaranteed to be nested, but that can be fixed by taking the minimum of all the upper bounds found so far, and the maximum of all the lower bounds found so far.

I think that since this version of the real numbers has been focused on simplicity, and the efficiency problems weren't as bad as I thought, I'm going to go with design 2 for now. I might look at design 4 again when I try for a more efficient version.

 No.9561

File:0153ab75bfc07880ffbae5b0b9….jpg (561.66 KB,919x1286)

>>9462
I had another idea regarding signed digits. The digits could be written as something like

↓1, ↓2, ↓3, ↓4, ↓5, ↓6, ↓7, ↓8, ↓9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9

with the ↓ representing -10. So for example 2.65000↓37 would be equal to 2.6499937.

You could also write it as a line through all the previous digits until you encounter a nonzero one, like 2.6500037. Since there's no ↓0 digit, there's no ambiguity here; you can tell the strikethrough is coming from the ↓3. This wouldn't work if the negative digit was the first nonzero digit, but in that case, you could just use a negative sign in front of the whole number and adjust how you write digits accordingly.

The nice thing about this is that it makes it easy to convert the result into standard form.
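
A quick Python check of this notation (a sketch; the parser and its names are made up): treat "↓d" as the digit d - 10 and evaluate with exact rationals.

from fractions import Fraction as Q

def parse_signed_decimal(s):
    int_part, frac = s.split(".")
    value = Q(int(int_part))
    place, negate = 0, False
    for ch in frac:
        if ch == "↓":                  # the next digit d stands for d - 10
            negate = True
            continue
        place += 1
        d = int(ch) - 10 if negate else int(ch)
        value += Q(d, 10 ** place)
        negate = False
    return value

print(parse_signed_decimal("2.65000↓37"))   # 26499937/10000000
print(parse_signed_decimal("2.6499937"))    # the same value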

 No.9592

I'm thinking maybe I should start working on conversion to / from decimals earlier than I planned so I can have something for people to play around with as I talked about in >>9512.

In other news, some small bits of progress as I work on the proof of the nested interval theorem:

Proved some more missed theorems that ought to have been proven earlier.
>>8977
>>9513
Proofs are by the usual automatic method.
¥ Q2R(r - s) = Q2R(r) - Q2R(s)
¥ If s <> 0 and Q2R(s) <> 0 by p, then Q2R (r / s) = Q2R(r) / Q2R(s) † p.

>>8920
A useful lemma about proving x < y:
¥ For any k1 and k2, if max(x[k1]) < min(y[k2]), then x < y.
Let k be the larger of k1 and k2. Then max(x[k]) ≤ max(x[k1]) < min(y[k2]) ≤ min(y[k]), proving x < y.

Also some progress on proof automation. Coq has some nice tactics called ring and field that can be used to automate reasoning about basic algebra, like proving (a - b) + (b - c) = a - c. To use them, you first have to prove the ring / field axioms (already done, although I had to reverse the direction of some equations), then declare the structure a ring / field to Coq. I had to use the less powerful ring option (which can't do stuff with division) for the technical reason that Coq's field tactic expects division to take only two numbers as arguments, whereas we have to feed in an additional proof that the denominator is nonzero.

 No.9594

>>6896
>If the number happens to be exactly one, it might even cause our program to get into an infinite loop.
But why though ?
Can you give a little explanation on what that is ?

 No.9595

>>9594
>Can you give a little explanation on what that is ?
Can you give a little explanation on why that is ?

 No.9596

>>9594
Because the number may be exactly 1 but we don't know if it's exactly 1. In order to output that first "1" digit instead of a "0" we need to compute the number to sufficient precision to know that it's not less than 1. But if the number happens to be exactly 1, infinite precision is required. We can't output "0." either, because the number could be slightly larger than 1, and in order to know this isn't the case, we need infinite precision.

To give a more concrete example, think about what happens when we first compute √2 and then square it. Computing √2 means being able to compute ever-better approximations to √2. With more computation, we can know that the value is between 1.4 and 1.5, between 1.41 and 1.42, between 1.414 and 1.415, and so on. More computation means better precision. What does that tell us about the square of this number? First we know it's between 1.96 and 2.25, then we know it's between 1.9881 and 2.0164, then we know it's between 1.999396 and 2.002225. But we never have it precisely enough to know if it starts with 1.9 or with 2.0.
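
The same example as a Python sketch (exact rationals, illustrative names): squaring ever-better bounds on √2 gives intervals that always straddle 2, so no finite amount of computation settles whether the square starts with 1.9 or 2.0.

from fractions import Fraction as Q
from math import isqrt

def sqrt2_bounds(k):
    scale = 10 ** k
    lo = isqrt(2 * scale * scale)      # floor(sqrt(2) * 10^k)
    return Q(lo, scale), Q(lo + 1, scale)

for k in range(1, 4):
    lo, hi = sqrt2_bounds(k)
    print(float(lo * lo), float(hi * hi))   # 1.96..2.25, 1.9881..2.0164, 1.999396..2.002225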

There might be a more rigorous proof using what I've learned are called "fugitive sequences." I've mentioned these before in >>8947 :
>Why do we expect not to be able to prove trichotomy constructively? Because that would mean that we have a way to compute for any given pair of real numbers whether or not they are equal. Confirming that two numbers are equal would in general require us to compute both numbers to infinite precision. To make the impossibility more clear, we could define a real number in decimal form as follows: It starts with 0.999 and the 9's go on until a given Turing machine halts. Is this number equal to 1? We can't know without solving the halting problem.
That said, I don't know the proof. Maybe solving >>9012 would help.

 No.9597

Here's an interesting paper I found about various ways people have found to construct the real numbers:
https://arxiv.org/pdf/1506.03467.pdf

The implementation I'm working on now is essentially the same as Bachmann's construction.

>>6896
>Cauchy sequences in their full generality feel a bit too complicated for my taste. It's possible the complexity at the beginning makes the implementation of stuff later on easier, though. But I'm going to try something else first.

I should say that after trying various things I appreciate the Cauchy sequence construction (actually due to Cantor, it turns out) a lot more. In particular the fact that you can define arithmetic operations simply by doing the operations termwise on the input sequences is really nice.

Making the modulus of convergence explicit, which is necessary to actually compute the number, also reduces the apparent complexity of the Cauchy sequence construction a bit. To explain this a bit more: If I just look at a finite number of terms in a Cauchy sequence, I have no idea what the number is. You need an infinite number of terms to know what limit the sequence converges to, and you can't compute that. What you have to do instead is go to the proof that the sequence is Cauchy. If I have some precision ε > 0 to which I want to know the number, the proof (assuming it's constructive) gives me an N beyond which the terms in the sequence have to be less than ε apart from each other, and it can be proven that at that point in the sequence, you are no more than ε away from the limit. A modulus of convergence is an explicit function that takes ε and gives you the appropriate N. Suppose we call this function f. Instead of saying
> for every ε > 0 there exists an N such that for m,n > N, |x_m - x_n| < ε
we can just say
> for every ε > 0 and m,n > f(ε), |x_m - x_n| < ε
which, although it says the same thing, is potentially easier to wrap your head around because of the smaller number of nested quantifiers.
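Here's a quick illustrative sketch (again, not the representation I'm actually using) of a Cauchy sequence for 1/3 together with an explicit modulus of convergence:

from fractions import Fraction

# A real as a Cauchy sequence of rationals plus an explicit modulus of
# convergence. The choice of 1/3 and decimal truncations is an assumption
# made purely for illustration.

def x(n):
    # n-digit decimal truncation of 1/3
    return Fraction(10**n // 3, 10**n)

def modulus(eps):
    # returns N such that for all m, n > N: |x(m) - x(n)| < eps
    N = 0
    while Fraction(1, 10**N) >= eps:
        N += 1
    return N

eps = Fraction(1, 1000)
N = modulus(eps)
assert abs(x(N + 1) - x(N + 5)) < eps      # terms past N are within eps of each other
assert abs(Fraction(1, 3) - x(N + 1)) < eps  # and within eps of the limit

The modulus does the work that would otherwise be hidden inside the "there exists an N" of the proof, which is exactly what makes the number usable for computation.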

 No.9598

>>9540
Time for the formal proof that design 2 works.

First some new notation. We've been denoting the minimum and maximum of a closed interval [a, b] as min([a, b]) = a and max([a, b]) = b. Because we'll be working with sequences of intervals whose endpoints are also sequences of intervals, a notation that can be chained more cleanly will be helpful. So instead we'll use [a, b].min = a and [a, b].max = b.

We define the intersection of nested shrinking intervals of real numbers as
¥ nested_RI_int(I, p, q)[n] = [max{I[m].min[n].min | m ≤ n}, min{I[m].max[n].max | m ≤ n}]Q
where I is a sequence of bounded closed intervals with two real numbers as endpoints,
where p is a proof that the intervals are nested, meaning
¥ k2 ≥ k1 implies I[k2].min, I[k2].max ∈ I[k1],
and where q is a proof that the widths of the intervals converge to zero, meaning
¥ for every real number eps > 0 there is a k such that I[k].max - I[k].min < eps.

(Strictly speaking the intervals converging to zero would be the stronger statement that
for every real number eps > 0 there is an n such that k ≥ n implies I[k].max - I[k].min < eps.
In other words, the intervals get sufficiently small and stay that way forever. This stronger statement can be deduced from p and q together since p prevents intervals from having widths larger than previous intervals. So we won't need it as a condition.)

max{f(m) | m ≤ n} is computed recursively:
¥ max{f(m) | m ≤ 0} = f(0)
¥ max{f(m) | m ≤ n+1} = max{max{f(m) | m ≤ n}, f(n+1)}
as is min{f(m) | m ≤ n}:
¥ min{f(m) | m ≤ 0} = f(0)
¥ min{f(m) | m ≤ n+1} = min{min{f(m) | m ≤ n}, f(n+1)}
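To make the definition concrete, here's a small runnable sketch of it, including the recursive min/max just defined. The representation is assumed purely for illustration (a real as a function from n to a rational interval, and I as a function from m to a pair of such reals); the proof arguments p and q are left out:

from fractions import Fraction
from collections import namedtuple

# Assumed toy representation: a real number is a function n -> Interval of
# Fractions (nested, widths -> 0); a real interval I[m] is an Interval whose
# .min and .max fields are such reals.
Interval = namedtuple("Interval", ["min", "max"])

def running_max(f, n):
    # max{f(m) | m <= n}, computed recursively as defined above
    return f(0) if n == 0 else max(running_max(f, n - 1), f(n))

def running_min(f, n):
    # min{f(m) | m <= n}
    return f(0) if n == 0 else min(running_min(f, n - 1), f(n))

def nested_RI_int(I):
    # nth interval: [max{I[m].min[n].min | m <= n}, min{I[m].max[n].max | m <= n}]
    def x(n):
        lo = running_max(lambda m: I(m).min(n).min, n)
        hi = running_min(lambda m: I(m).max(n).max, n)
        return Interval(lo, hi)
    return x

# Toy usage: intersect the real intervals [-1/2^m, 1/2^m], which shrink to 0.
def rat(q):
    return lambda n: Interval(q, q)   # a rational as a (constant-interval) real

def I(m):
    return Interval(rat(-Fraction(1, 2**m)), rat(Fraction(1, 2**m)))

x = nested_RI_int(I)
print(x(5))                        # Interval(min=Fraction(-1, 32), max=Fraction(1, 32))
assert x(5).min <= 0 <= x(5).max   # 0 lies in the intersection, as expected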

To say that our function returns real numbers we need proofs of
¥ nestedness and
¥ convergence.

We also need to show
¥ compatibility with equality (changing one of the inputs to a different representation of the same real number shouldn't change the output) and
¥ proof the number constructed lies within all the intervals.

And we should also show
¥ uniqueness: there is only one real number in the intersection.

But first we need some lemmas about the min/max of a finite set of rationals as defined recursively above.

¥ There is a k ≤ n such that min{f(m) | m ≤ n} = f(k).
Proceed by induction. The base case f(0) = f(0) is trivial. Assume for some n that there is a k ≤ n such that min{f(m) | m ≤ n} = f(k). Then either min{f(m) | m ≤ n+1} = min{f(m) | m ≤ n} = f(k) or min{f(m) | m ≤ n+1} = f(n+1).

¥ If k ≤ n, then min{f(m) | m ≤ n} ≤ f(k).
Again proceed by induction. In the base case, k ≤ 0 implies k = 0, so we only need to show f(0) ≤ f(0), which is trivial. Assume for some n that k ≤ n implies min{f(m) | m ≤ n} ≤ f(k). Given k ≤ n+1, we either have k ≤ n or k = n+1. If k ≤ n, then min{f(m) | m ≤ n+1} = min{min{f(m) | m ≤ n}, f(n+1)} ≤ min{f(m) | m ≤ n} ≤ f(k). If k = n+1, then min{f(m) | m ≤ n+1} = min{min{f(m) | m ≤ n}, f(n+1)} ≤ f(n+1) = f(k).

By similar proofs, we have:
¥ There is a k ≤ n such that max{f(m) | m ≤ n} = f(k).
¥ If k ≤ n, then max{f(m) | m ≤ n} ≥ f(k).
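As a quick sanity check (no substitute for the induction proofs, and using the same toy running min/max as in the earlier sketch), the recursively computed min and max are attained at some k ≤ n and bound every f(k) with k ≤ n:

import random

# Numerical spot-check of the four lemmas above; illustrative only.

def running_max(f, n):
    return f(0) if n == 0 else max(running_max(f, n - 1), f(n))

def running_min(f, n):
    return f(0) if n == 0 else min(running_min(f, n - 1), f(n))

for _ in range(1000):
    values = [random.randint(-50, 50) for _ in range(10)]
    f = lambda m: values[m]
    n = len(values) - 1
    lo, hi = running_min(f, n), running_max(f, n)
    # attained at some k <= n
    assert any(lo == f(k) for k in range(n + 1)) and any(hi == f(k) for k in range(n + 1))
    # bounds f(k) for every k <= n
    assert all(lo <= f(k) <= hi for k in range(n + 1))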

nestedness

We need to show
¥ whenever n2 ≥ n1,
¥ max{I[m].min[n1].min | m ≤ n1} ≤ max{I[m].min[n2].min | m ≤ n2} ≤ min{I[m].max[n1].max | m ≤ n1} and
¥ max{I[m].min[n1].min | m ≤ n1} ≤ min{I[m].max[n2].max | m ≤ n2} ≤ min{I[m].max[n1].max | m ≤ n1}.

¥ max{I[m].min[n1].min | m ≤ n1} ≤ max{I[m].min[n2].min | m ≤ n2}
There is a k ≤ n1 such that max{I[m].min[n1].min | m ≤ n1} = I[k].min[n1].min. Since n2 ≥ n1 and the intervals defining I[k].min are nested, we have I[k].min[n1].min ≤ I[k].min[n2].min. Since k ≤ n1 ≤ n2, I[k].min[n2].min ≤ max{I[m].min[n2].min | m ≤ n2}, completing the proof.

¥ min{I[m].max[n2].max | m ≤ n2} ≤ min{I[m].max[n1].max | m ≤ n1}
Similar to the previous. There is a k ≤ n1 such that min{I[m].max[n1].max | m ≤ n1} = I[k].max[n1].max. Since n2 ≥ n1 and the intervals defining I[k].max are nested, we have I[k].max[n1].max ≥ I[k].max[n2].max. Since k ≤ n1 ≤ n2, I[k].max[n2].max ≥ min{I[m].max[n2].max | m ≤ n2}, completing the proof.

Now we need to do the parts
¥ max{I[m].min[n2].min | m ≤ n2} ≤ min{I[m].max[n1].max | m ≤ n1}
¥ max{I[m].min[n1].min | m ≤ n1} ≤ min{I[m].max[n2].max | m ≤ n2}.

From the fact that the intervals I are nested we can show
¥ I[m1].min ≤ I[m2].max for any m1, m2.
Either m1 ≥ m2, in which case I[m1].min ∈ I[m2], or m2 ≥ m1, in which case I[m2].max ∈ I[m1].

To prove p ≤ q for rational p and q, it is sufficient to disprove p > q.
1. Suppose max{I[m].min[n2].min | m ≤ n2} > min{I[m].max[n1].max | m ≤ n1}. Then there are j, k such that I[j].min[n2].min > I[k].max[n1].max, showing I[j].min > I[k].max. This contradicts I[j].min ≤ I[k].max.
2. Suppose max{I[m].min[n1].min | m ≤ n1} > min{I[m].max[n2].max | m ≤ n2}. Then there are j, k such that I[j].min[n1].min > I[k].max[n2].max, showing I[j].min > I[k].max. Again, this contradicts I[j].min ≤ I[k].max.

 No.9600

>>9598
convergence

We need to show
¥ for every rational number eps > 0 there is an n such that
¥ min{I[m].max[n].max | m ≤ n} - max{I[m].min[n].min | m ≤ n} < eps.

Since eps/3 > 0, there is an k such that
width(I[k]) = I[k].max - I[k].min < eps/3
and therefore
I[k].max[n].min - I[k].min[n].max ≤ I[k].max - I[k].min < eps/3.

Furthermore there is an n1 such that
width(I[k].min[n1]) < eps/3
and an n2 such that
width(I[k].max[n2]) < eps/3.
Let n = max{k, n1, n2}. Then
width(I[k].min[n]) < eps/3
width(I[k].max[n]) < eps/3.

Adding together
width(I[k].max[n]) < eps/3
I[k].max[n].min - I[k].min[n].max < eps/3
width(I[k].min[n]) < eps/3
we obtain, since the left-hand sides telescope (width(I[k].max[n]) = I[k].max[n].max - I[k].max[n].min and width(I[k].min[n]) = I[k].min[n].max - I[k].min[n].min),
I[k].max[n].max - I[k].min[n].min < eps.

Since k ≤ n, we have I[k].max[n].max ≥ min{I[m].max[n].max | m ≤ n} and I[k].min[n].min ≤ max{I[m].min[n].min | m ≤ n}. Thus
min{I[m].max[n].max | m ≤ n} - max{I[m].min[n].min | m ≤ n} ≤ I[k].max[n].max - I[k].min[n].min < eps.

proof the number constructed lies within all the intervals

We need to show
¥ for any k, nested_RI_int(I, p, q) ∈ I[k].

By nested_RI_int(I, p, q) ∈ I[k], we mean I[k].min ≤ nested_RI_int(I, p, q) ≤ I[k].max.

1. Assume to the contrary that I[k].min > nested_RI_int(I, p, q). Then there is an n such that I[k].min[n].min > nested_RI_int(I, p, q)[n].max. Let n' = max{n,k}; then
I[k].min[n'].min ≥ I[k].min[n].min > nested_RI_int(I, p, q)[n].max ≥ nested_RI_int(I, p, q)[n'].min = max{I[m].min[n'].min | m ≤ n'}.
But since k ≤ n', I[k].min[n'].min ≤ max{I[m].min[n'].min | m ≤ n'}, a contradiction.

2. Assume to the contrary that I[k].max < nested_RI_int(I, p, q). Then there is an n such that I[k].max[n].max < nested_RI_int(I, p, q)[n].min. Let n' = max{n,k}; then
I[k].max[n'].max ≤ I[k].max[n].max < nested_RI_int(I, p, q)[n].min ≤ nested_RI_int(I, p, q)[n'].max = min{I[m].max[n'].max | m ≤ n'}.
But since k ≤ n', I[k].max[n'].max ≥ min{I[m].max[n'].max | m ≤ n'}, a contradiction.

uniqueness

We need to show
¥ If I is a sequence of nested closed intervals whose widths converge to zero, and we have both x ∈ I[k] for all k and y ∈ I[k] for all k, then x = y.

For this we won't need the nestedness part. So we can restate this with fewer conditions as
¥ If I is a sequence of closed intervals in which we can find an interval with width smaller than any positive real number eps, and we have both x ∈ I[k] for all k and y ∈ I[k] for all k, then x = y.
(we can't say the widths converge to zero anymore because that's not true without the nestedness condition)

Assume to the contrary that x <> y. Either x > y or y > x. If x > y, let eps = x - y, and note eps > 0. Then there is a k such that I[k].max - I[k].min < eps. Since x ≤ I[k].max and y ≥ I[k].min, we have eps = x - y ≤ I[k].max - I[k].min, a contradiction. If y > x, the argument is the same with x and y interchanged. Therefore x = y.

compatibility with equality

We need to show
¥ if I1[k].min = I2[k].min and I1[k].max = I2[k].max for all k, then nested_RI_int(I1, p1, q1) = nested_RI_int(I2, p2, q2).

We know nested_RI_int(I1, p1, q1) ∈ I1[k] for all k. Since the bounds for I1 and I2 are the same, nested_RI_int(I1, p1, q1) ∈ I2[k] for all k. Since we also have nested_RI_int(I2, p2, q2) ∈ I2[k] for all k, and the real number in this intersection is unique, it follows that nested_RI_int(I1, p1, q1) = nested_RI_int(I2, p2, q2).

Our proofs about nested real intervals are now complete.

 No.9731

File:calculator.mp4 (450.24 KB,1024x768)

When I started this thread I said
>This has been done many times before and probably much better than I will manage
but I learned of an implementation of this concept in a surprisingly ubiquitous place.

On a phone that runs Android, open the default Android calculator app and input any expression that has infinitely many decimals in the answer. Then in the place where the result is displayed, swipe left.

 No.9732

>>9731
some more exposition of this feature

 No.9733

>>9731
that's kinda cool



