Friday, February 10, 2012

Discovery and Verification of a Specific Epistemic Technique

I was discussing induction with Aretae and discovered - among other things - that my beliefs about the physical process of induction are consistent with my definition of intelligence.[1] It is a win-win-win situation. Also, two of the wins should be achievable through conscious effort, which means you can do this too, if I can figure out how I did it in the first place. It may even be automatic, which implies it is especially reliable.

Later this post switches to new discovery mode.[2]

I suppose I should think of a name for it, but it's a bit early, as I don't know which of the possibilities it will turn out to be.

If I'm wrong, it means my beliefs are being made consistent without conscious effort. It would mean they'll all shift if I find my mistake, without me having to spend any further effort. Verification: calculate the odds of two independent mistakes being made in exactly the same way. (A rough sketch of this calculation follows the cases below.)

If I'm right, the beliefs are either dependent or independent.

If they are dependent, it means I'm deriving new true things from previous facts without conscious effort. Verification: a similar odds calculation to the one above.
If they are independent, it represents independent corroboration, and thus self-verification, based on the odds of a falsity appearing exactly consistent with a separate truth.
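
To make these verification steps concrete, here's a minimal back-of-the-envelope sketch in Python. The model is my own addition and deliberately crude: assume a mistaken belief lands uniformly on one of n_wrong possible wrong conclusions, and that mistakes in separate beliefs are independent.

    # Toy model for the odds calculations above.
    # Assumption (mine): a mistaken belief lands uniformly on one of
    # n_wrong possible wrong conclusions, independently per belief.

    def p_same_independent_mistake(n_wrong):
        """Odds that two independent mistakes land on exactly the same
        wrong conclusion: the first lands somewhere, the second must
        happen to match it."""
        return 1.0 / n_wrong

    def p_falsity_matches_truth(p_mistake, n_wrong):
        """Odds that a belief is mistaken AND the mistake happens to
        look exactly consistent with a separate true belief."""
        return p_mistake / n_wrong

    # Even a modest space of possible wrong answers makes coincidence unlikely.
    print(p_same_independent_mistake(100))    # 0.01
    print(p_falsity_matches_truth(0.2, 100))  # 0.002

The exact numbers don't matter; the point is that agreement-by-coincidence shrinks as the space of possible mistakes grows.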


The next step is to get use out of this by figuring out which it is. If I'm wrong, in due course reality should kick me in the teeth either about induction or about my definition of intelligence.[1]

If I'm right, the possibilities can be distinguished by intentionally trying to use one of the resulting techniques. If I try to exploit automatic consistency generation, what happens? Do I get somewhere or do I get trash?

My present plan is to try these in order.

For completeness, I'll mention that the first time I try to exploit it for correcting false beliefs, I will do a manual re-check of all related beliefs. The problem is that I'm running out of verifiably mistaken beliefs, as I've plucked all the low-hanging fruit. There's the Higgs, but I don't believe much that hinges on the Higgs...


[2] Wait. It's time to switch this post to (approximate) stream of consciousness.

Thinking about the Higgs, I now have a new plan. I realized I'm wrong: I doubt the Higgs precisely because I like the general relativity description of gravity as the curvature of space itself. You shouldn't need a particle to couple particles to space - the Higgs is itself a particle, so it would mean the Higgs is either recursive or has no position.
If my beliefs are usually being logically coupled without conscious intervention, changing my mind about the Higgs should change my mind about GR without conscious thought. Historically, this has not been the case. Rather, I just get a flag the next time I think about GR, reminding me about the Higgs. (And screwing up whatever I was about to say.)

Which means two independent beliefs are reinforcing each other. Neat. Verification: I'm waiting on the Higgs verification anyway; what they discover about the putative Higgs should be consistent with what they have previously said about the Higgs. If not, it calls the entire model into question, and I go back to being happy with GR.

It doesn't even matter whether my logic about GR is correct or not; it will verify the technique either way.

If the Higgs is confirmed, I'll do whatever I have to in order to accept it, then simply wait. How I feel about GR will either change on its own, or it won't - getting around the fact that I'll naturally remember to think about it, at least a little bit, now that it has occurred to me.
If the Higgs turns out badly, I'll use the fallback plan.


[1] Intelligence, as commonly considered, is a conflation of three processes: learning, creativity, and reasoning. These span the space of everything you can do to bits: record them, generate them, and manipulate them. I believe they're implemented independently, at least in the brain.
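
As an illustration only - my framing, not something the footnote commits to - here's what that decomposition looks like as three separate operations on bits, in Python:

    import random

    def learn(memory, observation):
        # Learning: record bits.
        memory.append(list(observation))

    def create(length):
        # Creativity: generate new bits.
        return [random.randint(0, 1) for _ in range(length)]

    def reason(a, b):
        # Reasoning: manipulate existing bits (XOR as a stand-in).
        return [x ^ y for x, y in zip(a, b)]

    memory = []
    learn(memory, create(8))                # record a generated observation
    derived = reason(memory[0], create(8))  # derive new bits from old ones

None of the three functions calls the others, which is the sense in which they could be implemented independently.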
