Ideas Journal

19 December 2017

– 00:01 –

I started writing this essay, and then wasn't sure how to answer my own question. https://namespace.obormot.net/Main/RealTimeMinuteTimePomodoroTime Now I think I might know. Construct artificial agents which exist at each of the different timescales you want to consider. What is an agent? OODA, young man.

Imagine a series of agents, all of whom perceive life along different timescales. At the lowest timescale is an agent we'll call 'child', living 250ms at a time. At the highest timescale is an agent we'll call 'grandfather', which experiences exactly one moment, and that moment is contemplating the global 'score' of their life before ceasing to exist. We might want our life to be controlled by grandfather, because they're about regret minimization over the whole life. But 'child' might have to suffer a lot in the course of that to optimize outcomes for grandfather, so child's incentive is to steal from the agents representing its future self for short-term gain. In practice, child exists and grandfather only counterfactually exists, so child is the one that ends up controlling our life. If you take child's preferences at face value with no input from the agents in the middle, hyperbolic discounting is completely rational in all circumstances, with no caveats. This will cause problems for child later, because as Jordan Peterson put it, improving your life is about not suffering any more stupidly than you have to. And child, on their own, will suffer really stupidly. Really often.

Now, only one agent in this model actually exists, and that's child. Everything afterwards, up to and including grandfather, is counterfactual. But child has a neat trick they can use to suffer a bit less stupidly: they can attach the judgments and suggested actions of the other agents to the orient and decide stages of their OODA loop. Which makes those other agents less counterfactual. :P

So here comes the balancing act. There is an intrinsic principal-agent problem here, where if child just does whatever grandfather says, they might end up completely hating their life right up until the moment where they get to the last 250ms experience-interval and go "oh wow, amazing". To the extent you will be these counterfactual agents later (because you are child, lol), they have incentives to hurt you for their gain, and you have incentives to hurt them for your gain. A core bit of fun theory, then, is figuring out who in this model should get what over the total course of your life. :p (If this is confusing, just straightforwardly model it as you making decisions for your future self, but that's actually subtly wrong and not what I mean.)

"Okay but namespace, you just took the 'what do you want' problem and turned it into a network analysis problem where the nodes are also people. Doesn't that make things worse?" Nope, because you just put limits on the time horizon of each person in the network, which probably makes taking the integral a lot easier. >_>
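
A minimal toy version of the model, assuming 1/(1 + kt) hyperbolic discounting for the short-horizon agents and a plain undiscounted total for grandfather. The agent names, horizons, reward stream, and the constant k are all invented for illustration, not anything asserted in the entry:

```python
# Illustrative sketch only -- agent names, horizons, rewards, and k are
# assumptions made up for this example.

def hyperbolic_value(rewards, horizon, k=1.0):
    """What a short-horizon agent feels a reward stream is worth:
    hyperbolically discounted (1 / (1 + k*t)) and truncated at its horizon."""
    return sum(r / (1.0 + k * t) for t, r in enumerate(rewards[:horizon]))

def global_score(rewards):
    """Grandfather's view: the undiscounted total over the whole life."""
    return sum(rewards)

# One step = one 250ms experience-interval.
rewards = [-1.0] * 40 + [10.0] * 10   # suffer for a while, payoff at the end

print("child      ", round(hyperbolic_value(rewards, horizon=4), 2))    # ~ -2.08
print("adult      ", round(hyperbolic_value(rewards, horizon=20), 2))   # ~ -3.6
print("grandfather", global_score(rewards))                             # 60.0

# Child and the intermediate agent only ever see the suffering, so left to
# themselves they defect; grandfather sees the total and approves. The
# principal-agent problem in the entry is exactly this disagreement.
```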

18 December 2017

– 23:38 –

If you split yourself into a plurality and focused the different pieces on different timescales, you could probably approximate proper foresight.

– 23:34 –

"EMPIRICISM: So if you conclude hyperbolic discounting is actually rational, does that mean you're insane or economists are insane?" "RATIONAL: I never actually said that." "EMPIRICISM: But you thought it. :3" "RATIONAL: I think hyperbolic discounting is irrational because people have bad implicit priors on how long they'll live which are hardcoded for harsher environments where surviving now is much more important than surviving later." "RATIONAL: In the desert savannah, an index fund is not useful to you. In your cushy 1st world western life, you do in fact prefer across your various systemic maximas to invest in the index fund." "EMPIRICISM: And that's not true for suffering?" "RATIONAL: No it's not. Because for mere money, I only suffer a little realistically to give it a way now. It's implicitly a thing I'm capable of bearing because I'm choosing it. In a genuine suffering scenario I want to die at the 250ms level, which is in fact more important to me than reaching the global maxima of optimal enlightenment later." "EMPIRICISM: Really, if you were offered 500 thousand years of suffering for True Enlightenment you wouldn't take it, knowing during the whole time you'll be better at the end?" "RATIONAL: I was assuming I don't know during the experience, so it doesn't factor into my local-maxima-wanting-to-die." "EMPIRICISM: Fair enough." "RATIONAL: I think suffering is like, a log function basically, so at low enough levels of not-suffering, i.e the general case over a googleplex years with certain levels of variant suffering mixed in, hyperbolic discounting is rational."

– 23:02 –

So my first draft is something like: you exist in a local maximum of AIXI-ish rewards and punishments from the environment, which itself exists in a local maximum, which itself exists in a local maximum, which itself... and the longer you live, the more detail there is going all the way down on the maxima. Realistically, the closer one of these maxima gets to the 250ms base level, to being horrible, the more a person wants to die, because people live 250ms at a time. So if 'experiencing a fate you empirically find to be worse than death' is bad for you, you actually want to prioritize the short term and do things like hyperbolic discounting. ...Well then.

– 22:56 –

So basically: in Bastion, you kind of wake up at the start of the game with a hammer and start mowing down enemies. The whole game is narrated by the guy singing that; his name is Rucks. And after a short while you show up in this... underdeveloped paradise called the Bastion, and Rucks informs you that The Calamity (which is not explained) killed everyone else and you need to help him fix it. As the game progresses, it's slowly explained that The Calamity was a weapon used by your people, essentially like a nuke, that was meant to genocide The Ura, a white-skinned people that yours were at war with. But it backfired, and killed everyone. So. The gods that your people worshipped, the pantheon, if you pray to them it makes the game harder. Because they feel that this is your fuckup, and the only 'help' they're obligated to give is more pain to punish you for your sins.

So like, the question is over what timescale your preferences are even existentially relevant. Maybe a million years a slave just doesn't mean that much to Celestia over your trillions-of-years life; that's like stubbing your toe, suck it up, pony. (This is one of the core reasons I think FiO is a dystopia: it gives you the tools to hurt yourself over existential hell timescales but not the perspective to handle that power.) (If you can't tell, I grok what it means to live an infinite life. At least some dimensions of it.) If you live 250ms at a time, but your lifespan is forever, then the local and global maxima will pretty much always outweigh whatever you're actually experiencing at the moment. So an agent optimizing you along that timescale is actually going to spend a lot of time hurting you, because that is in fact the best thing for you over a googolplex years. Make sense? (Watch, I'm gonna end up writing a FiO fic that makes EY wet his bed.)

And if you don't think that's a thing, we do it right now with ordinary humans. School is terrible, it's very un-fun. But, goes the narrative, after 16 years a slave to the school system you will be a much better, deeper person for the rest of your years. That's what in loco parentis means, after all: the school temporarily stands in for your parents, and you're a ward in the presence of school instructors. They finally remove that particular flag at the collegiate level, but. So you know. Imagine something like that, but it takes a thousand years. Or ten thousand, or a hundred thousand. If you still live your life 250ms at a time, you're completely fucked. https://namespace.obormot.net/Main/RealTimeMinuteTimePomodoroTime

30 November 2017

– 01:04 –

<namespace> Obormot\Sirius: :3
<namespace> So.
<namespace> It just occurred to me that.
<namespace> You could use this interop stuff for a functional 'decentralized host your own shit' architecture.
<namespace> Where things like comments and posts are stored as pages on people's wikis and then referenced by the central server for display.
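
A hedged sketch of what that could look like: the central server keeps only references (whose wiki, which page) and pulls the actual text from each person's wiki at display time. The URLs, the ?action=source convention, and the data shapes are hypothetical, invented for illustration rather than describing any real wiki's API:

```python
# Sketch only -- the wiki URL scheme and the CommentRef shape are assumptions.

from dataclasses import dataclass
from urllib.request import urlopen

@dataclass
class CommentRef:
    author: str
    wiki_base: str   # e.g. "https://example-wiki.net/Main"  (hypothetical)
    page: str        # the wiki page the comment actually lives on

def fetch_comment(ref: CommentRef) -> str:
    """Pull the comment body from the author's own wiki."""
    url = f"{ref.wiki_base}/{ref.page}?action=source"   # assumed convention
    with urlopen(url) as resp:
        return resp.read().decode("utf-8")

def render_thread(refs: list[CommentRef]) -> str:
    """The central server's only job: stitch the referenced pages into a view."""
    return "\n---\n".join(f"{r.author}:\n{fetch_comment(r)}" for r in refs)
```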

28 November 2017

– 21:06 –

Regex / Caracitrine / Bluevertro - Last Sunday at 7:48 PM
4 times :P
any useful piece of information you encounter or generate needs to be sent somewhere, to be used by something else, for one thing
so whenever a question is encountered, it has to send that information somewhere or make a decision
so everything is either:
1) observing something
2) sending that information to be checked against our values and ideals
3) making a decision
4) actually sending people commands
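
A toy restatement of those four categories as a control loop. The function names and the way the stages hand information along are assumptions for illustration only, not something from the chat:

```python
# Sketch only -- stage names and signatures are invented for the example.

def run_loop(observe, check_against_values, decide, send_command):
    """Every piece of information gets observed, checked against values,
    turned into a decision, or sent out as a command -- nothing just sits."""
    while True:
        observation = observe()                        # 1) observing something
        judgement = check_against_values(observation)  # 2) check vs values and ideals
        action = decide(judgement)                     # 3) making a decision
        if action is not None:
            send_command(action)                       # 4) actually sending commands
```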