<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<link rel="stylesheet" href="Kevin Sun/style.css" />
<title>Thoughts · Kevin Sun</title>
</head>
<body>
<div id="menu">
<ul>
<li><a href="index.html">About</a></li>
<li><a href="questions.html">Questions</a></li>
<li><a href="answers.html">Answers</a></li>
<li><a href="thinking.html">Thoughts</a></li>
</ul>
</div>
<div id="left"> </div>
<div id="content">
<p>
Some of my unstructured thoughts. Not really questions or answers. Not
refined enough to be blog posts. Just a lot of my open-ended ruminations
and rants into the void.
</p>
<p>
<b><i>Tenet</i> and generational conflict</b>
</p>
<p>
I don't think I gave this movie enough credit when I first watched it. I
was too focused on the mechanics of time travel in the film, rather than
paying attention to the deeper allegorical themes at play.
</p>
<p>
There's one interesting concept in particular that I noticed in
<i>Tenet</i>: our profound hatred of the past.
</p>
<p>
To summarize, <i>Tenet</i>'s antagonist, Sator, makes contact with
future humans, and they command him to eliminate all his present-day
human compatriots by reversing the direction of entropy (which risks
their own existence as well). Here, the central conflict emerges: the
battle between the present and the future, with both sides vying to
justify their existence at all costs.
</p>
<p>
I think there's something to be said about humanity's distaste for our
past; this is evident in our narration of history as an inevitable march
from primitive, brutish tribes to modern, progressive civilizations. We
pathologize the past as an exceptional evil---of which all traces must
be obliterated. We are willing to kill it off, even if it means risking
our very own destruction.
</p>
<p>
There's an additional complication of information asymmetry at play in
the film. None of the characters understand why exactly the future
desires to kill the present off---much less, why exactly killing the
present wouldn't get rid of the future as well. They are fighting with
an inexorable temporal restriction on information (to use the film's
language, "what happens will happen").
</p>
<p>
What's interesting is that these restrictions end up working to the
advantage of our protagonist. Being unable to change what happens also
implies that the future cannot change the past, despite its best
efforts. For that reason, the success of the protagonist's mission
appears to have this rather counter-intuitive cross-temporal existence
-- which is taken to its extreme when it turns out the protagonist's
right-hand man was apparently a time ghost.
</p>
<p><b>Expanding the domain of human cognition</b></p>
<p>
A university professor once told me that the economy "never grows, it
only finds ways to monetize previously unmonetizable things"; I think a
similar principle applies to technology. We don't invent "new things",
we merely subject a new domain to human cognition.
</p>
<p>
Take the weather, for example: before weather forecasts, we did not
think about whether it'd be rainy later in the afternoon before walking
outside -- but now that we have forecasts, it becomes something to
think about.
We've progressed far beyond that; not only do we have climate models
that are precise over extraordinarily long time horizons, there are
nations that possess the ability to
<a
href="https://en.wikipedia.org/wiki/Cloud_seeding_in_the_United_Arab_Emirates"
>engineer the weather</a
>
itself.
</p>
<p>
Once you expand the domain of what is considered a human invention,
this re-definition of technology as the expansion of human cognition has
more profound implications. If you take stories to be a human invention,
then by our definition, stories are a technology that provides us an
entry tunnel to our deeper values and morals.
</p>
<p>
Explaining technology through this language may explain the oft-cited
contemporary malaise of "cognitive overload". If technology by
definition expands the domain of human cognition, it is unsurprising
that humans are experiencing the overuse of cognitive force.
</p>
<p>
Once we wielded the power to control the weather, the weather became our
responsibility.
</p>
<p><b>Second-order theories of morality and epistemic humility</b></p>
<p>
I was captivated by an
<a
href="https://open.spotify.com/episode/5sYlXzYI4pHPiwWAqvEjSP?si=6b16cffe443d449d"
>incredible podcast episode</a
>
between Alex O'Connor and David Wolpe last weekend. Wolpe describes what
was to me a novel resolution to the problem of evil. In short, God must
permit a universe with unnecessary suffering, else it is impossible for
truly good humans to exist. If humans are only kind because there exists
a certainty of negative consequence, then there are no naturally kind
humans.
</p>
<p>
What I find novel about this hypothetical universe is the
<i>certainty</i> of negative consequence. It seems that perfect
knowledge of the future is the only difference between our universe and
this hypothetical one---if you can see infinitely into the future, then
you have certainty of whether certain actions will bring about negative
consequences for yourself.
</p>
<p>
There is a grain of intuition that emerges from this emphasis on
certainty: do fundamental moral values rely on the existence of
imperfect information? If I were a selfish criminal and knew with
absolute certainty that my neighbor would never find out about my
thievery, my optimal strategy would be to steal from him every day. It
is only because of my imperfect knowledge of my neighbor's intentions
that I choose against it.
</p>
<p>
One piece of fiction comes to mind while ruminating on this topic:
<i>Dune</i>. A theme in the novel that I haven't heard discussed is the
corrupting nature of prescience. One critical plot device is that as
Paul Atreides acquires the power of omniscience, he becomes ever-more
willing to wager the lives of billions to achieve his objective.
</p>
<p>
There is something that feels intuitively wrong about the patterns of
behavior that would likely emerge if people gambled on poker whilst
having perfect knowledge of everyone else's cards. I would suspect that
the sight of people betting their family's life savings over a hand of
cards would be gut-wrenching.
</p>
<p>
Following that train of intuition, some part of me wonders whether our
imperfect knowledge of the world is an evolutionary advantage. Would it
truly produce a more stable society if we had perfect knowledge of each
other and our actions? I could see our inability either to hear each
other's thoughts*** or to see into the future as somewhat of an
evolutionary equilibrium point. Any closer to prescience, and we might
be running up against the collapse of civilization.
</p>
<p>
There seems to be an interesting relationship between the concept of
optimal information distribution in a society and the metaphor of
monetary systems as a "database" of goods and services. What would
be the optimal distribution of information in this database that would
produce the most stable society?
</p>
<p>
***Trisolarans from <i>The Three-Body Problem</i> come to mind. Maybe there's a
science fiction angle to this.
</p>
<p><b>Consciousness, Abstraction, and Computers</b></p>
<p>
I always found it interesting how the term "abstraction" is thrown
around without much thought amongst software engineers. Abstraction is a
rather non-trivial philosophical concept; whether abstractions exist at
all outside of our conscious experience is still a very open question.
</p>
<p>
I find it fascinating that abstractions "work". We can't quite describe
what we're doing when we're generalizing an idea into its more abstract
variant, but for some reason, this generalization appears to be
necessary for us to develop any breakthrough in thinking. A question
emerges: what would thinking look like sans any abstractions? Is it even
possible to "think" without the orchestration of some limited set of
abstract concepts?
</p>
<p>
It seems extraordinarily important to develop insights into the human
capacity for abstraction in light of recent advances in artificial
intelligence. As it concerns the capacity for human thought, our
assessment of machines seems to be bottlenecked by our understanding of
ourselves.
</p>
<ul>
<li>
<a href="https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop"
>Expansion of recursion to the study of consciousness</a
>
</li>
<li>
<a href="https://en.wikipedia.org/wiki/Man_a_Machine">Monism</a>
</li>
<li>
<a href="https://en.wikipedia.org/wiki/Christof_Koch"
>Panpsychism meets computers</a
>
</li>
<li>
<a href="https://en.wikipedia.org/wiki/Artificial_consciousness"
>Artificial consciousness</a
>
</li>
<li>
<a href="https://en.wikipedia.org/wiki/Holonomic_brain_theory"
>Quantum brains</a
>
</li>
<li>
<a
href="https://en.wikipedia.org/wiki/Neural_correlates_of_consciousness"
>SOTA neural imaging for consciousness</a
>
</li>
</ul>
<p><b>Religion as a meta-cognitive structure</b></p>
<p>
I have a hypothesis that the mass secularization of society may be
partly explained by the Flynn effect. We have an explosion of
abstract reasoning skills in the general populace, causing the core
function of religion to become continuously more obsolete.
</p>
<p>
It appears to me that one of the primary functions of religion is its
ability to develop a meta-cognitive framework for thought -- in essence,
describing how one should think about the world. Any system of morality
or value appears to envelop the world in some sort of interpretive
framework; religion is no different. It operates from a fundamentally
high level of abstraction; you are describing thoughts that have not
even emerged yet, using a maximally generalizable grammar to explain all
of life. Throughout history, large collectives of humans have adopted a
comparatively small number of religions, orchestrating society under a
unitary collective abstraction. It may be the case that as individual
humans possess greater and greater abstract thinking skills, more humans
can orchestrate individual abstractions without the need for some
collective abstraction like religion.
</p>
<p><b>Rationalizing the irrational</b></p>
<p>
I'm puzzled by people who attempt to memorize the digits of pi by
remembering some logical relationship between the numbers (e.g. telling
yourself a story about what number comes next, describing algebraic
relationships between digits, etc.). By virtue of pi being an irrational
number, we know that these methods are not properly describing the
nature of the number -- you're taking a fixed-length string of
effectively random digits and then attempting to rationalize each one.
</p>
<p>This became a starting point for a few interesting thoughts:</p>
<ul>
<li>
<p>
<b
>Are all the answers to our deepest problems contained in the
number pi?</b
>
</p>
<p>
If the digits of pi serve as an infinite random number generator,
then you pretty much have the proverbial infinite typewriting monkey
contained within the number. You can develop some encoding schema
between the digits of pi (ex: put it in base 26 and have it start
outputting English characters), and then reap the benefits of all
the secret knowledge contained in the digits.
</p>
<p>
You can even index each digit of pi and then start counting units of
time before you reach this final "secret" contained in pi. What's
fascinating about this heuristic is that it appears to reframe
solutions to our deepest scientific questions as a function of time
and randomness. With enough randomness and time, you can solve any
problem (classic Darwinian take).
</p>
</li>
<li>
<p>
<b
>What could this thought experiment say about the way we narrate
the past?</b
>
</p>
<p>
We know that our world is composed of entities (ex: pi, e, circles)
that escape the language of rational numbers, so to narrate the causal
forces that drive this world might be as fruitless as predicting the
next digit of pi after memorizing the many digits beforehand. Perhaps
when we're developing a history of the past (taking some fixed pivot
point in time and then developing a narrative that explains the
digits before it), we're narrating the time like someone trying to
remember the digits of pi.
</p>
<p>
After much authorship, you may very well have created a story with
internal logical consistency and complete historical information,
but the story itself cannot tell you what will happen next; just
like how a story used to remember the digits of pi won't be able to
tell you the next digit.
</p>
</li>
</ul>
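<p>
As a concrete aside, the base-26 re-encoding described above takes only a
few lines of Python. This is just an illustrative sketch:
<code>PI_FRAC</code> is a hardcoded 40-digit prefix of pi's fractional
part, and <code>base26_letters</code> is a made-up helper name, not any
standard function.
</p>

```python
from fractions import Fraction
from string import ascii_lowercase

# A known 40-digit prefix of pi's fractional part (base 10).
PI_FRAC = "1415926535897932384626433832795028841971"

def base26_letters(frac_digits: str, n: int) -> str:
    """Re-encode the decimal fraction 0.<frac_digits> in base 26,
    mapping each base-26 digit 0..25 to a letter a..z."""
    x = Fraction(int(frac_digits), 10 ** len(frac_digits))
    letters = []
    for _ in range(n):
        x *= 26
        digit = int(x)              # next base-26 digit of the fraction
        letters.append(ascii_lowercase[digit])
        x -= digit                  # keep only the fractional remainder
    return "".join(letters)

print(base26_letters(PI_FRAC, 10))  # pi's fractional part begins "drsqlolyrt" in base 26
```

<p>
Exact rational arithmetic matters here: a 40-digit prefix comfortably pins
down the first 10 base-26 letters, and "reading" further into pi just
requires feeding in a longer digit prefix.
</p>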
<p><b>Social Darwinism</b></p>
<p>
I recently watched the <i>Kingdom of the Planet of the Apes</i> movie,
and one element of the plot that stuck out was the primates insisting on
"evolving" their society through the aid of human technology. The movie
seems to have this embedded critique of agential theories of evolution -
that natural selection is a process propelled by conscious decisions
rather than inarticulable environmental pressures. Along this path, the
movie appears to also architect a critique of intellect as the primary
bottleneck for civilizational advancement; the apes are clearly
underdeveloped as ethical agents, but they have surpassed the average
human in cognitive ability. There is more to be said along these two
ideas as critiques of social Darwinism.
</p>
<p><b>Willpower / Ego Depletion</b></p>
<p>
Ever since I was in high school, I thought that willpower was a finite
commodity. You had to budget the quantity of difficult tasks that you
were to complete throughout the day, otherwise you'd just run out of
"will". At the time, I was compelled by
<a
href="https://faculty.washington.edu/jdb/345/345%20Articles/Baumeister%20et%20al.%20(1998).pdf"
>a study</a
>
that had subjects eat tasty / distasteful food before a difficult task
and measured the likelihood of them completing that task.
</p>
<p>
Now, looking back, it seems like the minimal test case of having
participants eat different foods before challenging tasks was too
limited to generalize well to all domains involving "willpower".
Additionally, it appears that this particular study has some
<a
href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0147770"
>replicability problems</a
>.
</p>
<p>
It appears that willpower-oriented explanations of human behavior are
just observations of a placebo effect -- the more strongly someone
believes that willpower is limited,
<a
href="https://hbr.org/2016/11/have-we-been-thinking-about-willpower-the-wrong-way-for-30-years"
>the more likely they are to feel less willpower.</a
>
I'd be fascinated to see whether people would live different lives if
this belief were reversed.
</p>
<p><b>Probability</b></p>
<p>
Probability is not
<a href="https://www.arameb.com/blog/2020/11/22/probability">"real"</a>.
</p>
<p>I feel like there's a tie between this and machine learning.</p>
<p><b>Social Media Musings</b></p>
<p>
I think a more targeted analysis of what social media platforms
<i>are</i> would be necessary for us to articulate what we intuitively
feel like is wrong about them.
</p>
<p>
Some dimensions of these platforms that make them suboptimal sources of
meaningful knowledge:
</p>
<ul>
<li>
<p><i>How we access the knowledge</i></p>
<p>
One interpretation of test-oriented learning is the
<i>obstruction</i> of knowledge. You're not given the answers. This
learning method bakes in habitual time investment as a condition to
accessing information.
</p>
<p>
The bet is that this restriction will force students to study a
wider breadth of knowledge (as opposed to the limited number of
answers on the test key) and nudges them into spending more time
wrestling with the material. For foundational knowledge domains that
generalize well, it intuitively feels like this methodology produces
students that are well-trained in abstract reasoning.
</p>
<p>
It appears that social media doesn't quite pair well with that
knowledge acquisition formula. There's a pretty wide diversity of
posts optimized for short-term engagement. It's difficult to develop
any good abstractions given the user experience presented by these
platforms.
</p>
</li>
<li>
<p><i>What knowledge is shown on these platforms</i></p>
<p>
A normal person's normal thought will not help you in your abnormal
situation.
</p>
<p>
Most people on these platforms won't offer you amazing advice on
problems beyond a certain point of specificity. Particular problems
require particular information with empirical tests and intuition
built up over long periods of time.
</p>
<p>
I'm thinking about a concept that I like to call the "inversion of
the knowledge-experience hierarchy".
</p>
<p>
We live in an age where kids are primarily consuming content made by
other kids. You can say the same about teenagers and young adults.
One possible side effect of this is the hollowing out of knowledge
bases. We're not building on the paper trail of our forefathers;
we're digging holes and then filling them in with short-term
experiences. I'm curious (if not slightly worried) about where this
will bring us in a few decades.
</p>
<p>
Now, you might ask: what made the past different in this regard?
Surely peers were learning from other peers back before social media
existed.
</p>
<p>
I'll say that one compelling distinction is that, previously,
information had to be accessed far more intentionally than today.
Information was dispersed at a far lower throughput with true
physical barriers to access (e.g. internet timeouts / newspapers
still on paper) that restricted its availability enough to require
conscious choice.
</p>
<p>
I'd be interested if there's any literature on whether we're
approaching maximal information throughput in humans. It seems like
an upper limit must exist (if nothing more than the limitations of
eyesight, hearing, and reading).
</p>
<p>
That is not to say that there is a limit to knowledge, which is a
measure of information interpretation. An infinite number of
interpretations may very well exist.
</p>
</li>
</ul>
<p><i>So what can we do about it?</i></p>
<p>
Common objections to social media regulation that reduce to the
inevitability of short-term human preference optimization are
unpersuasive to me. Even though nicotine heavily spikes dopamine levels,
we were able to phase out cigarettes with targeted public information
campaigns (though it seems like vapes are making a comeback). I think we
can begin to invest some thought into mirroring the most successful
tactics in grassroots anti-drug movements for social media consumption
(and we should obviously do away with the tactics that didn't work).
</p>
</div>
</body>
</html>