<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<link rel="stylesheet" href="Kevin Sun/style.css" />
<title>Questions · Kevin Sun</title>
</head>
<body>
<div id="menu">
<ul>
<li><a href="index.html">About</a></li>
<li><a href="questions.html">Questions</a></li>
<li><a href="answers.html">Answers</a></li>
<li><a href="thinking.html">Thoughts</a></li>
</ul>
</div>
<div id="left"> </div>
<div id="content">
<p>
I'm pretty curious about the world. Here are some of the questions
that percolate in my mind.
</p>
<p>
<b
>Is the world getting faster or are we just thinking in shorter time
slices?</b
>
</p>
<p>
I've seen my fair share of doomsday internet articles describing the
collapse of human attention spans. These authors often imply that
humans aren't thinking at all anymore (the proverbial comparison
between humans and goldfish comes to mind). I'm a little skeptical of
this take. One possibility that I haven't seen addressed is that
humans are, in fact, still thinking, just in ever-shorter time
slices.
</p>
<p>
This would explain the common complaint that the "world is moving too
fast". Of course if you think in increments of seconds, then your life
would seem closer to an all-out sprint than a well-paced marathon.
You've subjected your own life to some time dialation paradox,
engineering the seconds to feel long but the years short.
</p>
<p>
Anyone with a cursory understanding of
<a href="https://en.wikipedia.org/wiki/Greedy_algorithm"
>greedy algorithms</a
>
knows that optimizing for an ever-shortening time slice usually
doesn't pan out to be the best long-term solution. It might not make
sense for us to hold our lives hostage one second at a time.
</p>
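<p>
To make that concrete, here's a toy sketch in Python using the
classic coin-change example from the algorithms textbooks (nothing
here is specific to the life-planning framing): grabbing the locally
best option each "time slice" loses to planning over the whole
horizon.
</p>
<pre><code>
def greedy_change(amount, coins=(4, 3, 1)):
    """Always grab the biggest coin that fits right now."""
    used = []
    for coin in coins:
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

def optimal_change(amount, coins=(4, 3, 1)):
    """Plan over the whole horizon with dynamic programming."""
    best = [[]]  # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        best.append(min((best[a - c] + [c] for c in coins if a >= c),
                        key=len))
    return best[amount]

print(greedy_change(6))   # [4, 1, 1] -- three locally "best" grabs
print(optimal_change(6))  # [3, 3]    -- fewer moves overall
</code></pre>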
<p>
All that said, I'd be curious to see if there are any good
psychological studies on self-assessments of productivity; whether
there's been an observed shift toward smaller time units employed in
self-assessing work output. I'd also be interested in seeing how this
short-termism generalizes across a variety of lifestyle habits
(whether that be exercise, diet, or sleep).
</p>
<p>
<b
>What's the optimal time distribution of money across our
lifetimes?</b
>
</p>
<p>
We often hear that people on their deathbeds will never mention money
as one of their top life priorities.
</p>
<p>
Some suggest that this is an indication that money has no intrinsic
value. I don't share that view. Old people might not value money
because there's no time left to spend it.
</p>
<p>
I believe there's a discussion to be had about variable depreciation
rates of different life investments. Money might have an accelerating
depreciation as life approaches its conclusion. Health might spike in
value toward middle-age, but then decline later on. Family may have an
ever-accelerating appreciation rate.
</p>
<p>Once you realize that</p>
<ul>
<li>
you'll only make a finite amount of money in your lifetime and
</li>
<li>
spending money is just executing a time distribution schedule of
your assets,
</li>
</ul>
<p>you're forced to confront this two-pronged question of</p>
<ul>
<li>when money is the most valuable to you and</li>
<li>how much you want to spend at that given time.</li>
</ul>
<p>
Even if very imprecise, a back-of-envelope calculation of the net
value of different life investments, with their depreciation rates
factored in, could be hugely beneficial to young people planning out
the long term.
</p>
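<p>
Even the crudest version of this exercise fits in a few lines of
Python. Every rate below is an invented placeholder; the point is the
shape of the calculation, not the specific numbers.
</p>
<pre><code>
# Discount the subjective value of a dollar spent at each age.
# The peak age and decay rate are assumptions, not data.

def value_of_dollar(age, peak=35, decay=0.04):
    # Assumed curve: a dollar is most useful mid-life and
    # depreciates as the time left to spend it shrinks.
    return (1 + decay) ** (-abs(age - peak))

for age in (25, 35, 50, 65, 80):
    print(age, round(value_of_dollar(age), 2))
</code></pre>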
<p>
<b
>Is it possible to statistically generalize breakthroughs in human
creativity?</b
>
</p>
<p>
Recently, I've started to reason about machine learning (ML) via some
crude first-principles reductions to statistics. Put concretely, I've
just been asking myself the question: can this problem be solved by
clever probability engineering? We obviously know that some problems
(e.g. coin tosses, zero-knowledge proofs, quantum physics) don't have
useful statistical patterns. Other problems do have statistical
patterns, but ones that might not generalize meaningfully over a long
time horizon (e.g. economic indicators, political polling). That said,
there's quite a large cross section of problems that we can solve
using statistical generalization (e.g. genetics, disease spread).
</p>
<p>
One thread of questions I've encountered is whether human creativity
itself lends itself to statistical analysis. Is it truly random? Can
we actually articulate the process of human creativity? And if we
could articulate such a process, would it cease to be creative?
</p>
<p>
It's a truly unbounded domain of intriguing loose ends. I read an
article by David Chapman earlier this year about how there's no
meaningful scientific method that could reasonably generalize the
process of scientific breakthrough (written mostly in response to the
suggestion that AI could develop an "automation of science"). I'm
also sure there's a camp of people committed to some form of
statistical determinism (Malcolm Gladwell seems like a good, if
imperfect, example) who would dispute this claim---who'd say that
history is a set of articulable inputs and outputs that can be
described by some probability distribution converging to a historical
equilibrium point.
</p>
<p>
I'll have to do much more thinking on this subject to develop a good
approach toward an answer (defining "human creativity" would be a good
start).
</p>
<p>
It's tough to say whether DALL-E or GPT truly moves the needle on
this question. These algorithms output what the average human
[1] thinks is "good" drawing and writing. Should we trust the average
human to assess a breakthrough in human creativity? Probably not.
</p>
<p>
[1] To perform optimally, these models must converge to the
statistical distribution of a human population's art preferences,
which ends up representing the average human's creative preference.
That said, the sampled population could be a limited subset of
humans, in which case we've opened up a political problem of who gets
to decide what constitutes human creativity.
</p>
<p>
<b>How rigorous is intelligence theory? Can we make it better?</b>
</p>
<p>
Just a few months ago, I stumbled upon an interesting
<a
href="http://cup.columbia.edu/book/artificial-whiteness/9780231194914"
>book</a
>
by Dr. Yarden Katz critiquing the neutrality of intelligence theory;
in particular, the application of intelligence theory to modern
artificial intelligence. His work disputes the belief that advanced
deep learning algorithms can become a universal truth engine. He has
a general skepticism of intelligence theory writ large (whether it's
ever possible to have a universal metric for intelligence that could
avoid its historical relationship to racial eugenics) and finds that
modern AI's need for pre-computed symbolic systems to interpret
training sets may always subject it to flawed human biases.
</p>
<p>
I've read
<a
href="https://economicsfromthetopdown.com/2020/08/18/why-general-intelligence-doesnt-exist/"
>another article</a
>
by Dr. Blair Fix supporting this skepticism about intelligence theory.
His observation is essentially that the more "general" intelligence
is, the less meaningful it becomes. If I gave you a "general
performance" test, you'd ask what exactly was being tested. Likewise,
if I told a college engineering student to develop an engine that
"generally computes", they'd ask what exactly we're computing. We
don't seem to apply the same level of skepticism when people offer
"general intelligence" examinations. Some part of me feels like we
might be missing the forest for the trees when social scientists tout
the incredible correlation coefficients of IQ studies.
</p>
<p>
That being said, I've come across other definitions of intelligence
(e.g. the <a href="http://prize.hutter1.net/">Hutter Prize</a>, which
frames intelligence as lossless knowledge compression) that seem to
sidestep the arbitrariness objection of both Katz and Fix. I haven't
read these thoroughly enough to see whether they accomplish the
objective of establishing a meaningful universal theory of
intelligence, but I'm interested in learning more. I'd be fascinated
to see whether intelligence can be quantified in a more meaningful
sense (for example, expected knowledge output given a quantity of
intelligence) and whether we can retroactively apply this theory to
the emergent behaviors of artificial intelligence algorithms.
</p>
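<p>
To make the compression framing concrete, here's a toy sketch in
Python: score a "model" of some data by how losslessly-compactly it
can represent that data. An off-the-shelf compressor stands in for
the model here; actual Hutter Prize entries use far stronger
predictors, but the scoring idea is the same.
</p>
<pre><code>
import os
import zlib

def compression_score(data):
    # Higher ratio = more regularity the compressor "understood".
    return len(data) / len(zlib.compress(data, 9))

patterned = b"the cat sat on the mat. " * 40
noise = os.urandom(len(patterned))  # incompressible by construction

print(compression_score(patterned))  # well above 1
print(compression_score(noise))      # close to (or below) 1
</code></pre>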
<p>
<b>Are we feeling more lonely, or are we more scared to be alone?</b>
</p>
<p>
I've seen and used a handful of social networking apps over the
years; usually these consumer-facing products pitch themselves as a
solution to our
<a
href="https://www.npr.org/2023/05/02/1173418268/loneliness-connection-mental-health-dementia-surgeon-general"
>
epidemic of loneliness</a
>.
</p>
<p>
I've become somewhat skeptical of this value proposition. It's not
clear to me that expanding the pool of possible relationships would
help lonely people to stop feeling lonely. I don't think there's
evidence to suggest that everyone's currently living in the wrong
social circle, and that a service that matches people up properly
would remove the most important bottleneck to someone experiencing
renewed social purpose.
</p>
<p>
Of course, there are some cases where such a service might help put
together small clusters of people with very abnormal interests; but
frankly, most people aren't abnormal. There has to be some explanation
for why people feel lonely even in situations where they're around
people they'd get along with.
</p>
<p>
My hypothesis is that some of the loneliness epidemic can be explained
by more people being scared of being alone. Much of the current work
I've seen is done on the supply side of social interactions (e.g.
whether
<a href="https://www.youtube.com/watch?v=5ghUy_L1F9E">suburbs</a>
are hurting our children's social lives), not as much on the demand
side (e.g. whether people are actually trying to forge new
connections, whether they're satisfied with the relationships they do
have).
</p>
<p>
I came around to this hunch after learning about monks isolating
themselves from society for
<a
href="https://www.theguardian.com/lifeandstyle/2009/may/15/buddhist-retreat-religion-first-person"
>years</a
>. These people don't have a problem with staying alone for long
periods of time, and we don't seem to have a problem with them doing
so either. Yet, if we meet someone who's spent most waking hours in
their local library, we'd probably ask them to make a friend or two.
It's interesting that we have different expectations in these
scenarios. Because of digital work, it's been more possible than ever
for people to be comfortable alone; and I'm wondering if we're all
just a little too slow to accept.
</p>
<p>
I'm interested in whether this self-fulfilling fear of loneliness
truly exists---and if so, what its possible causes are. My intuition
tells
me that this phenomenon (if it exists at all) might be explained by
some mix of neurotic social comparison (e.g. social media) and
cultural expectations (e.g. everyone wanting to be super popular).
</p>
<p>
As with most social science, figuring out a robust measurement
methodology would be the hardest part of constructing an answer. I
figure some combination of public surveys (e.g. whether people find
themselves comparing their social activity to others') and small
experiments (e.g. whether people actually do end up feeling less
lonely when all barriers to meeting each other are removed) would
nudge me toward a root-cause assessment of our epidemic of
loneliness.
</p>
<p>
<b>Should we really be trying to encourage everyone to be leaders?</b>
</p>
<p>
Every so often during my meditation sessions, I catch myself wondering
whether leadership is an inherent virtue. It seems like we have an
overabundance of people who firmly believe they're right and will get
their way by any means necessary. There might be something to be said
for the value of patience and putting others before yourself.
</p>
<p>
That being said, I do see where the leadership apologists are coming
from. Much-needed social change has to come from somewhere, and it
takes people willing to break others out of the norm to accomplish
it. Perhaps that's the kind of leader they have in mind.
</p>
<p>Given all that, I suppose I'm interested in a few sub-questions:</p>
<ul>
<li>
How would we quantify the negative externalities of making
leadership a social virtue?
</li>
<li>
Is it possible to objectively calculate whether any social virtues
are net-positive at all?
</li>
<li>
If not "try to be a leader and make a difference in the world",
what would be the desired message to tell students?
</li>
</ul>
<p><b>Can large language models (LLMs) reason?</b></p>
<p>
Large language models (LLMs) underpin state-of-the-art machine
learning technologies like ChatGPT. They absorb a large text corpus
to develop a mathematical encoding of language called a generative
pre-trained transformer (GPT). They are then fine-tuned to produce
different types of text (e.g. dialogue, novels, poetry). When
performing these tasks, these models typically execute some variant
of a "next word prediction" task---given some previous words, they
predict what the next sequence of words in the phrase should be. This
rather simple approach to language understanding has
generated some pretty
<a href="https://openai.com/research/gpt-4">impressive results</a>.
</p>
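<p>
The shape of that task fits in a few lines. A bigram model is the
crudest possible version (the corpus below is made up, and real LLMs
use billions of parameters and much longer contexts, but the
objective looks like this):
</p>
<pre><code>
from collections import Counter, defaultdict

corpus = ("the president passed the bill and "
          "the senate passed the bill").split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict_next("the"))
# {'president': 0.25, 'bill': 0.5, 'senate': 0.25}
print(predict_next("passed"))
# {'the': 1.0}
</code></pre>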
<p>
Somewhere around a year ago, I encountered the literature around these
LLMs, and I've gone back and forth over whether they could possibly
replicate human reasoning. On one hand, I'm persuaded by the
<a href="https://en.wikipedia.org/wiki/Language_game_(philosophy)"
>Wittgensteinian interpretation of language</a
>
that arguably affirms LLMs' capacity for reasoning.
</p>
<p>
In short, a colloquial version of the argument goes something like
this: whenever we are "reasoning", we are just developing various
expressions of "reason" based on our language context. Your stubborn
relatives can always "win" any debate against you by just changing the
definition of words; your math teacher can flunk the entire class by
cleverly phrasing a word problem. "Reason" has no meaning alone---it's
always bound by some linguistic context that mediates its expression.
Now, if we can develop a schema that can capture this linguistic
context (e.g. GPTs), we have functionally developed a model that
captures reasoning.
</p>
<p>
I thought this was game-set-match for the LLMs, but I've since
encountered a wealth of literature contradicting this simple
argument. Noam Chomsky came out with this
<a
href="https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html"
>opinion piece</a
>, disputing the reduction of language to a complex statistical
engine. Functionally, these neural networks purely attempt to
calculate the "probability" of a sentence given only the words before
it---a task that doesn't really seem to make too much sense on its
own. "Biden passed the farm bill on October 22nd, 2023" isn't a more
probable sequence of words than "Biden consorts with the aliens". They
both obey grammar rules and the other conventions of language, so it's
difficult to say that these models that calculate the probability of
sentences are truly developing a meaningful representation of language
itself. Instead, they might just be learning other information from
the text database that makes it seem like they truly understand
language (and thus reason)---when in reality, they're just drawing
extraneous correlations that suggest the President is more likely to
pass a piece of legislation than collude with extraterrestrials.
</p>
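<p>
To pin down the computation Chomsky is disputing: a model scores a
sentence by chaining conditional probabilities, roughly P(w1..wn) as
the product of P(w_i | w_(i-1)). The probabilities below are invented
for illustration. Both sentences are perfectly grammatical; the model
prefers one only because of the correlations baked into its table.
</p>
<pre><code>
import math

# Invented conditional probabilities, purely for illustration.
cond_prob = {
    ("the", "president"): 0.02,
    ("president", "passed"): 0.05,
    ("president", "consorts"): 0.00001,
    ("passed", "the"): 0.6,
    ("consorts", "with"): 0.7,
    ("with", "the"): 0.4,
    ("the", "bill"): 0.03,
    ("the", "aliens"): 0.000001,
}

def sentence_logprob(sentence):
    words = sentence.lower().split()
    return sum(math.log(cond_prob.get(pair, 1e-9))
               for pair in zip(words, words[1:]))

print(sentence_logprob("the president passed the bill"))  # higher
print(sentence_logprob("the president consorts with the aliens"))
</code></pre>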
<p>
There are some other objections that I've encountered as well. Erik
Larson's
<a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674983519"
>book</a
> speaks to the limitations of inductive systems like machine
learning in simulating abductive reasoning. Perfect performance for a
machine learning model would be no different from an
infinite-dimensional regression---it's just a map of correlations
from past data. It will not produce a theory that explains the
multi-terabyte hydra of information through the language of causes
and effects. There are some interesting arguments later in the book
that ask whether this critique of machine learning can be applied to
virtually any scientific field nowadays---most research papers in
biomedicine seem to be regurgitations of convoluted statistics---so
maybe it says more about the infiltration of myopic data science
methodology into social science than it does about AI alone.
</p>
<p>
I'm still at a loss as to who's right. I used to be a complete AI
skeptic, but GPT-4 has radically changed my perspective on the
matter. There's a good chance that we're just advanced statistical
engines---in which case an LLM could simulate reasoning without any
problems---but there's also a good chance that we're not---in which
case an LLM will just continue to be an excellent auto-complete
program, but no more. It's safe to say that this will be on my mind
for a while.
</p>
</div>
</body>
</html>