Tuesday 6 January 2015

On the off chance that anyone ever stumbles upon this blog and for some inscrutable reason wants to read more from me, I thought I should post a link to my shiny new blog at http://thepenforests.com/. It will hopefully be more of the same but better?
Anyway, goodbye Big Load of Garbage. You were aptly named and you will be missed.
Saturday 22 November 2014
Calories in, something out
Let's talk about the whole "calories in, calories out" model of weight loss.
So, I've witnessed a lot of discussions about obesity in my day (I, uh, might be spending too much time on the internet). Some of these discussions have been very nuanced and largely reasonable, while others have been...not so much those things. But no matter what the level of discourse is, it is an inviolable truth of the universe that at some point during the debate, something like the following must be said:
"Weight loss is just a matter of calories in versus calories out. If you expend more calories than you consume, you'll lose weight. It's simple thermodynamics."
-Random made-up internet person, who is my foil for the day

That these talismanic words be invoked by someone is, for all intents and purposes, an ironclad rule - sooner would the actual laws of thermodynamics be violated than they not be said somewhere along the line.
And it annoys me. It annoys me not because the statement is false (it's true, of course - well, I would rather say that mass in versus mass out actually determines weight loss, but that's quibbling). No, it annoys me precisely because it's true. The fact that it's true means that people can keep saying it in debates and feel justified in doing so, when in fact it's a profoundly useless truth.
Let me explain why I think so.
(Oh, and I should probably emphasize here that, for a borderline-unhealthily-skinny person like me, these debates have an abstract/academic quality that they may not have for other people. So: thin-privilege alert or whatever)
So as I said, the caloric-balance model of weight loss is true. In fact it's trivially true (or at least the mass-balance version is trivially true, but mass and energy are pretty fungible in the body, so that's fine). But true doesn't mean useful, and the problem starts when people assume that caloric intake and output are easily controllable.
The classic caloric-balance argument would go something like this: the body takes in a certain number of calories per day, and expends a certain number of calories per day. When excess calories are consumed (that is, more calories are consumed than expended), these excess calories are stored in the body as fat or muscle mass. We know these excess calories have to be stored in the body somewhere, because feces and urine contain almost no calories, and something something conservation of energy. So, you should gain weight if you take in more calories than you put out, because the extra calories stay in your body, and the extra calories have weight. Similarly, if a person expends more calories than they consume in a day, then again that energy has to come from somewhere, and that somewhere has to be energy stores in the body. These energy stores, consisting of either fat or muscle, have mass. Therefore, if you burn more calories than you take in, you'll end up using some of your energy stores, and so you have to lose mass. There is no other way you could have a calorie deficit in your diet, short of violating conservation of energy (which most people are for some reason reluctant to do).
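If you like, the whole argument fits in a few lines of arithmetic. Here's a minimal sketch in Python - the 3500 kcal-per-pound figure is just the commonly cited rough approximation for body fat, and none of the numbers are meant as actual physiology:

```python
# Naive caloric-balance bookkeeping: every calorie of imbalance goes
# straight into (or comes out of) stored mass.
KCAL_PER_POUND_FAT = 3500  # commonly cited rough approximation

def predicted_weight_change_lbs(kcal_in, kcal_out, days):
    """Weight change (in pounds) the naive model predicts over `days`."""
    daily_balance = kcal_in - kcal_out  # negative means a deficit
    return daily_balance * days / KCAL_PER_POUND_FAT

# A 500 kcal/day deficit "should" cost about a pound a week:
print(predicted_weight_change_lbs(1500, 2000, 7))  # -1.0
```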
So, okay. Huh. When you put it that way, it actually sounds pretty airtight. Consuming fewer calories than you expend really should lead to weight loss. Why would anyone ever object to such a sensible-sounding statement?
The classic rebuttal to this argument would go something like this: the body is extremely complicated. Certainly someone can decide to lower their caloric intake below the level of their normal caloric output. However, if they do that, there's no guarantee that their caloric output will stay the same. People are not in control of their metabolism, for instance. When faced with a very low intake of calories, the body can decide to lower its metabolism (read: make you feel extremely tired), which will cause you to expend fewer calories than you otherwise would have. This might foil your plan to consume fewer calories than you expend. Or, alternatively, since the body is in control of your hunger sense, it's entirely capable of making you feel very hungry. Unbearably hungry, in fact. So hungry that you wind up eating enough to bring you back to a calorie-neutral diet (or even an excess-calorie diet). The point is: assuming perfect control over one's intake and output of calories, the energy/mass-balance point of view would make sense, and yes, a calorie deficit would lead to weight loss. However, biochemically speaking, caloric intake and output are not fully under our control, and indeed to control them would require almost superhuman restraint. Therefore, whatever the logical merits of the calories in/calories out model, it's simply not a useful way of thinking when we take into account human psychology/physiology - people just don't have the willpower required to guarantee weight loss in this manner.
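To make the rebuttal concrete, here's a toy sketch of the metabolic-adaptation idea. The adaptation factor and every number below are made up purely for illustration:

```python
# Toy model: when intake drops, expenditure drifts partway down toward
# it, shrinking the deficit the dieter planned on.
def adapted_output(baseline_out, intake, adaptation):
    """Expenditure after the body closes `adaptation` (0 to 1) of the gap."""
    return baseline_out - adaptation * (baseline_out - intake)

intake, baseline_out = 1500, 2000
planned_deficit = baseline_out - intake          # 500 kcal/day on paper
actual_out = adapted_output(baseline_out, intake, 0.6)
print(planned_deficit, actual_out - intake)      # 500 vs. 200.0
```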
That's the classic rebuttal to the caloric-balance argument.
I'm not going to give the classic rebuttal, though. I'm going to do something different. As a physicist, I like thought experiments, so I'm going to try one of those. Let's forget about willpower limitations entirely. In fact, let's imagine a person with literally perfect willpower. Whatever they decide to do, they can do (up to the physical limits imposed by their body). So if they decide they want to...I don't know, stop themselves from peeing, they can easily do so up until the point where their bladder explodes.
Now let's say this person decides to adopt a diet with a calorie deficit. Hunger pangs and tiredness be damned, they eat their watercress and complete their daily jog. They take in 1500 calories per day, say, and put out 2000. What will happen to such a person?
This isn't a trick question. I'm literally asking what you would expect to happen in this situation.
...
...
...
Okay, so I usually try to avoid asking leading questions, but I hope most people (including those who accept the calorie in/calorie out viewpoint) will agree: our hypothetical person in this case should lose weight. It would be super weird if they didn't: willpower is normally what stops someone from achieving a negative-calorie diet in the first place, but assuming that they could, surely they would lose weight, right?
Right?
Are you ready for the punchline?
This isn't a thought experiment.
I refer you to the case of Michael Edelman (#4 on the linked page - and NSFW, I guess? Depending on your workplace's position on extremely obese people?)
Michael Edelman was about 1200 pounds at his heaviest. That is...fairly heavy. Because of his extreme obesity (and the death of another, also extremely obese, friend) he ended up developing a severe fear of eating. This severe fear of eating - while debilitating for Michael - is convenient for us, because we will use it as a stand-in for extreme willpower. As a result of his fear, Michael didn't want to eat, no matter how hungry he felt. He wound up, not surprisingly, adopting a negative-calorie diet. And because of his (tragic) phobia, he was able to stick to that diet in spite of the pain it caused him.
So, dear readers: what do you suppose happened to him? This is our thought experiment brought to life - a person with essentially perfect willpower, choosing to consume fewer calories than he expends. Place your bets: did he lose all of his extra weight, or did he instead wind up going off the diet? Which gave in: the irresistible force or the immovable object?
(Drumroll, please....)
Neither. He starved to death at 600 pounds.
Let me repeat that:
He starved.
To death.
At 600 pounds.
Did anyone predict that? I for one didn't. And I definitely don't think the calorie in/calorie out people did, because I seem to remember them saying something about a negative-calorie diet, properly maintained, inevitably leading to weight loss - not leading to death by starvation while plenty of calories remained in the body to be made use of. So that's kind of weird.
What went wrong with the seemingly airtight caloric-balance argument? How can you have a deficit of calories and still not lose weight?
As I see it, the problem was with that sneaky word, you. Sure, assuming that you want a you to keep existing, you'd better either lose mass or take in as many calories as you put out, lest you violate the laws of thermodynamics. But the laws of thermodynamics are also perfectly happy with you outputting no calories - that is, with you being dead. This scenario completely satisfies conservation of energy as well. There is certainly no law of thermodynamics that says that your cells have to release any energy they have stored in order to prevent your death. They can just not release the energy.
But of course, the point here isn't that most people should avoid a negative-calorie diet for fear of starving to death. That's ridiculous. Most people don't have a phobia of eating, so most people don't have the willpower required to starve to death while still obese. Most people facing that level of hunger will just start eating, and either gain or not lose weight.
No, the point is this: people generally think that willpower is the only thing stopping someone from losing weight. That if the person in question could just put up with a little hunger, a little suffering, they would shed pounds easily. But Michael Edelman straight up disproved this - he suffered through as much hunger as a person can possibly suffer through, and still died while obese. He showed that the body, if sufficiently messed up (biochemically speaking), does not have to release its stored energy. He showed that, in the limit of infinite willpower, a negative-calorie diet does not have to lead to weight loss.
Instead, it can just lead to death.
Be honest here: did this surprise you? Would you have thought that a person could starve to death while being extremely obese? I don't think most caloric-balance people would have predicted this. I think most caloric-balance people would have predicted (rather strongly) that he would either go off the diet or lose weight - not starve to death. And when your model predicts something strongly, and it turns out to be falsified, it's usually time to update your model.
So maybe you should stop saying "It's all a matter of calories in, calories out."
Maybe you should instead look for more nuance.
Maybe you should start saying, "Damnit. Obesity is hard."
Saturday 6 September 2014
On Reading
I need to distract myself from life, so...blogging it is! I tried alcohol at first, but then I realized that the two aren't mutually exclusive. I wrote roughly half of this piece a couple years ago, then (as I do for so many of my blog posts) abandoned it.
(it turns out writing things is kind of hard)
Anyway, our topic for today is America's 16th-favourite pastime, reading. I begin with what I assume is not a particularly shocking statement coming from me: I really like to read. Like, really really. It's been a constant source of enjoyment for me since I was very young, and thankfully I have kept up the habit as I've grown less young. I say thankfully because as many people know from experience, it's not actually that hard to fall out of the habit of reading. What starts as a cherished childhood habit becomes an occasional adult indulgence; we've all seen it happen. And that's really a shame, because if we consider what I would call the Big Five of entertainment media - television, film, books, music, and videogames - reading stands alone for a number of reasons. Well naturally they all stand alone for one reason or another, but I'd like to focus on what makes reading so special.
And it is special. Fans talk about reading in hushed tones: it's cozy. Or it's relaxing. Or comforting. Connotatively speaking there's no doubt it skews warmer than the other mediums. And part of the reason for that, which I didn't really appreciate until I started writing this post, is that the act of reading itself is highly intimate. Think about it - even in a purely physical sense, reading is unique in that it takes place entirely within your personal space. Whereas a movie or videogame goes out of its way to brashly involve the whole room, a book strays no more than a few feet from your face. And it does so quietly - of the five, reading is the only medium that involves no sound. These two factors - proximity and silence - help to wrap the reader in a kind of insulating bubble; a bubble where the rest of the world, beyond your book and wherever you happen to be oh-so-comfortably curled up, ceases to register. As almost any avid reader can attest, read a good book and you can completely lose yourself, to the extent that being interrupted results in a blink, a shake of the head, and the realization that yes, you had completely forgotten where you were.
But part of what makes reading unique is that this retreat from the outside world is twofold. I realize I'm venturing into more well-trodden ground here, but it still bears pointing out: reading, obviously, takes place in the mind. And so on top of the physical bubble that we create around ourselves when we sit down with a book, there's a further withdrawal from within that bubble into our own thoughts. I'll spare you the hackneyed cliches ("reading transports you to a world of imagination!"), but there is something pretty cool about that. It means that when reading a book, the experience - if not the actual content - is generated entirely by you, the reader (this is in marked contrast to, say, a movie, where the content and experience are essentially one and the same - two people may get different things from a movie, but by and large they see the same frames and hear the same sounds). This obviously adds to the intimacy I mentioned earlier (after all, what are we more familiar with than our own thoughts?), but it also means that reading is at its core an extremely personal affair. Of the millions of people who might read a book, not one of them will have the same experience as any other. For each scene we read, what we see in our head - the face of a character, or the slope of a hill, or even the angle from which we view the scene itself - these are utterly unique, to be seen by no one else in the world. Looking at it this way, it's not really a stretch to say that in the history of reading, no two people have ever read the same book - a thought that, if not profound, should at least give pause.
As an aside, when it comes to picturing scenes in books I've always found the whole viewing angle thing to be fascinating. I mean, if you're going to picture a location obviously it has to be from some angle, but still, how exactly does your brain go about choosing the particular angle you end up seeing? What criteria are used? It's as if we each have our own personal cinematographer in our heads, busily working away to select shots for us in real time, as we read. And incidentally, as a person who used to reread a lot of books, I can attest to the fact that once I picture a scene, it tends to be pictured exactly the same in subsequent reads. I find this kind of cool, as it even works for totally inconsequential scenes, in books that I haven't read for years. It seems like our internal cinematographers do their job once, to generate a kind of mind-movie that goes along with the book, and the brain uses that forever after.
Intimacy and mind-cinematographers aside, though, books are just kind of generally awesome. For instance, I've always loved how in-depth a book can be with respect to its characters and its world. I mean, for one there are the obvious advantages of just being able to tell the reader what characters are thinking in a novel. Thoughts can go a very long way towards defining a character, and in lieu of such direct brain access, movies or TV shows are often forced to rely on narration, awkward exposition, or Significant Glances. Even beyond that, though, I think books have the upper hand. I look at some of my favourite epic novels, like The Lord of the Rings, or The Dark Tower series, and I see an entire universe in them. When it comes to information content, a book can simply have more stuff stuffed into it than a movie or television show. Lacking the time constraints that directors face, authors have the option of delving much more deeply into their story. Think of how many scenes have to be cut from a typical book-to-movie adaptation - mostly nonessential scenes, to be sure, but scenes that together add up to a much more detailed and nuanced universe. I'm finding this weirdly hard to convey, but I feel like...oh, I don't know, like the real world has this fractal nature to it, where you can just keep probing deeper and deeper and keep finding more and more reality, because it is real. And movies and books alike, being fictional, are both just these facades, which seem fine from a distance but start to show their seams if examined too closely. But books manage to go a few levels deeper than movies - you can zoom in further before you start to see that unreality. With added detail comes a richness that yields a more realistic and believable universe - and a more interesting universe, because there's more to explore. The Hogwarts of Hollywood is a pale shadow compared to the Hogwarts of Hardcover, is what I'm trying to say.
Now of course, that's not to say other mediums don't have their advantages. I mean, yes: if I were forced to choose between only reading books and only watching, say, movies for the rest of my life, I would likely keep my Kindle. But it would be a close thing. I'm a huge fan of film, and there's no shortage of things I can point to that movies do better than books. The one that jumps straight to mind is powerful imagery - in terms of creating lasting, indelible visuals that can be recalled years later, my imagination is no match for Steven Spielberg. There's a reason we talk of movie adaptations bringing a book "to life", after all. What I can conjure in my head simply isn't vivid enough to compete with onscreen visuals. Or sound, for that matter - humour in particular is incredibly difficult to pull off without sound. This is a bit of an aside, but if you look at text-based humour, it's a very different beast from on-screen or in-person humour, relying much more on dry wit or sarcasm. I suspect (though I couldn't say for sure) that the reason for this is that certain tones of voice can be conveyed via text much more easily than others. The tone used in dry humour, say, is very close to what we already hear internally when we read. Whereas something like, say, this (to choose the first example that comes to mind) relies on an outburst, something that can't be easily generated by the brain's internal narrator. You really couldn't do this joke in a book, I don't think (certainly it wouldn't be as funny). Your brain just can't yell like that. Don't get me wrong, books can absolutely be funny - but, I think, in a much more circumscribed way, one that requires a lot more effort and has to be tailored around the limitations of silence.
But anyway, such disadvantages/tradeoffs aside, I think it's fair to summarize my position as fairly pro-reading. So I do think it's a shame that people don't read more, even if that's quite possibly the most cliched opinion in history (What's that? A self-styled wannabe intellectual coming out in favour of reading? What's next, less-than-stellar behaviour from Rob Ford?). To nudge people more in the direction of reading, then, I thought I'd finish up by talking a bit about my personal reading habits. I actually have three reading "rules" that I've always strictly followed, for reasons that aren't entirely clear to me. I didn't really choose to implement the rules so much as they arose organically, and they may be somewhat arbitrary, but I do think they've helped to keep me in the habit of reading over the years. So here they are, in (unintentional) alphabetical order:
1. Always be reading a book
At any given time you should always have a book that you're in the process of reading. This is probably the most important rule of the three. Note that the rule says nothing about how often you should read your book - maybe it's every day, or once a week, or maybe only once a month. Doesn't matter. Just always be reading a book. As soon as you finish one book, you should immediately (preferably within the day) pick out your next book to read. Maybe read a page of it or something just so it feels like you've "started" it, psychologically - that way you're actually "reading" it, as opposed to just having abstractly chosen it as your next book. The point is, there should never be a point where someone asks you what you're reading and you don't have an answer. Obviously this trivially helps you read more, in that it simply gets rid of the gap between you finishing one book and starting the next. But it goes deeper than that. I really think this approach puts you in a different mindset than if you just read books sporadically. It makes you a reader, rather than someone who just happens to read occasionally when they find a good book. It ties it into your own personal identity, which I think is hugely important in inculcating habits. And although, again, this rule doesn't commit you to any particular reading schedule, I suspect that people who did adopt the rule would just naturally find themselves reading more frequently, without much effort on their part.
2. Finish every book you start
This is perhaps a less important rule, and more just a reflection of my personal tastes - I don't like the idea of starting a book then abandoning it. My personality is selectively perfectionist, and I guess this is one of the ways it manifests. Not finishing a book just seems sort of wrong to me, somehow. Of course, it's usually not an issue because I've gotten pretty good at picking out books that I want to finish anyway. But the few times I wound up with a really awful book, I still slogged through to the bitter end. I think for me the rule is useful just to prevent a slippery slope scenario where I stop reading one book because it's awful, then I think abandoning books is okay, so I start doing it more and more, with books that aren't so awful. That's the point I guess - in isolation not finishing a book that you can't stand is fine, but if you find yourself only making it through half the books you start, I think something has gone wrong. Reading deserves better than that.
3. Only read one book at a time
Also to an extent just a reflection of my personal tastes. I think I like my reading very orderly: start book, read only that book, finish book, start new book. When it comes to reading I don't like getting sidetracked. It's the same instinct behind Rule #2, in a way - dropping a book to read something new, even if you do later come back to the first book, still seems to me to display a kind of...distractedness, I guess. Like you're always being lured away by the next new shiny thing, or whatever. You chose your book for a reason, so stick with it. Of course it goes without saying that this doesn't apply to everyone. Some people just like the variety of being able to switch back and forth between two stories. Some people like to read fiction and non-fiction at the same time. That's totally reasonable - it's just not for me.
Anyway, those are my three rules. I'm definitely not saying that everyone should follow them. They're more descriptive rules than they are prescriptive ones - how I do read in practice, rather than how I ought to. Rule #1 seems like legitimately good advice, though, and I would encourage everyone to give it a try. There are a few random other tips I would also throw in, things that have worked for me - try to read every night before you go to sleep, make an effort to cultivate good sources of book recommendations, get an e-reader to eliminate the trivial inconvenience of having to go to the book store, and so on. And I guess I would give the standard meta-advice of trying to look for different pieces of advice out there, and see what works for you. Mostly though I would just say that if you like to read, and wish you read more, really the thing to do is simply make an effort. Go out tonight and buy a book, or pick one up that's been collecting dust for years on the bookshelf, and just start reading. It's relaxing, it's enjoyable, it's mind-expanding, it's...just worth it in general. You make yourself a little bigger with each book you read, because the book becomes a part of your self. People sometimes wonder how I seem to know so much, and honestly, the reason (apart from me being super lucky to have gotten a really good memory) is that I've read so many books. There's a whole world of knowledge and experiences and different points of view to be had out there, and it's literally limitless - books are being written at a pace far faster than people could ever read them. There's never been a better time to be a reader - we live in a world with an unprecedented and almost unimaginable wealth of stories.
You're missing out if you don't take advantage of it.
Thursday 21 August 2014
Putting the "and" in understanding
Hypothetical situation: let's say you've made some poor decisions in life, and through a series of very unlucky events, you've wound up doing an undergraduate degree in physics. Hey, don't feel bad - we all make mistakes. Happens to the best of us. Unfortunately, though, you're kind of stuck here now, and you figure you should probably try to make the best of it - and that means you'll now have to complete an unreasonably large number of assignments (this is how Physicists ensure that students graduate with the Important Skill of Being Able to Solve Assignment Problems). Anyway, you've just been given the latest assignment, and it's a doozy. For all the sense you can make of it, it may as well be written in Greek (and for that matter, about half of it actually is written in Greek). What's a poor physicist to do?
Well, usually the first thing a poor physicist does is procrastinate. But what about after that, when you actually want to get around to solving the problem?
Well, if I were such a hypothetical physicist (and man just to re-emphasize would that ever suck) I would probably start by breaking the problem into sub-problems. That is, I would identify parts of the problem that I didn't understand, parts which were prerequisites for figuring out the actual problem, and then try to understand those parts. Naturally because we're talking about physics this will no doubt involve further breaking the sub-problems down into sub-sub-problems, and those into sub-sub-sub-problems, but the basic idea is there. Reduce until you understand. If you don't understand, keep reducing.
This I think is basically the correct approach, and I fully endorse it. But it does lead to an interesting view of problem solving, which I would summarize as follows:
Understanding is an AND function.
What do I mean by that? Well, let's say our hypothetical assignment problem has 20 reasonably distinct sub-problems that you have to figure out. Initially, you understand basically nothing about the problem, and it is little more than an opaque wall of confusion that makes you want to break down into tears (hypothetically). So out of the 20 sub-problems, you are able to solve...precisely zero of them. Now let's say we give an "understanding" value to each of the sub-problems, which for simplicity we'll say can only be 0 (if you haven't solved the sub-problem) or 1 (if you have solved it). I would then claim that your total "understanding function" for the entire problem is simply the product of these individual understanding factors. Solving the whole problem requires solving every single one of the sub-problems, and if even one of the sub-problems remains unsolved, you haven't gotten the solution yet, so your overall "understanding function" simply remains at zero. This is what I mean by an AND function - to get that pesky understanding function up to one, you have to solve sub-problem one and sub-problem two and sub-problem three, and so on and so forth. So until you understand everything, it feels (to you anyway) pretty much as if you understand nothing.
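Since I'm claiming the understanding function is literally a product, here's the toy model spelled out in a few lines of Python (a sketch of the claim above, nothing more):

```python
# "Understanding is an AND function": overall understanding is the
# product of per-sub-problem flags (0 = unsolved, 1 = solved), so it
# stays stuck at 0 until the very last flag flips to 1.
from math import prod

def understanding(subproblem_flags):
    """Return 1 only if every sub-problem is solved."""
    return prod(subproblem_flags)

progress = [1] * 19 + [0]       # 19 of 20 sub-problems solved...
print(understanding(progress))  # ...and this still prints 0
```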
Well, usually the first thing a poor physicist does is procrastinate. But what about after that, when you actually want to get around to solving the problem?
Well, if I were such a hypothetical physicist (and man, just to re-emphasize, would that ever suck) I would probably start by breaking the problem into sub-problems. That is, I would identify parts of the problem that I didn't understand, parts which were prerequisites for figuring out the actual problem, and then try to understand those parts. Naturally, because we're talking about physics, this will no doubt involve further breaking the sub-problems down into sub-sub-problems, and those into sub-sub-sub-problems, but the basic idea is there. Reduce until you understand. If you don't understand, keep reducing.
This I think is basically the correct approach, and I fully endorse it. But it does lead to an interesting view of problem solving, which I would summarize as follows:
Understanding is an AND function.
What do I mean by that? Well, let's say our hypothetical assignment problem has 20 reasonably distinct sub-problems that you have to figure out. Initially, you understand basically nothing about the problem, and it is little more than an opaque wall of confusion that makes you want to break down into tears (hypothetically). So out of the 20 sub-problems, you are able to solve...precisely zero of them. Now let's say we give an "understanding" value to each of the sub-problems, which for simplicity we'll say can only be 0 (if you haven't solved the sub-problem) or 1 (if you have solved it). I would then claim that your total "understanding function" for the entire problem is simply the product of these individual understanding factors. Solving the whole problem requires solving every single one of the sub-problems, and if even one of the sub-problems remains unsolved, you haven't gotten the solution yet, so your overall "understanding function" simply remains at zero. This is what I mean by an AND function - to get that pesky understanding function up to one, you have to solve sub-problem one and sub-problem two and sub-problem three, and so on and so forth. So until you understand everything, it feels (to you anyway) pretty much as if you understand nothing.
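If it helps, here's the same idea as a little toy Python model (my own sketch, with made-up function names - nothing rigorous):

```python
# Toy model: each sub-problem's "understanding" is 0 (unsolved) or 1 (solved).
# Felt progress is the product of these factors - a single unsolved
# sub-problem zeroes out the whole thing.

def felt_progress(subproblems):
    """What it feels like from the inside: 1 only if EVERY factor is 1."""
    product = 1
    for solved in subproblems:
        product *= solved
    return product

def actual_progress(subproblems):
    """The outside view: the fraction of sub-problems actually solved."""
    return sum(subproblems) / len(subproblems)

# Twenty sub-problems, nineteen of them solved:
state = [1] * 19 + [0]
print(actual_progress(state))  # 0.95 -> objectively, you're nearly done
print(felt_progress(state))    # 0    -> subjectively, you've done nothing
```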
This picture has some strange implications for the psychology of problem solving. Namely, it suggests that while solving a problem, in many cases progress won't feel like progress. You can be doing great work, solving sub-problem after sub-problem, flipping zero after zero into one after one, and yet to you it feels as if you've done nothing. Your overall understanding is still at zero. Basically, whenever you solve some sub-problem [A], you can always say "oh sure, I understand [A] now, but that's trivial - even with [A] solved, I still don't understand [B], [C], [X], [Y] and [Z]. I'm still confused." The goalposts shift to the next sub-problem, and because your sense of progress is tied to your overall confusion level, and because confusion doesn't go away until you solve the entire problem, it seems as if you've accomplished nothing. If you could step back and objectively evaluate your overall progress, you would of course agree that you had in fact accomplished a great deal - you solved a bunch of sub-problems! But you can't do that, and you're stuck in the middle of the situation, so all you know is that you haven't solved the whole problem. Thus, confusion reigns.
But it's even worse than just that. For in the scenario I outlined above, you already know how many sub-problems there are to solve. You know that once you figure out all twenty of the sub-problems, you'll have solved the whole problem. In a real assignment this isn't the case - you're in a state of uncertainty regarding the total number of sub-problems. Maybe all you have to do is solve the next sub-problem...but maybe not. Maybe you actually have to solve like twelve freakin' more. Who knows! And that's not even taking into account the possibility of heading down the wrong problem-solving path - maybe you've solved ten sub-problems that you initially decided were important, but then you realize that whoops, no, they're actually just irrelevant to the problem at hand. Back to square one! This adds a whole extra layer of uncertainty onto the problem, which only increases your confusion - and again, it's the alleviation of confusion that feels like progress to you.
And in fact, it's even worse than that. Because in the real world, not only do you not know how many sub-problems you are away from a solution, you don't even know if there is a solution. Assignment problems are specifically chosen by professors to be solvable given your current level of knowledge (er, usually anyway). Unfortunately, the universe doesn't grade on a curve - it might be that the problem you're working on is completely beyond your skill level, or simply can't be solved. So even if you manage to notice that you're solving sub-problems, and even if you're sure you aren't solving the wrong sub-problems, even then you don't know if you're making real progress.
I think this is a large part of what makes doing Real Research (TM) so hard. You have to keep working even when you're confused for extended periods of time, all the while not being sure if you're actually getting anywhere or not. It's like being lost, blind, in a maze, and you won't even know if an exit exists up until the moment you find it. Einstein famously worked on his theory of General Relativity for eight years before finally completing it - eight years! That's a staggering, almost mind-boggling accomplishment. In a literal sense, I probably can't even imagine the dedication that it took. That's why Einstein tops so many people's lists (certainly my own) of the greatest physicists of all time. He was absolutely a genius, no doubt about that. But the thing that really set him apart, the thing that got him General Relativity when it wasn't even on anyone else's radar, was his willingness to persist so long in the face of confusion.
We could all probably learn something from Einstein here, because I don't think this only applies to the case of doing abstract theoretical research. Probably a lot of the problems we face in everyday life have this character. Maybe you're trying to figure out how to stop procrastinating, or quit smoking, or...be a better haberdasher, or something. Whatever. The point is, maybe none of the things that you've tried so far have worked. It could be that you were totally on the right track, and all you needed to do was one or two extra things on top of what you were already trying. But because to you it felt like you weren't making any progress - because of the tyranny of the AND function - you stopped trying, tragically just short of a solution.
So if you find yourself banging your head against the wall in despair, unable to solve a problem that has been plaguing you for months and months - well, take heart.
The answer might be closer than you think.
Friday 11 July 2014
Indications of vindication
It's always cool to have a big-name scientist agree with you, especially if that big-name scientist is Scott Aaronson. Here are Scott's thoughts on funding for quantum computing:
What happens when it turns out that some of the most-hyped applications of quantum computers (e.g., optimization, machine learning, and Big Data) were based on wildly inflated hopes—that there simply isn’t much quantum speedup to be had for typical problems of that kind, that yes, quantum algorithms exist, but they aren’t much faster than the best classical randomized algorithms? What happens when it turns out that the real applications of quantum computing—like breaking RSA and simulating quantum systems—are nice, but not important enough by themselves to justify the cost? (E.g., when the imminent risk of a quantum computer simply causes people to switch from RSA to other cryptographic codes? Or when the large polynomial overheads of quantum simulation algorithms limit their usefulness?) Finally, what happens when it turns out that the promises of useful quantum computers in 5-10 years were wildly unrealistic?

I’ll tell you: when this happens, the spigots of funding that once flowed freely will dry up, and the techno-journalists and pointy-haired bosses who once sang our praises will turn to the next craze. And they’re unlikely to be impressed when we protest, “no, look, the reasons we told you before for why you should support quantum computing were never the real reasons! and the real reasons remain as valid as ever!”

In my view, we as a community have failed to make the honest case for quantum computing—the case based on basic science—because we’ve underestimated the public. We’ve falsely believed that people would never support us if we told them the truth: that while the potential applications are wonderful cherries on the sundae, they’re not and have never been the main reason to build a quantum computer. The main reason is that we want to make absolutely manifest what quantum mechanics says about the nature of reality. We want to lift the enormity of Hilbert space out of the textbooks, and rub its full, linear, unmodified truth in the face of anyone who denies it. Or if it isn’t the truth, then we want to discover what is the truth.
Sound familiar? A while ago I posted the following:
With this in mind isn't it a bit...I don't know, shady of us, to be collecting money under the pretenses of maybe-eventually-possibly producing some kind of new technology that'll likely never arrive? No, I'd rather make the case for funding fundamental physics research simply as it is, without having to dangle a carrot in front of the public's nose. For one, it's more honest, which I tend to be in favour of. For another, it avoids backlash - after all, if we promise miracles, we had better darn well deliver them (I doubt the people will be so quick to grant us a 2000 year grace period). And for yet another (if ethical qualms don't move you) it allows physicists much more freedom in their research: freedom to explore, to investigate, to follow the winds of evidence wherever they lead, even if it's away from application.
I'm not sure if I totally agree with everything I wrote in that post anymore, but that part I definitely still think is true (and so does Scott, apparently). We need to be honest when we ask the public for science funding, for reasons both ethical and pragmatic. We really don't know when basic science research will pan out in terms of practical applications, and pretending otherwise will only come back to bite us in the ass.
[epistemic status: gloating]
I have a lot of doubt too! I'm sure of it!
This is a very good article. You should read it.
...
...have you read it yet? I'll wait.
Okay, good. So the article (which I'll now summarize, assuming you didn't read it) asserts that one of the major factors limiting women's success in the workplace has been a simple lack of confidence. The basic problem seems to be that people in general are extremely susceptible to displays of confidence. If someone claims to know what they're doing, and they say it confidently enough, it seems to be hardwired into our brains to just believe them by default. The person doing the claiming does not in any way have to actually know what they're doing, of course - but they do have to think they know what they're doing, in order to be able to say it confidently enough. And so people who are overconfident (or just regularly confident) tend to be more successful in getting promotions or raises or whatever, and (surprise surprise) it turns out that men are way better at being overconfident. Hence, the glass ceiling.
Like I said, I enjoyed the article and thought it was very good. It's both well-researched and surprisingly in-depth. And it's also a very hopeful article - if it's only underconfidence holding women back, then maybe the gender imbalance won't turn out to be as hard to fix as we thought. Which would be pretty great! But that being said, I did have a few issues with it.
First, the boring stuff. It really bugs me when people stretch the truth to make their points, especially when they have a good point to begin with. It jumps out at me immediately and tends to make me less inclined to listen to them than if they had just given the boring old unembellished facts. The "stretching the truth" metaphor is surprisingly apt, actually - to me it really does feel as if something physical were being stretched; being forced into a state it shouldn't be in. It's become almost an aesthetic thing for me at this point - it gives me a sort of vague, unpleasant twinge, like I imagine a car person would feel if I were driving in too low a gear. I wish people wouldn't do it.
Anyway, this article isn't particularly bad or anything. But a few things stuck out. For example:
The shortage of female confidence is increasingly well quantified and well documented. In 2011, the Institute of Leadership and Management, in the United Kingdom, surveyed British managers about how confident they feel in their professions. Half the female respondents reported self-doubt about their job performance and careers, compared with fewer than a third of male respondents.

My god! Half of female respondents, you say!? And fewer than a third of male respondents? Why that's...
...50% vs. something like 30%. Huh. When you say it like that, it doesn't sound quite so impressive.
I see this fairly often in articles, where statistics pretty-much-but-don't-totally support the author's point, and they can't not include statistics because otherwise annoying scientists would nitpick about little things like "lack of evidence" or "having no factual basis in reality". So they put the statistics in the article but do it drive-by style, sandwiching them in between anecdotes, and dressing them up with phrases like "more than [impressive-sounding fraction]" and "less than [small-sounding fraction]" instead of using plain old numbers. [I should point out that this approach is exactly backwards, of course - statistics should be the focus of an article like this, rather than an afterthought. In an ideal world an opinion piece would live or die based on the extent to which it was supported by data, and pieces written without substantial empirical backing would be looked at, not with contempt, but with confusion. But this is not that world, and that's not really my main point anyway.] The issue I have here isn't so much that statistics are marginalized, it's that the statistics they do include are in tension with the main point of the article, and that isn't acknowledged.
I mean, don't get me wrong, the data obviously show a difference between men and women. I'm sure it's significant in both the statistical and ordinary senses of the word. But to me it almost looks as if there's...plenty of guys who experience self-doubt, and plenty of girls who don't? Which is of course what you would expect, but the article goes to great lengths to gloss over this fact, with quotes like:
Currie rolled her eyes when we asked whether her wellspring of confidence was as deep as that of a male athlete. “For guys,” she said, in a slightly mystified, irritated tone, “I think they have maybe 13- or 15-player rosters, but all the way down to the last player on the bench, who doesn’t get to play a single minute, I feel like his confidence is just as big as the superstar of the team.” She smiled and shook her head. “For women, it’s not like that.”

And:
“I think that’s really interesting,” Brescoll said with a laugh, “because the men go into everything just assuming that they’re awesome and thinking, Who wouldn’t want me?”

And then paying lip service to the idea that men have doubts, but seemingly dismissing them as different not just in severity, but in kind:
Do men doubt themselves sometimes? Of course. But not with such exacting and repetitive zeal, and they don’t let their doubts stop them as often as women do.

My point is: the trend is enough! If underconfidence hurts career outcomes and women tend to be less confident, then that's a super important fact all by itself. There's no need to pretend that 100% of women doubt and 0% of men do. And in fact, because you so helpfully included the data, you can't pretend - 30% of men experiencing self-doubt about work is a sizable (and last I checked, non-zero) fraction of the population. Less than women, sure, but still sizable. There's no need to exaggerate - you have a good case!
Truth. Stretch. Twinge.
I'll come back to this idea, because I think it's important. But first, another boring observation. Here's a very confident-sounding sentence that appears - in bold and large font - as one of those floating quotes in between paragraphs:
In studies, men overestimate their abilities and performance, and women underestimate both. Their performances do not differ in quality.

This is followed up with another very confident quote:
“It is one of the most consistent findings you can have,” Major says of the experiment. Today, when she wants to give her students an example of a study whose results are utterly predictable, she points to this one.

Now, seeing as this is a highly consistent finding, I would naturally assume that it (that is, men being overconfident) holds true for pretty much any study you can find. At the very least, I would assume that studies included in this article would follow the trend. So I can't resist pointing out that these quotes come only a few paragraphs after the following:
The women rated themselves more negatively than the men did on scientific ability: on a scale of 1 to 10, the women gave themselves a 6.5 on average, and the men gave themselves a 7.6. When it came to assessing how well they answered the questions, the women thought they got 5.8 out of 10 questions right; men, 7.1. And how did they actually perform? Their average was almost the same—women got 7.5 out of 10 right and men 7.9.

...which is an example of men, in a study, underestimating themselves! Sure, they underestimated themselves less than women, but it was still an underestimation. Even if this study is a total anomaly, and all other studies show overconfidence by men...well then, for one, why did you pick this study, and for another, why didn't you acknowledge the contradiction that this creates? Again: piece of the article, in tension with another piece, being ignored. It bugs me.
Anyway, so that's the sort of mundane stuff. That's not the real reason I'm commenting on this article. The real reason I wanted to comment on this article is because I identified with it so much. As I was reading it, over and over I would think: that's me! I do that! I feel that! I tend to have a lot of self-doubt, I'm very sensitive to criticism, if I weren't in the bubble that is grad school I would totally be all passive about asking for raises and promotions, and hesitant to apply for jobs I didn't have 100% of the qualifications for. Point is, I felt like the article was describing me. And, uh (last I checked, anyway) I am not, in fact, female. No matter how much Matt calls me his bitch.
I'll be honest, this made the article very frustrating to read. Not just frustrating - kind of hurtful, actually. I felt like the article was saying that, as a man, I couldn't feel the way I do. Either my feelings were simply nonexistent, or they were invalid. I felt marginalized, if I can say that without sounding too melodramatic or MRA-ish.
Now, I get it. I really do. I'm obviously not a typical guy, and women really do have a lot of disadvantages in the workplace. And if an article comes across as wishy-washy and stops to include disclaimers every half-paragraph, people won't take that as evidence that a complex and nuanced issue is being discussed - they'll just mentally chalk it up as a draw, and probably ignore the article altogether. So it's likely in an author's best interest, generally speaking, to make it seem like things are more one-sided than they really are, in order to get their (very legitimate) point across.
Still, though. Still I'm annoyed. I mean, let's go back to those numbers for a second. Earlier, we had the lovely statistic that half of all surveyed women reported experiencing self-doubt about their job, and fewer than a third of men did. Fine. Clearly, the numbers already "favour" women in this case (in the sense of them being more underprivileged), but let's say even that is an underestimate. In fact, let's pretend that those numbers are total bullshit. Instead, we'll charitably assume, for whatever reason, that women drastically under-reported how much they experience self-doubt, and men drastically over-reported (I don't think this makes any sense really, given that men experience a general cultural pressure to not display weakness, but who cares). Let's say that the real numbers are that 90% of women have self-doubt about their job, and only 10% (say) of men do. Surely, in this hypothetical case, it would make sense for society to focus on women, right? I mean, 90% versus 10%? Come on. Given the vast disparity between the genders, it would behoove us to simply forget men for a second, and just try to encourage girls to be more confident, yes?
No. No no no.
I can imagine different worlds. I can imagine a world in which we simply couldn't tell if a given person was overconfident or underconfident. A world in which we weren't able to discern who had a growth mindset and who had a fixed mindset. A world in which personality tests didn't exist, in which we were helpless to distinguish between different people, in which we were forced to reason based on correlates.
This is not that world.
We do have ways of distinguishing between people. We can administer questionnaires, we can conduct surveys. Heck, we can just look at people's behavior. We don't have to fall back on the second-best method of assuming that everyone of the same gender has the same level of self-confidence. If you assume that all girls are under-confident and all boys are over-confident, you will automatically be wrong about (at the very least) 10% of people, guaranteed. You will encourage the 10% of girls who are overconfident to be even more confident, and the 10% of boys who are underconfident to be even less confident.
This, while perhaps better than nothing, is far from ideal (and remember, I'm being generous with the percentage estimates). The problem starts the second we try to frame the issue in terms of men versus women. It may seem to make sense, given the strong correlation between gender and confidence level in this case - but remember, the correlation is not perfect. Gender is quite simply not a natural category when it comes to self-confidence levels. Instead, we should be grouping by...well, by those who are underconfident and those who are overconfident! Remember, we have science! We can measure things! A much, much better approach would be to devise a test that measures levels of self-confidence (I'm assuming this shouldn't be that hard, given what we know so far about the subject), administer it to second-graders or whatever, and then do [whatever you were going to do to encourage girls to be more confident] to [all people who displayed signs of low self-confidence on the test, regardless of gender]. Most of those people would be girls, sure - but not all of them! Some of them would be boys (like me!) who would no doubt greatly benefit from the intervention. God knows I would have done well to read Carol Dweck at an earlier age.
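To make the comparison concrete, here's a quick simulation sketch in Python (entirely made-up numbers, using the generous 90%/10% split from above, and assuming a perfect confidence test, which of course no real test is):

```python
import random

random.seed(0)  # for reproducibility

# Made-up population matching the hypothetical 90%/10% split above.
people = []
for _ in range(10_000):
    gender = random.choice(["girl", "boy"])
    p_underconfident = 0.9 if gender == "girl" else 0.1
    people.append((gender, random.random() < p_underconfident))

# Strategy 1: target by gender - encourage all girls, and only girls.
missed_boys = sum(1 for g, under in people if g == "boy" and under)
wrong_boosts = sum(1 for g, under in people if g == "girl" and not under)

# Strategy 2: target by measurement - an (idealized) test flags exactly
# the underconfident people, so nobody is missed and nobody is wrongly
# encouraged, regardless of gender.

print(f"Gender-based targeting: {missed_boys} underconfident boys missed,")
print(f"plus {wrong_boosts} already-confident girls encouraged further.")
```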
This seems to me to be an example of one of my least favourite fallacies, the ecological fallacy (if there were one piece of understanding I could force into everyone's brain in the world, it might be this - that, or the obvious superiority of candlepin bowling). The ecological fallacy is when one assumes that because on average a group has a certain trait, a given individual in that group must have the trait as well. Kind of like how no woman has ever been taller than a man, because girls are on average shorter than boys. Or how most politicians tend to be men, so Angela Merkel doesn't exist. Or how I'm forced to believe that every single person has one breast and one testicle, because, hey, that's the average.
Point is, it's a really stupid fallacy. The real world deals with distributions, not averages. Two groups can have overlapping distributions while still having different means. It's not that difficult really. I can't tell you how many times I've seen a study claiming something innocuous like "Left-handed people are more likely to use fabric softener" and the first comment is someone saying "This study is obviously bullshit my dad is left-handed and he's never used fabric softener in his life, in fact he actually uses fabric hardener why are scientists so dumb" and then acting as if the study has been completely invalidated. I...find it hard to grok what's going on in people's minds when they say things like this. I hate to be uncharitable, but...do they literally not know what scientists mean when they say "tendency"? There certainly seems to be some kind of understanding gap, because I can't imagine ever trying to refute a study that claimed a between-group difference by pointing out a single counterexample that I had at hand. You would expect counterexamples! It would be surprising if there weren't any counterexamples! Maybe in rare cases, when the study makes a very strong claim, like that 99.9% of all left-handed people use fabric softener, and you personally know that all five of the lefties you've met have never touched the stuff - then yeah, maybe be a bit suspicious. But when the study is only claiming small effect sizes, like less than half a standard deviation (which, let's face it, most studies are), then just bite the bullet and accept that, due to your own very small and unrepresentative sample of the population, you are incapable of discerning the truth of the matter. Just trust the goddamn study, even if it goes against what you've observed in the world so far. That's what studies are for.
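If you want to see just how unsurprising counterexamples are, here's a back-of-the-envelope Python simulation (made-up numbers: softener affinity modeled as a normal distribution in each group, with the left-handed mean half a standard deviation higher, in line with the effect sizes mentioned above):

```python
import random

random.seed(1)
trials = 100_000
counterexamples = 0
for _ in range(trials):
    lefty = random.gauss(0.5, 1.0)   # left-hander, mean shifted up 0.5 SD
    righty = random.gauss(0.0, 1.0)  # right-hander
    if lefty < righty:               # the "but my dad!" case
        counterexamples += 1

# With a half-SD group difference, roughly 36% of random left/right
# pairs go the "wrong" way. Counterexamples aren't just possible -
# they're everywhere, and the group difference is still real.
print(counterexamples / trials)
```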
Alright, I've ranted enough. What are the takeaways here? Well, underconfidence is a significant problem in women (and a much less significant problem in men - or rather, a just-as significant problem in men, but for a much smaller fraction of the population). People (in general) who are underconfident should work to combat that for the sake of their future success, either by consciously correcting for the bias, working to expand their comfort zone, or reading Carol Dweck. Journalists should try to pay more attention to statistics and try extra hard to stop exaggerating things, mostly for my sake. People (in general) should read the Wikipedia article on the ecological fallacy every Tuesday morning, mostly for my sake.
And everyone should switch to candlepin bowling. Mostly for my sake.
Seriously, it's so much better.
[A blog I read has the habit of tagging all posts with an epistemic status. This seems like a really good idea. So, epistemic status for this post: reasonably confident, but much less so than the very strong tone throughout would imply. In fact, that's probably a safe assumption for almost all of my posts. I'm...not very confident, you see. Even if I sound like it.]
Wednesday 16 April 2014
Scrabble babble
Fun thought of the day: along the lines of people preferring right-handed QWERTY words, I've often wondered if certain people have an unconscious tendency to prefer words that score well in Scrabble. You know, words with lots of H's, Q's, Y's, K's, etc. Obviously it would only apply to the Scrabble-playing subset of the population (and probably only the nerdiest players at that), but I wouldn't be too surprised if it turned out to be a measurable - if tiny - effect. Scrabble is a fairly popular game after all, and pretty much everyone plays it at least a little bit as a kid. I've noticed it in myself at least (I think this is at least 30% of the reason I like Python so much). Consider the following examples of high-scoring words (of which "example" would be an example, I guess):
psychic
quiz
hyphen
phylum
Now compare with the following similar-ish words:
seer
test
colon
genus
Doesn't the first list just seem better, somehow? I know it does to me. Now, granted, maybe it doesn't to you - and even if it does, that could easily just be because I've primed you for it. Still, as I said, it's a fun thought. Some bored psychology grad student should probably look into this.
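(For the curious: here's a quick Python check using the standard tile values, ignoring blanks and board multipliers. The first list really does score roughly two to five times higher per word.)

```python
# Standard Scrabble tile values.
TILE = {**dict.fromkeys("aeilnorstu", 1), **dict.fromkeys("dg", 2),
        **dict.fromkeys("bcmp", 3), **dict.fromkeys("fhvwy", 4),
        "k": 5, **dict.fromkeys("jx", 8), **dict.fromkeys("qz", 10)}

def scrabble_score(word):
    return sum(TILE[letter] for letter in word.lower())

for word in ["psychic", "quiz", "hyphen", "phylum",
             "seer", "test", "colon", "genus"]:
    print(f"{word}: {scrabble_score(word)}")
# psychic: 19, quiz: 22, hyphen: 17, phylum: 16
# seer: 4, test: 4, colon: 7, genus: 6
```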