Notes on a Weird Week

Dead-tree media: good. VC business model: bad.

Lewis Wickes Hine / Buyenlarge / Getty

Since I joined The Atlantic as a staff writer, I haven’t sent out a legitimate Galaxy Brain newsletter in a while (most have been reposts of articles I’ve published). So I figured I’d do a little weekend freewriting on some stuff that’s been tickling my brain lately.

Newspapers, They’re Good

I spent last week mostly off the grid on a trip and made an unusual, concerted effort to stay away from the news. Because of this, I learned about the Silicon Valley Bank collapse the old-fashioned way: While checking out of my hotel, I glimpsed a discarded copy of Monday’s Wall Street Journal and caught the heavily bolded headline. Learning about things from a newspaper is an exceedingly normal thing that normal people do all the time. But as an internet-poisoned millennial who works in media, I rarely come across a dead-tree copy of the news and learn something stunning from the headlines. I’m not breaking any new ground here with this observation, but I think the newspaper was really onto something, you guys.

Emerging from my news hibernation like a drowsy, uninformed bear, I had an experience straight out of the pre-smartphone era: See surprising headline, have many questions. Read lead story, have new questions. See article next to lead story that answers a few of those new questions. Rinse, repeat. Close paper, feel mostly satisfied and up to speed. As far as user experiences go, it was a great one. Turns out, an ideal way to consume news is to give journalists space to spend three days wrapping their heads around an issue or event, and then to dive in and take time to read both the news reporting and the bigger contextualizing explainers in one stretch.

When most news happens, there’s an information vacuum that is immediately filled by an endless barrage of tweets, videos, and overconfident punditry. Some of this is right and responsible, but even more of it is wrong or reckless. Still, it’s a reasonably addictive way to get your news, specifically because it comes in pieces. There’s this feeling that you’re part of a collective of similar news obsessives who are cobbling together information. There’s endless room for debate or to yell at people when they jump the gun. The theorist L. M. Sacasas, whom I quote frequently in this newsletter, has a great framework for this style of digital consumption: “The database has replaced the narrative.” Here’s how he describes it:

It’s not that we are literally presented with a relational database, but that we are confronted with what amounts to a loosely arranged set of data points, whose significance and meaning has not been baked into the form itself (as is the case with information encased in narrative). One effect of our digital media environment, then, is to immerse us in searchable databases of information rather than present us with comprehensive, integrated, and broadly compelling narratives.

This helps explain why being online has such a “choose your own adventure” feel when it comes to pretty much every potential news narrative. Sacasas argues (rightly) that this is also why misinformation feels so prevalent online. Obviously, people have been misleading each other since long before the internet, and the problem now isn’t just that deception circulates more easily than it used to:

The problem is that under conditions of information superabundance, any bit of data, even if it is reasonable and accurate (perhaps especially if it is so), can be incorporated into wildly disparate and competing runs through the Database. It’s the total effect that matters most. The sum of our daily bombardment with information is to overwhelm and deplete our cognitive resources.

This is what makes fact-checking almost futile. Sacasas argues that we’re all conspiracy theorists now. He doesn’t mean we’re all picking up on or promoting awful garbage; rather, every one of us is tasked with having to sort through the morass of publicly available news and information and tweets and Instagram posts and TikToks and cobble together our own narratives. This leads to plenty of us disagreeing not just on the narratives themselves but also on the basic facts of a news event:

We are all in the position of holding beliefs, however sure we may be of them, that some non-trivial portion of the population considers not just mistaken but preposterous and paranoid.

Reading Sacasas’s post made me revisit a piece I wrote way back in 2014 about the “Alarming Rise of the Online Vigilante Detective.” It’s quaint, given that the still-niche phenomenon I tried to describe is now just how huge swaths of people operate online:

The rise of the vigilante investigation now seems undergirded by the internet-utopian idea that any and all information should be free and that those with the digital means to procure that information should be compelled to expose it. Standards and practices rarely exist (save for a halfhearted “don’t doxx” maxim in communities like Reddit); ethics are secondary concerns and reputation is rarely in jeopardy, thanks to the protective veil of online anonymity. Victory often goes to the actor most willing to do harm, to be wrong, or both.

We’re all just piecing together our own narratives, because there is no mass narrative.

I’m not advocating for a return to some media-gatekeeper-nostalgia period (that likely never existed)—in part because we simply won’t go back. But it was quite jarring to experience a manufactured version of the old way. Newspapers are far from a perfect way to consume news, but it was deeply satisfying not to have to sift through the database in order to understand the story. My unlikely hope is that 20 years from now, a burned-out Zoomer or Gen Alpha reinvents the newspaper, extolling its virtues as a pleasurable curated experience.

A Good AI Paper

On Thursday, I published a big piece on our AI moment that I am quite proud of. One of the biggest surprises for me is just how divided AI researchers and experts are on what is being built. There are lots of disputes in the field over very basic questions, such as “Can large language models understand things?” It seems that there’s very real disagreement on definitions for terms like understanding and intelligence.

Melanie Mitchell, an AI researcher at the Santa Fe Institute, has a great paper out that tracks these field debates. I wanted to share a few highlights.

From the “AI does not understand like a human” camp:

Another scholar argued that intelligence, agency, and by extension, understanding “are the wrong categories” for talking about these systems; instead LLMs are compressed repositories of human knowledge more akin to libraries or encyclopedias than to intelligent agents. For example, humans know what is meant by a “tickle” making us laugh, because we have bodies. An LLM could use the word “tickle”, but it has obviously never had the sensation. Understanding a tickle is to map a word to a sensation, not to another word.

I don’t think too much about the particulars of human intelligence, but the idea that the core of our intelligence comes from the corporeal, sensory learning experience of being a human is striking.

From the “AI does understand like a human” camp: Mitchell’s summary suggests that this camp uses benchmark tests (tests given to humans to assess general language understanding) to make their case. GPT-4’s announcement last week noted that the model performed exceptionally well on an array of standardized tests, such as the SAT. This camp assumes “that humanlike understanding is required to perform well on these tasks.”

However! Mitchell pours some cold water on this rationale:

While “humanlike understanding” does not have a rigorous definition, it does not seem to be based on the kind of massive statistical models that today’s LLMs learn; instead it is based on concepts—internal mental models of external categories, situations, and events, and of one’s own internal state and “self”. In humans, understanding language (as well as nonlinguistic information) requires having the concepts that language (or other information) describes, beyond the statistical properties of linguistic symbols.

I’d suggest reading the whole paper, but the upshot is that, despite all the evidence, the field is still split almost down the middle. As I argue in my essay this week, that disagreement leaves me with an eerie feeling that nobody really knows what we’re building.

Venture Capital Seems Pretty Broken!

I don’t have tons to say that hasn’t already been said about the SVB collapse, but it sure does seem like an interesting moment of reckoning for the VC industry. I appreciated this bit from Daniel Davies’s blog:

And the fact that the VCs were able to use their portfolio companies as human shields in this way—a natural extension of the pretence that venture capitalists are in the tech industry rather than the financial industry—shows us what the real long-term cost of our current system of bailouts is, in terms of policy. Because the Fed and FDIC will always find a way to stabilise the system, populist yahoos and libertarians can rail against “bailouts” and pass legislation to “protect the taxpayers”, all on the understanding that it is purely playtime; that when things get serious, someone will find a way to bail them out.

One interesting argument, from the writer John Ganz, suggests that “it’s time to write down certain assets on our books, namely the entire idea of Silicon Valley itself.” His piece, and other excellent ones, argue that the venture-capital class is listless and stagnant (though I imagine a generative-AI boom and hype cycle may change that perception a bit), and that the bank run is indicative of a herd mentality that’s causing industry rot and building very little of value. It’s hard not to agree with the herd part, and it made me think of what is perhaps the single best description of modern venture capital I’ve heard. I’ve cited it before in this newsletter, and it comes (hilariously) from Sam Bankman-Fried:

How do VCs find the next anything that they’re gonna invest in, right? Like how do they find the next company they’re gonna invest in? And I think my answer to that is like, when you break it down mechanically to what’s happening, you get a bizarre process. Like you get something that does not look like the paragon of efficient markets that you might expect, where it’s like, what’s mechanically happening? Well, they like see what all their friends are chattering about. And their friends keep talking about this company or this token or something, and they start FOMOing and then their LPs are like, yo, have you made us a lot of money off of this company or token yet?
… All the while you’re like, how do we justify? Is this a good investment? Like all the models are made up, right? Like things are currently being valued off of 2025 Ebitda right. But it’s not 2025 yet. It’s sort of like an interesting property of trying to value things off of 2025 Ebitda, right. You’re valuing them off of a model built by a person who owns the thing that’s being sold. So like, of course the numbers can go up between now and 2025. It’s gonna go up an arbitrary amount and you can justify anything by just like, you know, back graph goes up and off and eventually like, holy sh*t, LPs boy, are you gonna be excited about the stuff that we’re buying on your behalf.

What SBF is describing is a group that seems to be A) truly winging it, with no solid way to justify the value of what they’re trying to glom onto, and B) motivated mostly by FOMO and what their friend networks are excited about. Now, SBF is a pretty well-documented bullshitter, so take it with a grain of salt, I guess. But he’s also an example of somebody who was able to very effectively game this same funding system before he brought it crashing down. So maybe he’s got a point! For my money, SBF’s description of how venture capital works and the dynamics of the SVB collapse (where a bunch of group texts and herd mentality caused a bank run) feel somewhat similar. Seems like a broken model that moves fast and breaks things but also gets its money out in time. If you build a system that works this way, why would you ever bother with something as inefficient and boring as due diligence?

And, because it is the weekend, here is what my dogs are up to:

Charlie Warzel is a staff writer at The Atlantic and the author of its newsletter Galaxy Brain, about technology, media, and big ideas. He can be reached via email.