144 Comments

It’s incredible how much venture capital is being poured into a technology which experts in the field believe has a 10% chance of ending all life on the planet

author

It's almost as if the venture capitalists don't believe that number!

Isn't there a coordination problem here?

If AI doesn't destroy the world you'll end up with FOMO if you don't join the stampede, because other VCs will get fabulously rich and you won't.

If AI does destroy the world, your decision to invest almost certainly won't make you (or anyone else) worse off. It only matters if the world is poised right on the "destroyed/not destroyed" bubble and your marginal investment pushes it over the edge. Even if the probability of catastrophe is 10%, the probability of catastrophe *caused by you* is minuscule.
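The back-of-envelope arithmetic behind that last claim can be sketched like this (both numbers below are assumptions for illustration, not figures from the thread):

```python
# Hypothetical illustration of the "marginal investor" argument above.
# Both inputs are assumed for the sake of the sketch.
p_catastrophe = 0.10  # assumed total probability of AI catastrophe
n_investors = 1_000   # assumed number of comparable VC funding decisions

# If responsibility is spread evenly across those decisions, any single
# investor's marginal contribution to the risk is tiny:
p_caused_by_you = p_catastrophe / n_investors
print(p_caused_by_you)  # about 0.0001, i.e. one hundredth of one percent
```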

Also depending on what "destroys the world" entails some folks may just not consider that possibility to have significant downsides. If AI doesn't nuke the world and you fund it you get fabulously wealthy. If AI does nuke the world you die instantaneously and never know you made a mistake.

100% agree. And then of course all the VC money causes haphazard competition which greatly increases the probability of catastrophe. Awesome.

This really does seem like the first act of a horror movie. Experts are screaming “slow down” while companies compete to develop a possibly deadly virus as quickly as possible.

It’s a stampede problem. Venture capitalists get excited about whatever their VC friends are investing in. I doubt very many of them have thought about the (profound IMHO) dangers.

Nay sir, they don't care about the number. Just like we don't care about the nuclear weapons that could end the modern world. We're all children lost in a dream.

Has anyone ever managed to explain how AI can destroy the world? I've been following the meme since the 1960s with The Forbin Project but have yet to get a clear explanation of how it might work in the real world, not some scriptwriter's caricature.

Comment removed

I'm still wondering how it gets that access. If it needs someone to set it up, then the problem is people, not AI. We already have people.

Comment removed

Maybe we did have this conversation, now that I think of it. I'm still trying to figure out how an AI can do harm in the real world without a human enabler. Someone has to hook those Twitter bots up to the internet and set up the API for them to spread cruft on Twitter. I suppose a seriously introspective AI could find an exploit that would let it modify its own code to explore the necessary system calls, but that assumes all sorts of things, and one slip would give away its game.

Comment removed

If the nature of your work leads you to keep an arsenal, you might stop and wonder whether you're actually a Bond villain.

Sam bought his apocalypse bunker and two McLarens before getting into AI. It's just a demonstration of what happens when you make a lot of money without ever getting a normal job.

Shorter Noah:

"VCs are accelerating the end of the world by hiring thousands of smart young people who go to parties and moan about superhuman AI risk, after spending all day making the risk worse than it is.

But the parties were awesome"

author

But they probably aren't accelerating the end of the world. That's just hype.

This seems like something no one actually knows. More people seem to be down on AI risk these days as signaling, because the AI risk people are annoying and a downer, and who really wants to be associated with them?

But seriously, we are collectively building things that will be much more capable than humanity in some ways, and that deserves some caution and respect for the unknown. We've already had at least a couple of documented near misses with nuclear weapons, and biology and arguably AI fall into the same ultra high risk bucket, in my opinion.

Feb 26, 2023·edited Feb 27, 2023

We constantly build things with "human equivalent" or "more than human" abilities without starting new xrisk religions about them. That suggests it's reasonable to not do that.

Example of human equivalent invention: a baby. Pretty dangerous!

Example of more than human equivalent invention: a bulldozer. Even more dangerous!

Yes, the implications of a bulldozer and AI are totally similar. How do you classify a strawman argument?

I mean, a lot of extremely smart people make a good case that AGI poses a unique existential threat. Obviously nobody knows for sure, but it’s hard to read through, for example, Holden Karnofsky’s description of the problem and not come away with a healthy sense of the stakes. https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/

> 10% chance of ending all life on the planet

Maybe because that number is nonsense?

I really wish it were.

It is. It's called "Pascal's mugging".

The rationalist AI people are members of a religion which involves pretending to be a STEM bro. (sort of the opposite of Twitter leftism where rich failsons pretend to be poor humanities bros.)

So they heard arguments about the future are more convincing if they have probability numbers. …so they made up some probability numbers.

Your dismissal is not convincing, and I say this as somebody desperate to believe that we’re not in mortal danger. Which part of these points do you think is obviously wrong?

1) an AGI with the ability to self improve could quickly become much more intelligent than humans, possibly by orders of magnitude, and

2) such an entity might be extremely difficult or maybe impossible to align - it may well value its own survival over any needs of humanity

Rationalists are subject to groupthink just like everyone else but as somebody trying to game things out it’s hard not to see AGI as an enormous risk.

Feb 27, 2023·edited Feb 27, 2023

I do not believe an AGI could "recursively self improve" and in fact I think that's one of the most obvious signs rationalists just read some SF novels and think it could work in real life through magic. Why would it be able to do that? Humans can't do that.

(It can learn things at the cost of very expensive GPU time maybe, but "orders of magnitude" growth - who's paying for that? Aren't there latency limits? If it splits into sub-AIs, why do they all stay aligned to each other? What does "smarter" mean, anyway?)

Nor would it necessarily prioritize "survival". Humans care about survival because we can't be restored from backup. A computer doesn't have this problem and is free to go on a vacation in virtual space for subjective-forever. (Minds in the Culture novels have to be specially programmed to not do this.)

The other main issue with an evil AGI escaping human control is… how will it pay its AWS bill? We don't need nuclear containment devices or anything. Entropy exists, so it's just going to be the AI equivalent of a homeless person without support.

I agree that none of these things are possible now, thankfully. But I do think it’s important to distinguish between “I don’t see how we would do this” and “it’s not possible.” With enough time and billions in VC funding, many previously impossible things (i.e. things from SF novels) can quickly become possible. I also understand that many rationalist doomers can be annoying, but I’m glad they’re raising the alarm; this line of research has clear high-stakes risks and I’m frustrated at what appears to be little in the way of safety precautions from the major players.

Is the risk actually 10%? Who knows? That’s an average from a number of industry experts who know a lot more than I do. It’s certainly high enough to be taken seriously.

> ending all life on the planet

Honest question: what's the predictive power of that statement?

Obviously, this is something that has never happened before (because if it did, we wouldn't be here). So we have billions of years of life on this planet, multiple mass extinction events (asteroids, super volcanoes, ice ages, etc), and life has survived in some form.

So, considering that... A 10% chance. Really? Even if the statement was 'end all human life', it would still be a silly prediction, at best.

Also, looking at human history and the number of people who made similar false statements just about every time a new technology was introduced (remember when the Large Hadron Collider was supposed to unravel reality?): what is that 10% chance really based on?

You’re not wrong to be skeptical; there certainly have been doomsday predictions by every generation since the dawn of time. But AGI, if we can actually build it (indeed this entire argument is predicated on it being possible), is fundamentally different from anything we’ve ever invented before, so past intuitions may not hold.

The statement that in 2000s SF "tech wasn’t something you went into if you wanted to make a lot of money" is true only in a relative sense. Yes, there was a lot *more* money focus before and after. But it was pretty clear to a lot of us that if you got in early at one of the rocketship companies of that decade, it was a ticket to wealth, and that a lot of people were trying to get that ticket.

I interviewed at Google in 2004 and started in 2005, and this was very much on the minds of both interviewees and new Googlers. We were all conscious that the then-recent IPO was a Very Big Deal and we should care a lot about both the size and the strike price of our equity offer. One of the most frequently repeated pieces of lore was about the then head of engineering, Wayne Rosing, at the pre-IPO all hands threatening to take a baseball bat to the windshield of any excessively fancy cars he saw in the parking lot post IPO-- and though I wasn't at that all hands, I have it on excellent authority that this (the threat, not the execution thereof) actually happened.

author

That is very true! But I feel like this tech scene was almost all down on the peninsula, and the people who went up to SF did so because they wanted a more bohemian life rather than a high powered corporate career. Perhaps I should have been clearer about that distinction...

Right - cultural claims about 2000s era tech firms need to be handled very carefully. When I joined Google in 2006 I was interested to discover that my team was made up of strongly libertarian types. No leftism was even slightly visible anywhere. One April 1st someone sent around an anonymous email telling the workers to rise up and unionize - it was of course a widely appreciated joke.

Arguably the company's whole story, mission and ethos was strongly repellant to the left - advertising, making lots of money quickly, incredibly stark focus on meritocracy, giving the world free and open access to information (no questions asked), making employees into part owners (i.e. blurring the distinction between capitalists and workers), making it easy for anyone to publish anything they believed to the world etc. Leftists are famous for despising all these things, but libertarians love them, so it maybe wasn't a huge surprise.

So it's quite surprising now to see claims that tech firms are or were naturally left wing. That certainly wasn't true in their most successful and vital periods. The transition from "let's unionize" being an April Fools joke to being an actual thing they say seriously was interesting to watch.

I think turning workers into owners is literally the thing leftists want?

But there's generally a lot of confusion since they want the "public" to own things, but never say which "public", and obviously state ownership and employee ownership of a company are two different things.

I don't think I ever heard socialists promote equity stakes in companies - maybe co-ops is the closest equivalent? As you observe, when they talk about the "workers" they don't actually mean the workers, they mean that all power should be concentrated into a government which then derives legitimacy by claiming to represent the workers.

Channelling Thomas Sowell, the fundamental intuition behind left wing views is that some people are just much better than others (smarter, wiser, more moral, fairer etc) and those are the people who should be put in charge of things, with that re-arrangement of society being justified by theorized oppression. Giving workers an ownership stake in their employers is the opposite: it's the decentralization of power and undermines the whole concept of oppression.

You are confusing communists and socialists.

The traditional liberal approach is simply to tax corporations at a relatively high level. This might explain the economic stagnation after World War II, but it did provide a lot of benefits to most people. The argument back in the 1930s was that this made the government a partner with corporations, aligning their interests while providing money to buy nice things for the country and its workers.

At the turn of the century I was in the valley at Apple and Microsoft as well as startups and never noticed any lefty sentiments. Most engineers were apolitical or libertarian leaning like me and no one was pro-union.

founding
Feb 26, 2023Liked by Noah Smith

You should have been here in the mid-90s, before the dotcom bust. Early Wired magazine culture was full of genius dreamers: we built Hotwired (the first commercial website), created the free software movement which built Apache and most of Linux, and started all kinds of failed and successful companies and ideas. Burning Man really took off after the whole staff of Wired went and then ran a whole magazine about it.

It was a pretty amazing time and we had our share of all night parties and rave camp outs. And while lots of people thought they were going to get rich, more were there to create something amazing. It did give me my first IPO though.

Most of what happened in SF in the early aughts was by those who survived the massive downturn after 2001. I was out of work for 16 of 21 months. By the time things took off again I was too old and settled down to enjoy it but I am glad you did.

It’s nice to see young people recreating the sense of possibility and excitement for their own era in their own way. Cool that you got to see that.

author

Wish I had been there!

I left Apple in the early 90s for a Palo Alto startup that soon moved to SoMa. I did the reverse commute from Mountain View for a few years and enjoyed the lunch hour in South Park and the east bay warehouse parties. I didn’t want to move to SF and when the IPO failed to happen I moved to Japan instead which was a bigger cultural change but better for me.

Feb 26, 2023·edited Feb 26, 2023Liked by Noah Smith

“… rising rents push more of the bohemians out of town every year.“

Moved to SF in 1990, then on to Oakland in ’94. Moved into tech while working as a photographer at an ad agency in ’95. Winkler was one of the first agencies to go onto the web; they paid me more to do care and feeding of that than photography. Exit one creative into tech. Worked at a few startups in ’98, 2000 and 2008. The money was better, by a lot.

My wife is a creative, a metal worker and jeweler. With each downturn she watched her sales go down even though there was always interest in her work. Similar story with most of the creatives in the east bay, where much of the art being created for Burning Man was done.

There was always the story of lots of money; it just only trickled to the creatives. Tech likes art, it just does not want to pay for it. It’s why most of the artists have left the area, including us; we moved to New Mexico in ’21.

author

Yep.

I lived in Silicon Valley in the early naughties, living a very normal life in a fairly normal company which did involve bars and restaurants, but not all night crazy parties in SF. I’m not sure what came out of the VC craziness in the area as I was exempt from all that in my 9-5 job. We did create the iPhone though.

You created the iPhone in the early 90s? I was at Apple then and unless you think of the Newton as an early iPhone that is mistaken.

Feb 26, 2023·edited Feb 26, 2023Liked by Noah Smith

"naughties" here isn't a typo for "nineties"; it's a typo for "noughties" meaning the 2000s

Steve Jobs started the iPhone development in 2005 so even early 2000s is not correct.

You don't understand how technology works. Apple didn't pull the iPhone out of a vacuum. There were people working on stuff that Apple adopted, adapted and turned into a bestselling product.

I was a senior engineer at Apple in the 90s and shipped a best selling product there so definitely know how technology works. If you read Tony Fadell’s excellent book Build then you could learn something also. If you think VCs or other “fairly normal companies” that aren’t Apple developed the iPhone in the early naughtiest it is you who are mistaken.

Great writeup. There’s something a little too efficient about the lifestyle/career networking optimization you’re describing, but I wish them luck in having it all.

Bravo! Now just ask ChatGPT to rewrite it in the style of Tom Wolfe!

Feb 26, 2023Liked by Noah Smith

In the late 1990's I was living and working as a software developer in San Francisco. There were obviously plenty of stories of people getting financial windfalls during the IPO boom of that period, but most of the engineers I knew were first and foremost there because they loved the tech, and because engineering jobs were for the most part stable and paid a good salary - one that would allow for a perfectly fine upper middle class existence in a desirable city somewhere in the Bay Area. From the mid 2000's to the mid 2010's, I was working down in Silicon Valley and I would describe the tech scene in that period as almost aggressively normal. Everybody wore sneakers, jeans and polo shirts, nobody was overly political, and to the extent that there were subcultures they were around activities like road biking, home brewing, and digital photography. Sure, people would talk about what their favorite Thai restaurant was on Castro Street in Mountain View, but there was no snootiness about it. Companies would often host BBQ's on Friday afternoons in the summer, and people would stand around and drink beer from plastic cups. Sometimes one of the home-brewer hobbyists would bring a keg of their latest batch. Aside from the hours of my life I wasted commuting on I-880, I recall that time fondly.

In the mid 2010's I started working in San Francisco again, after a 10+ year hiatus. I was shocked at how the tech scene had changed. I was working for AWS and was deeply involved with the unicorn startups that were popping up like mushrooms, as they were all heavy users of AWS cloud services. Name a 2014-2018 vintage SF based unicorn startup and I probably spent time working with them. Two things I recall noting at that time were how perks were being showered on employees, and how there seemed to be a lot more arrogance and posturing. There were definitely people trying to cultivate a certain image or personal brand, and that was something I just didn't see very much in the prior decade working down on the peninsula. And then there was the money - gobs and gobs of money just flowing to the tech scene. I would read VentureBeat on BART on my way to the office each morning just to see which one of the AWS customers I was working with had newly raised a couple hundred million at a $1B+ valuation.

I left AWS after nearly a decade at the end of last year, and founded a startup with several ex-AWS colleagues. Several of us worked together in the AI/ML service team at AWS - in NLP, no less - so yes, my company is an "AI startup." As the "old guy" on the team I don't spend a whole lot of time in the bars or cafes where this AI subculture dwells. But I like the vibe I get from the startup scene in SF these days. It feels like the excess of the 2010's has been washed out, and the people who are still around are those that really like building stuff. I can see the contours of another "gold rush" forming, this time with AI technology. But right now, we're in that halcyon period where people are building cool stuff out of passion for the tech. I've got no doubt that if you're a young engineer working with AI, that San Francisco probably feels a lot like Hunter S Thompson described it in "Fear and Loathing in Las Vegas." I remember having the same feeling when I was in my early 20's during the internet boom.

author

I wish I had been there to see the 90s! I think the mid-00s still had a lot of that flavor though! They just didn't have the money until the 2010s. And then when the money came back in the 2010s, there were a lot of corporate types and East Coast investment banking refugees and such.

The mid-to-late 1990's version of San Francisco is my favorite version. Even in the late 1990's when the internet bubble was inflating, the city wasn't trying to be the center of the world. If anything, back then the locus of power in the tech industry was San Jose. In the 2010's, San Francisco became the center of the world due to all the money flowing to SF-based unicorns. But unlike Los Angeles or New York City, it's not in San Francisco's nature to play such a role. People point out all the time just how bad problems like homelessness have gotten in San Francisco. But the truth is that SF has always been a place willing to let its blemishes show. There's no doubt that problems like this have gotten worse. But it's also true that a lot more attention is being paid to SF now by the rest of the world than there ever was in the 1990's.

It's an interesting culture but I wonder how likely they are to really make so much money. From what I can tell, AI tech is heading in the direction of being both highly centralized and surprisingly easy to use, which is a recipe for a tiny number of extremely rich people who happen to get first mover advantage in infrastructure (so, maybe OpenAI and a few early startups that build on them) and then a huge long tail of mostly failing startups with hopes and dreams but little profit. Think mobile apps: a small number of huge commercial successes, mostly in gaming, and then a bazillion trivial and clone apps duking it out for little bits of attention. Except that AI doesn't have a direct path to the gaming industry unless RPGs start using LLMs for their NPCs (now there's a pile of TLAs for you!).

The obsession with "safety" is also a major risk factor for these guys, at least it seems obvious from the outside. That obsession has already completely crippled Google and put it firmly in last place, alongside Facebook, which I admit is not a twist I saw coming. I wonder how many AI startups are going to self-destruct through in-fighting. Maybe it's an uneasiness with that possibility that leads to the (highly interesting, unexpected) coolness towards woke politics Noah is reporting.

I spent 3 years at AWS leading a team of ML scientists and engineers that developed bespoke solutions for customers across every major industry vertical. There is no shortage of use cases for ML, and these vary widely across industries, so there's plenty of room for both startups and big companies alike to innovate in this space. Further, while model training and inference - particularly with LLMs - lend themselves to centralization due to the massive compute, network and storage requirements of neural network architectures like GPT-3, industry domain knowledge matters a lot (perhaps more than the model itself), and that works against centralization. Models will need to be fine-tuned or trained from the ground up to get the accuracy required to be useful in most industry use cases. Also, many of these use cases would seem utterly mundane to an outsider but have tremendous value to a business, so the real impact is going to be less about the flashy stuff you're seeing now with ChatGPT and more about efficiency gains in some part of a business process - and there will be a lot of AI startups that build successful businesses around doing just this.

Any use cases you find particularly compelling, no matter how mundane or niche?

I'll bet AI is just the excuse for putting in the appropriate monitoring and looking at the result. It is very hard to do that kind of thing in a corporate setting without running into internal political roadblocks. If you can get the C-suite to stamp it AI, odds are you might be able to push it through.

I think chat is just an attention grabber. What Microsoft will do is add the OpenAI models to all facets of Office/Microsoft 365 (data wrangling in Excel, text outlining and suggestions in Word, diffusion artwork for PowerPoint, meeting summaries in Teams) and recover their investment with more, or more expensive, 365 subscriptions. I got the Bing ChatGPT mode and it's actually useful, not like the NYT guy teasing out the evil Sydney to get clicks.

That sounds plausible re. MS Office. Chat GPT is a fun toy, but I haven’t heard of any significant practical applications beyond cheating on High School essays.

The big practical application I’ve heard from people is if you can benefit from writing little bits of code in various scripting and programming languages to help you do certain things, but don’t actually know how all those languages work. ChatGPT does a great job of writing mostly usable chunks of code, and helping you fix them when you get error messages (though you need to have a bit of understanding of what is going on to catch some more structural issues).
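To make that concrete, here is a hypothetical example of the kind of small glue script the comment describes; the task, data and function name are invented, not taken from the thread:

```python
# The sort of small one-off script people describe asking ChatGPT to
# write: total up a numeric column from some CSV text.
import csv
import io

def total_column(csv_text: str, column: str) -> float:
    """Sum a numeric column from CSV-formatted text."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(float(row[column]) for row in reader)

data = "item,price\nwidget,2.50\ngadget,4.00\n"
print(total_column(data, "price"))  # 6.5
```

As the comment says, the catch is the structural stuff: a snippet like this is easy to get from a chatbot, but knowing whether it actually handles your real data correctly still takes some understanding of what's going on.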

Yes, little bits of code - unfortunately, significant software requires a lot more than that. Mostly a more updated version of code completion that can handle slightly larger chunks and patterns.

Even that might be too optimistic. It might just get incorporated into existing tech megacorps like Google and Microsoft.

Feb 26, 2023Liked by Noah Smith

I lived in Palo Alto (Byron St) in the early 80s, walked to the CalTrain Station through downtown, then rode the express train and Financial District shuttle bus to the 500 block of Sansome, where I worked. It was a very different era. Palo Alto was very quiet and not outrageously overpriced. The professional ranks were eclectic and interests ranged widely.

author

Did you catch that the Radiohead quote was from the song "Palo Alto"?

https://www.youtube.com/watch?v=mrYTsLgCidQ

Feb 26, 2023·edited Feb 26, 2023Liked by Noah Smith

I'm one of the techies I suppose who's part of this AI subculture, but not in Silicon Valley because of life reasons (PhDing at CMU). I feel your article has captured the major cultural undercurrents pretty well and is pretty representative of my friends in industry and academia alike.

I do have a couple questions, maybe you can shed some light.

1. Why are the VCs so bullish on one specific paradigm (the flavour of the season appears to be Generative AI)? I see this sort of concentration as cyclic; Web3 and self-driving were the big flavours earlier. Has the strategy shifted? I think I saw some articles about a16z's moderate-risk strategy lately.

2. What is your take on, and estimate of, the AI hype train vs. actual AI progress in terms of real-life products? ML is certainly already critical in many applications, if more subtly. I certainly believe we are at the cusp of the post-internet age (my personal bet is on vision, ML and robotics), but I'm not certain how increasingly AI-first and automated products will be received by people and society.

author

1) Herd behavior I suppose. LLMs and AI art apps are very cool, and there are lots of obvious potential applications, so maybe they're doing the right thing.

2) Here we exceed my level of expertise, but I think most people I talk to agree on the "product, not a platform" idea, which is probably why OpenAI is partnering with Microsoft.

3. I forgot to ask one more thing. How critical do you think is to stay in the in-group of Silicon Valley network to be a part of this "supposed" success era in the next few years? A lot of folks like me are constrained to be out of SV right now, due to well, life. :)

author

I don't know! It's easier to get funded if you're in SV, but if you're an employee rather than a founder, that's less of a constraint...

Feb 26, 2023·edited Feb 26, 2023Liked by Noah Smith

Interesting, would love to know your thoughts on founding a company in the US (for immigrants or otherwise) in terms of staying abreast of the general social and cultural dynamics and navigating the bureaucratic processes. Hence I felt the need to ask about being in SV and part of the cultural in-group. For instance, I'm generally bearish about some developing countries due to the formation of recurring kinds of nexuses. Is that something that plagues the US?

Now that I think about it, it's more of an article idea! Feel free to ignore or respond. :)

author

Not my area of expertise but I have some friends working on exactly this.

The answer to both of your questions depends on your confidence (a) that this is a trend and not a fad, and (b) that Generative AI in particular is going to be a horizontally disruptive technology that moves quickly.

The bullishness on Generative is because it has implications at every level of the stack, from the need for different hardware up to the ability for average consumers to make funny little new apps. If you think Generative is similar to expert systems, or the last wave of ML, then it may be disruptive for a few use cases but it doesn't deserve the hype it is getting.

But there are times where the technology itself isn't just applied to problems, but starts to shift market dynamics themselves (think beginning of internet, beginning of mobile). That is when every month shifts the rules of the game, where building trust with the right people matters, where learning is both nuanced and rapid. In short, it's those moments where silicon valley really shines.

Feb 26, 2023·edited Feb 26, 2023

Thank you for this perspective! I think this makes sense. I do think the answer wrt Generative Models is that they are bigger than the "expert systems" craze, but they're also not going to change how we view farming or mining automation, for instance (that's my view right now; I might be proven wrong in 5 years). But in general, learned systems (instead of explicit math modelling) are here to disrupt the way we think about engineering and science in many ways. Generative models are one such approach with pros/cons, and they are very good for certain kinds of applications.

I do agree that we are witnessing a general paradigm shift, and hence all the excitement from Silicon Valley is really amazing to see. :)

These group houses remind me of the party scene in "All the Birds in the Sky", which more than anything, reminds me how uncomfortably accurate Anders' depiction of SF was (gets at the weird intersection between bohemian culture and tech culture), but also that this manifestation of "AI" specific houses isn't actually a divergence from the past. Scott Alexander's "Every Bay Area House Party" series (https://astralcodexten.substack.com/p/every-bay-area-house-party) is also wonderful

author

Yes, All the Birds in the Sky is a GREAT depiction of the 2000s tech scene and how it tried to meld with bohemian culture (probably in the 90s too, but that's before my time). Now, with the true bohemians priced out of the city and the tech shifting from internet to AI, the culture is shifting too, but lots of what I was talking about in this post can be traced, however tenuously, to that earlier era...

Feb 26, 2023Liked by Noah Smith

I live in Hayes Valley and on paper am part of this culture. It’s depressing how completely cutoff the tech/finance world is from the actual city of san francisco. An entire corporate playground has been built on top of a formerly cool city. What’s left is rich people paying for these “bohemian” experiences and a working class struggling to survive.

All the actual cool people live in Oakland now.

Cool people can't afford SF. It's like SoHo and Tribeca in NYC. You could get cheap space if you didn't mind a machine shop or the welfare department next door.

Feb 26, 2023·edited Feb 26, 2023Liked by Noah Smith

Noah can you write an article about AI and interest rates:

https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or

It's my belief that

1) AI will be a very powerful technology compared to competitors

2) Well-developed economies grow at a slower rate as they move increasingly into a regime of diminishing returns.

3) So no giant economic growth is coming but there may still be a giant transformation of our society. (I.e. Markets are broadly right on this one.)

author

Basically, the mathematical tool used to predict growth from interest rates here (a consumption Euler equation) is known to fail at short horizons, and its long-term horizon correlation with growth is just being driven by the Volcker rate hikes and subsequent 90s growth boom. That being said, I agree that markets are not expecting high rates of growth, but it's also probably true that they rarely predict them correctly. As for whether or not AI will generate lots of growth, that's hard for anyone to say, which is why I don't put too much stock in the market predictions!
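For readers unfamiliar with the tool being criticized: with standard CRRA utility, the steady-state consumption Euler equation reduces to the textbook Ramsey rule linking real interest rates to expected consumption growth (standard notation below, not taken from the linked post):

```latex
% Steady-state consumption Euler equation (Ramsey rule), CRRA utility:
%   r      = real interest rate
%   \rho   = rate of pure time preference
%   \gamma = coefficient of relative risk aversion (inverse of the EIS)
%   g      = growth rate of consumption
r = \rho + \gamma g
```

The linked post reads this in reverse: low observed r is taken to mean markets expect low g. The point above is that this mapping is known to fail empirically, especially at short horizons.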
