Reflections on Palantir


Published: October 15, 2024

Palantir is hot now. The company recently joined the S&P 500. The stock is on a tear, and the company is nearing a $100bn market cap. VCs chase ex-Palantir founders, asking to invest.

For long-time employees and alumni of the company, this feels deeply weird. During the 2016-2020 era especially, telling people you worked at Palantir was unpopular. The company was seen as spy tech, NSA surveillance, or worse. There were regular protests outside the office. Even among people who didn’t have a problem with it morally, the company was dismissed as a consulting company masquerading as software, or, at best, a sophisticated form of talent arbitrage.

I left last year, but never wrote publicly about what I learned there. There’s also just a lot about the company people don’t understand. So this is my effort to explain some of that, as someone who worked there for eight years.

(Note: I’m writing this in my personal capacity, and don’t have a formal relationship with the company anymore. I’m long $PLTR.)

1. Why I joined

I joined in summer 2015, initially in the newly-opened London office, before moving to Silicon Valley, and finally DC – as a forward deployed engineer. For context, the company was around 1500 people at the time; it had offices in Palo Alto (HQ), NYC, London, and a few other places. (It’s now 4000 or so people, and headquartered in Denver.)

Why did I join?

First, I wanted to work in ‘difficult’ industries on real, meaningful problems. My area of interest – for personal reasons – was healthcare and bio, where the company had a nascent presence. The company was talking about working in healthcare, aerospace, manufacturing, cybersecurity, and other industries that I felt were very important but that most people were not, at the time, working on. The hot things then were social networks (Facebook, LinkedIn, Quora, etc.) and miscellaneous consumer apps (Dropbox, Uber, Airbnb), but very few companies were tackling what felt like the real, thorny parts of the economy. If you wanted to work on these ‘harder’ areas of the economy but also wanted a Silicon Valley work culture, Palantir was basically your only option for a while.

My goal was to start a company, but I wanted (1) to go deep in one of these industries for a while first and learn real things about it; (2) to work for a US company and get a green card that way. Palantir offered both. That made it an easy choice.

Second, talent density. I talked to some of the early people who started the healthcare vertical (Nick Perry, Lekan Wang, and Andrew Girvin) and was extremely impressed. I then interviewed with a bunch of the early business operations and strategy folks and came away even more impressed. These were seriously intense, competitive people who wanted to win, true believers; weird, fascinating people who read philosophy in their spare time, went on weird diets, and did 100-mile bike rides for fun. This, it turned out, was an inheritance from the PayPal mafia. Yishan Wong, who was early at PayPal, wrote about the importance of intensity:
"In general, as I begin to survey more startups, I find that the talent level at PayPal is not uncommon for a Silicon Valley startup, but the differentiating factor may have been the level of intensity from the top: both Peter Thiel and Max Levchin were extremely intense people - hyper-competitive, hard-working, and unwilling to accept defeat. I think this sort of leadership is what pushes the "standard" talented team to be able to do great things and, subsequently, contributes to producing a wellspring of later achievements."
Palantir was an unusually weird place, too. I remember the first time I talked to Stephen Cohen: he had the A/C in his office set at 60, several weird-looking devices for minimizing the CO2 content in the room, and a giant pile of ice in a cup. Throughout the conversation, he kept chewing pieces of ice. (Apparently there are cognitive benefits to this.)

I also interviewed with the CEO, Alex Karp, and talked to other members of the leadership team. I probably don’t need to convince you that Karp is weird – just watch an interview with him. I can’t say what Karp and I talked about, but he gives a good flavor of his style in a 2012 interview:
I like to meet candidates with no data about them: no résumé, no preliminary discussions or job description, just the candidate and me in a room. I ask a fairly random question, one that is orthogonal to anything they would be doing at Palantir. I then watch how they disaggregate the question, if they appreciate how many different ways there are to see the same thing. I like to keep interviews short, about 10 minutes. Otherwise, people move into their learned responses and you don’t get a sense of who they really are.
My interviews were often not about work or software at all – in one of them, we just spent an hour talking about Wittgenstein. Note that both Peter Thiel and Alex Karp were philosophy grads. Thiel’s lecture notes had come out not long before, and they discussed Shakespeare, Tolstoy, Girard (then unknown, now a cliché), and more.

The combo of intellectual grandiosity and intense competitiveness was a perfect fit for me. It’s still hard to find today, in fact – many people have copied the ‘hardcore’ working culture and the ‘this is the Marines’ vibe, but few have the intellectual atmosphere, the sense of being involved in a rich set of ideas. This is hard to LARP – your founders and early employees have to be genuinely interesting intellectual thinkers. The main companies that come to mind which have nailed this combination today are OpenAI and Anthropic. It’s no surprise they’re talent magnets. [1]

2. Forward deployed

When I joined, Palantir’s engineers were divided into two types:

  1. Engineers who work with customers, sometimes known as FDEs, forward deployed engineers.
  2. Engineers who work on the core product team (product development - PD), and rarely go visit customers.

FDEs were typically expected to ‘go onsite’ to the customer’s offices and work from there 3-4 days per week, which meant a ton of travel. This is, and was, highly unusual for a Silicon Valley company.

There’s a lot to unpack about this model, but the key idea is that you gain intricate knowledge of business processes in difficult industries (manufacturing, healthcare, intel, aerospace, etc.) and then use that knowledge to design software that actually solves the problem. The PD engineers then ‘productize’ what the FDEs build, and – more generally – build software that provides leverage for the FDEs to do their work better and faster. [2]

This is how much of the Foundry product took initial shape: FDEs went to customer sites, had to do a bunch of cruft work manually, and PD engineers built tools that automated the cruft work. Need to bring in data from SAP or AWS? Here’s Magritte (a data ingestion tool). Need to visualize data? Here’s Contour (a point and click visualization tool). Need to spin up a quick web app? Here’s Workshop (a Retool-like UI for making webapps). Eventually, you had a damn good set of tools clustered around the loose theme of ‘integrate data and make it useful somehow’.

At the time, it was seen as a radical step to give customers access to these tools — they weren’t in a state for that — but now this drives 50%+ of the company’s revenue, and it’s called Foundry. Viewed this way, Palantir pulled off a rare services company → product company pivot: in 2016, descriptions of it as a Silicon Valley services company were not totally off the mark, but in 2024 they are deeply off the mark, because the company successfully built an enterprise data platform using the lessons from those early years, and it shows in the gross margins - 80% gross margins in 2023. These are software margins. Compare to Accenture: 32%.

Tyler Cowen has a wonderful saying, ‘context is that which is scarce’, and you could say it’s the foundational insight of this model. Going onsite to your customers – the startup guru Steve Blank calls this “getting out of the building” – means you capture the tacit knowledge of how they work, not just the flattened ‘list of requirements’ model that enterprise software typically relies on. The company believed this to a hilarious degree: it was routine to get a call from someone and have to book a first-thing-next-morning flight to somewhere extremely random; “get on a plane first, ask questions later” was the cultural bias. This resulted in out-of-control travel spend for a long time — many of us ended up with United 1K status or similar — but it also meant an intense, decade-long learning cycle that eventually paid off.

My first real customer engagement was with Airbus, the airplane manufacturer based in France. I moved out to Toulouse for a year and worked in the factory alongside the manufacturing people four days a week to help build our software there.

My first month in Toulouse, I couldn’t fly out of the city because the air traffic controllers were on strike every weekend. Welcome to France. (I jest - France is great. Also, Airbus planes are magnificent. It’s a truly engineering-centric company. The CEO is always a trained aeronautical engineer, not some MBA. Unlike… anyway.)

The CEO told us his biggest problem was scaling up A350 manufacturing, so we ended up building software to directly tackle that problem. I sometimes describe it as “Asana, but for building planes”. You took disparate sources of data — work orders, missing parts, quality issues (“non-conformities”) — and put them in a nice interface, with the ability to check off work and see what other teams are doing, where the parts are, what the schedule is, and so on. We also gave people the ability to search (including fuzzy/semantic search) previous quality issues and see how they were addressed. These are all sort of basic software things, but you’ve seen how crappy enterprise software can be – just deploying these ‘best practice’ UIs to the real world is insanely powerful. This ended up helping to drive the A350 manufacturing surge, successfully 4x’ing the pace of manufacturing while keeping Airbus’s high standards of quality.

This made the software hard to describe concisely - it wasn’t just a database or a spreadsheet, it was an end-to-end solution to that specific problem, and to hell with generalizability. Your job was to solve the problem, and not worry about overfitting; PD’s job was to take whatever you’d built and generalize it, with the goal of selling it elsewhere.
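To make the search piece concrete, here’s a toy sketch of fuzzy matching over past quality issues. The data is made up, and Python’s standard-library difflib stands in for whatever we actually used for fuzzy/semantic search – illustrative only, not the real system:

```python
import difflib

# Hypothetical past non-conformities and how they were resolved.
past_issues = {
    "bracket misaligned on frame 42": "re-drilled per engineering disposition",
    "missing fasteners in wing root join": "parts expedited, torque re-checked",
    "paint defect on fuselage panel": "panel stripped and re-coated",
}

def find_similar(new_issue: str, n: int = 3) -> list[tuple[str, str]]:
    """Return the closest past issues, with how each was resolved."""
    matches = difflib.get_close_matches(new_issue, past_issues, n=n, cutoff=0.3)
    return [(m, past_issues[m]) for m in matches]

# A mechanic describing a new issue gets similar past cases and their fixes.
print(find_similar("misaligned bracket near frame 40"))
```

The point isn’t the matching algorithm – it’s that surfacing “here’s how we fixed this last time” inside the tool people already use is most of the value.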


The A350 final assembly line, in Toulouse. I hung out here most days. It was awe-inspiring.

FDEs tend to write code that gets the job done fast, which usually means – politely – technical debt and hacky workarounds. PD engineers write software that scales cleanly, works for multiple use cases, and doesn’t break. One of the key ‘secrets’ of the company is that generating deep, sustained enterprise value requires both. FDEs tend to have high pain tolerance, the social and political skills needed to embed themselves deep in a foreign company and gain customer trust, and high velocity – you need to build something that delivers a kernel of value fast so that customers realize you’re the real deal. It helped that customers had hilariously low expectations of most software contractors, who were typically implementers of SAP or similar software and worked on years-long, ‘waterfall’-style timescales. So when a ragtag team of twenty-something kids showed up to the customer site and built real software that people could use within a week or two, people noticed.

This two-pronged model made for a powerful engine. Customer teams were often small (4-5 people) and operated fast and autonomously; there were many of them, all learning fast, and the core product team’s job was to take those learnings and build the main platform.

When we were allowed to work within an organization, this tended to work very well. The obstacles were mostly political. Every time you see the government hand another $110 million contract to Deloitte for a website that doesn’t work, or a healthcare.gov-style debacle, or SFUSD spend $40 million on a payroll system that – again – doesn’t work, you are seeing politics beat substance. See SpaceX vs. NASA as another example.

The world needs more companies like SpaceX and Palantir that differentiate on execution – achieving the outcome – not on playing political games or building narrow point solutions that don’t hit the goal.

3. Secrets

Another key thing FDEs did was data integration, a term that puts most people to sleep. This was (and still is) the core of what the company does, and its importance was underrated by most observers for years. In fact, it’s only now with the advent of AI that people are starting to realize the importance of having clean, curated, easy-to-access data for the enterprise. (See: the ‘it’ in AI models is the dataset).

In simple terms, ‘data integration’ means (a) gaining access to enterprise data, which usually means negotiating with ‘data owners’ in an organization; (b) cleaning it, and sometimes transforming it, so that it’s usable; and (c) putting it somewhere everyone can access it. Much of the foundational software in Palantir’s main platform (Foundry) is just tooling to make this task easier and faster.
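As a concrete (if toy) illustration of steps (b) and (c) – the file name, column mappings, and use of pandas/SQLite here are all hypothetical stand-ins for the real tooling:

```python
import sqlite3
import pandas as pd

# (b) Clean and transform: every source system has its own quirks.
# Column names, date formats, and IDs rarely line up across systems.
orders = pd.read_excel("work_orders_export.xlsx")  # hypothetical source file
orders = orders.rename(columns={"WO_NUM": "order_id", "DT_CREATED": "created_at"})
orders["created_at"] = pd.to_datetime(orders["created_at"], errors="coerce")
orders = orders.dropna(subset=["order_id"]).drop_duplicates(subset=["order_id"])

# (c) Put it somewhere everyone can access it: one queryable store,
# instead of a thousand Excel files scattered across shared drives.
conn = sqlite3.connect("integrated.db")
orders.to_sql("work_orders", conn, if_exists="replace", index=False)
```

Step (a) – getting access in the first place – is the part no code can solve.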

Why is data integration so hard? The data is often in formats that aren’t easily analyzed by computers – PDFs, notebooks, Excel files (my god, so many Excel files) and so on. But often what really gets in the way is organizational politics: a team or group controls a key data source, and justifies its existence in the corporation by gatekeeping that data source (and, often, by providing analyses of it). [3] These politics can be a formidable obstacle, and in some cases led to hilarious outcomes – you’d have a company buying an 8-12 week pilot, and we’d spend all 8-12 weeks just getting data access, and the final week scrambling to have something to demo.

The other ‘secret’ Palantir figured out early is that data access tussles were partly about genuine data security concerns, and could be alleviated by building security controls into the data integration layer of the platform – at all levels. This meant role-based access controls, row-level policies, security markings, audit trails, and a ton of other data security features that other companies are still catching up to. Because of these features, implementing Palantir often made companies’ data more secure, not less. [4]
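A minimal sketch of what ‘row-level policies plus audit trails’ means in code – an illustrative toy, not Palantir’s actual implementation:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(format="%(name)s: %(message)s", level=logging.INFO)
audit = logging.getLogger("audit")

@dataclass
class User:
    name: str
    clearances: set[str]  # e.g. {"finance", "hr"}

def query_rows(user: User, rows: list[dict]) -> list[dict]:
    """Return only rows whose marking the user is cleared for, and log the access."""
    visible = [r for r in rows if r["marking"] in user.clearances]
    audit.info("user=%s requested %d rows, saw %d", user.name, len(rows), len(visible))
    return visible

rows = [
    {"id": 1, "marking": "finance", "value": 1200},
    {"id": 2, "marking": "hr", "value": 88},
]
print(query_rows(User("alice", {"finance"}), rows))  # alice sees only the finance row
```

The real thing enforces this at every layer of the platform rather than in application code, but the shape is the same: every row carries a marking, every access is checked against it, and every check leaves a trail.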

4. Notes on culture

The overall ‘vibe’ of the company was more messianic cult than normal software company. But importantly, criticism seemed not just tolerated but welcomed – one person showed me an email chain where an entry-level software engineer was having an open, contentious argument with a Director of the company with the entire company (around a thousand people) cc’d. As a rationalist-brained philosophy graduate, this particular point was deeply important to me – I wasn’t interested in joining an uncritical cult. But a cult of skeptical people who cared deeply and wanted to argue about where the world was going and how software fit into it – existentially – that was interesting to me. [5]

I’m not sure if they still do this, but at the time, when you joined, they sent you a copy of Impro, The Looming Tower (the 9/11 book), Interviewing Users, and Getting Things Done. I also got an early PDF version of what became Ray Dalio’s Principles. This set the tone. The Looming Tower was obvious enough – the company was founded partly as a response to 9/11 and what Peter felt were the inevitable violations of civil liberties that would follow, and the context was valuable. But why Impro?

Being a successful FDE required an unusual sensitivity to social context – what you really had to do was partner with your corporate (or government) counterparts at the highest level and gain their trust, which often required playing political games. Impro is popular with nerds partly because it breaks down social behavior mechanistically. The vocabulary of the company was saturated with Impro-isms – ‘casting’ is an example. Johnstone discusses how the same actor can play ‘high status’ or ‘low status’ just by changing parts of their physical behavior – for example, keeping your head still while talking is high status, whereas moving your head side to side a lot is low status. Standing tall with your hands showing is high status, slouching with your hands in your pocket is low status. And so on. If you didn’t know all this, you were unlikely to succeed in a customer environment. Which meant you were unlikely to integrate customer data or get people to use your software. Which meant failure.

This is one reason why former FDEs tend to be great founders. (There are usually more ex-Palantir founders than ex-Googlers in each YC batch, despite there being ~50x more Google employees.) Good founders have an instinct for reading rooms, group dynamics, and power. This isn’t usually talked about, but it’s critical: founding a successful company means taking part in negotiation after negotiation after negotiation, and winning (on net). Hiring, sales, and fundraising are all negotiations at their core. It’s hard to be great at negotiating without these instincts for human behavior. This is something Palantir teaches FDEs, and it’s hard to learn at other Valley companies.

Another is that FDEs have to be good at understanding things. Your effectiveness directly correlates with how quickly you can learn to speak the customer’s language and really drill down into how their business works. If you’re working with hospitals, you quickly learn to talk about capacity management and patient throughput rather than just saying “help you improve your healthcare”. Same with drug discovery, health insurance, informatics, cancer immunotherapy, and so on; all have specialized vocabularies, and the people who do well tend to be great at learning them fast.

One of my favorite insights from Tyler Cowen’s book ‘Talent’ is that the most talented people tend to develop their own vocabularies and memes, and these serve as entry points to a whole intellectual world constructed by that person. Tyler himself is of course a great example of this. Any MR reader can name 10+ Tylerisms instantly – ‘model this’, ‘context is that which is scarce’, ‘solve for the equilibrium’, ‘the great stagnation’ are all examples. You can find others who are great at this. Thiel is one. Elon is another (“multiplanetary species”, “preserving the light of consciousness”, etc. are all memes). Trump, Yudkowsky, gwern, SSC, Paul Graham – all of them regularly coin memes. It turns out that this is a good proxy for impact.

This insight goes for companies, too, and Palantir had its own, vast set of terms, some of which are obscure enough that “what does Palantir actually do?” became a meme online. ‘Ontology’ is an old one, but then there is ‘impl’, ‘artist’s colony’, ‘compounding’, ‘the 36 chambers’, ‘dots’, ‘metabolizing pain’, ‘gamma radiation’, and so on. The point isn’t to explain all of these terms, each of which compresses a whole set of rich insights; it’s that when you’re looking for companies to join, you could do worse than look for a rich internal language or vocabulary that helps you think about things in a more interesting way.

When Palantir’s name comes up, most people think of Peter Thiel. But many of these terms came from early employees, especially Shyam Sankar, who’s now the President of the company. Still, Peter is deeply influential in the company culture, even though he wasn’t operationally involved with the company at all during the time I was there. This document, written by Joe Lonsdale, was previously internal but was made public at some point, and it gives a flavor of the company’s cultural principles.

One of the things that (I think) came from Peter was the idea of not giving people titles. When I was there, everyone had the “forward deployed engineer” title, more or less, and apart from that there were five or six Directors and the CEO. Occasionally someone would make up a different title (one guy I know called himself “Head of Special Situations”, which I thought was hilarious) but these never really caught on. It’s straightforward to trace this back to Peter’s Girardian beliefs: if you create titles, people start coveting them, which breeds competitive politics inside the company and undermines internal unity. Better to just give everyone the same title and have them focus on the goal instead.

There are plenty of good critiques of the ‘flat hierarchy’ stance – The Tyranny of Structurelessness is a great one – and it largely seems to have fallen out of fashion in modern startups, where you quickly get a CEO, a COO, VPs, Founding Engineers, and so on. But my experience is that it worked well at Palantir. Some people were more influential than others, but the influence was usually based on some impressive accomplishment, and most importantly nobody could tell anyone else what to do. So it didn’t matter if somebody influential thought your idea was dumb; you could ignore them and go build something if you thought it was the right thing to do. On top of that, the culture valorized such people: stories were told of some engineer ignoring a Director and building something that ended up being a critical piece of infrastructure, and this was held up as an example to imitate.

The cost of this was that the company often felt like it had no clear strategy or direction – more a Petri dish of smart people building little fiefdoms and going off in random directions. But it was incredibly generative. It’s underrated just how many novel UI concepts and ideas came out of that company. Only some of these now have non-Palantir equivalents – Hex, Retool, and Airflow, for example, all have components that were first developed at Palantir. The company’s doing the same for AI now – the tooling for deploying LLMs at large enterprises is powerful.

The ‘no titles’ thing also meant that people came in and out of fashion very quickly inside the company. Because everyone had the same title, you had to gauge influence through other means – things like “who seems really tight with this Director right now” or “who is leading this product initiative which seems important”, not “this person is the VP of so-and-so”. The result was a sort of hero-shithead rollercoaster at scale – somebody would be very influential for a while, then mysteriously disappear and not be working on anything visible for months, and you wouldn’t ever be totally sure what happened.

5. Bat-signals

Another thing I can trace back to Peter is the idea of talent bat-signals. Having started my own company now (in stealth for the moment), I appreciate this a lot more: recruiting good people is hard, and you need a differentiated source of talent. If you’re just competing against Facebook/Google for the same set of Stanford CS grads every year, you’re going to lose. That means you need (a) a pool of talent that is interested in joining you in particular, over other companies, and (b) a way of reaching them at scale. Palantir had several differentiated sources of recruiting alpha.

First, there were all the people who were pro defense/intelligence work back when that wasn’t fashionable, which selected for, e.g., smart engineers from the Midwest or red states more than usual, and also plenty of smart ex-army, ex-CIA/NSA types who wanted to serve the USA but also saw the appeal in working for a Silicon Valley company. My first day at the company, I was at my team’s internal onboarding with another guy, who looked a bit older than me. I asked him what he’d done before Palantir. With a deadpan expression, he looked me in the eye and said “I worked at the agency for 15 years”. I was then introduced to my first lead, who was a former SWAT cop in Ohio (!) and an Army vet.

There were lots of these people, many extremely talented, and they mostly weren’t joining Google. Palantir was the only real ‘beacon’ for these types, and the company was loud about supporting the military, being patriotic, and so on, when that was deeply unfashionable. That set up a highly effective, unique bat-signal. (Now there’s Anduril, and a plethora of defense and manufacturing startups.) [6]

Second, you had to be weird to want to join the company, at least after the initial hype wave died down (and especially during the Trump years, when the company was a pariah). Partly this was the aggressive ‘mission focus’ branding, back when that was uncommon; the company was also loud about the fact that people worked long hours, were paid below market, and had to travel a lot. Meanwhile, we were being kicked out of Silicon Valley job fairs for working with the government. All of this selected for a certain type of person: someone who can think for themselves and doesn’t over-index on a bad news story.

6. Morality

The morality question is a fascinating one. The company is unabashedly pro-West, a stance I mostly agree with – a world more CCP-aligned or Russia-aligned seems like a bad one to me, and that’s the choice that’s on the table. [7] It’s easy to critique free countries when you live in one, harder when you’ve experienced the alternative (as I have - I spent a few childhood years in a repressive country). So I had no problem with the company helping the military, even when I disagreed with some of the things the military was doing.

But doesn’t the military sometimes do bad things? Of course – I was opposed to the Iraq war. This gets to the crux of the matter: working at the company was neither 100% morally good — because sometimes we’d be helping agencies whose goals I disagreed with — nor 100% bad: the government does a lot of good things, and helping it do them more efficiently by providing software that doesn’t suck is a noble thing. One way of clarifying this is to break down the company’s work into three buckets – these categories aren’t perfect, but bear with me:

  1. Morally neutral. Normal corporate work, e.g. FedEx, CVS, finance companies, tech companies, and so on. Some people might have a problem with it, but on the whole people feel fine about these things.
  2. Unambiguously good. For example, anti-pandemic response with the CDC; anti-child pornography work with NCMEC; and so on. Most people would agree these are good things to work on.
  3. Grey areas. By this I mean ‘involving morally thorny, difficult decisions’: examples include health insurance, immigration enforcement, oil companies, the military, spy agencies, police/crime, and so on.

Every engineer faces a choice: you can work on things like Google search or the Facebook news feed, all of which seem like marginally good things and basically fall into category 1. You can also go work on category 2 things like GiveDirectly or OpenPhilanthropy or whatever.

The critical case against Palantir seemed to be something like “you shouldn’t work on category 3 things, because sometimes this involves making morally bad decisions”. An example was immigration enforcement during 2016-2020, aspects of which many people were uncomfortable with.

But it seems to me that ignoring category 3 entirely, and just disengaging with it, is also an abdication of responsibility. Institutions in category 3 need to exist. The USA is defended by people with guns. The police have to enforce the law, and – in my experience – even people who are morally uncomfortable with some aspects of policing are quick to call the police if their own home has been robbed. Oil companies have to provide energy. Health insurers have to make difficult decisions all the time. Yes, there are unsavory aspects to all of these things. But do we just disengage from all of these institutions entirely, and let them sort themselves out?

I don’t believe there is a clear answer to whether you should work with category 3 customers; it’s a case by case thing. Palantir’s answer to this is something like “we will work with most category 3 organizations, unless they’re clearly bad, and we’ll trust the democratic process to get them trending in a good direction over time”. Thus:

  • On the ICE question, they disengaged from ERO (Enforcement and Removal Operations) during the Trump era, while continuing to work with HSI (Homeland Security Investigations).
  • They did work with most other category 3 organizations, on the argument that they’re mostly doing good in the world, even though it’s easy to point to bad things they did as well.
    • I can’t speak to specific details here, but Palantir software is partly responsible for stopping multiple terror attacks. I believe this fact alone vindicates this stance.

This is an uncomfortable stance for many, precisely because you’re not guaranteed to be doing 100% good at all times. You’re at the mercy of history, in some ways, and you’re betting that (a) more good is being done than bad, and (b) being in the room is better than not.

This was good enough for me. Others preferred to go elsewhere.

The danger of this stance, of course, is that it becomes a fully general argument for doing whatever the power structure wants. You are just amplifying existing processes. This is where the ‘case by case’ comes in: there’s no general answer; you have to be specific. For my own part, I spent most of my time there working on healthcare and bio stuff, and I feel good about my contributions. I’m betting the people who stopped the terror attacks feel good about theirs, too. Or the people who distributed medicines during the pandemic.

Even though the tide has shifted and working on these ‘thorny’ areas is now trendy, these remain relevant questions for technologists. AI is a good example – many people are uncomfortable with some of the consequences of deploying AI. Maybe AI gets used for hacking; maybe deepfakes make the world worse in all sorts of ways; maybe it causes job losses. But there are also major benefits to AI (Dario Amodei articulates some of these well in a recent essay).

As with Palantir, working on AI probably isn’t 100% morally good, nor is it 100% evil. Not engaging with it – or calling for a pause/stop, which is a fantasy – is unlikely to be the best stance. Even if you don’t work at OpenAI or Anthropic, if you’re someone who could plausibly work on AI-related issues, you probably want to do so in some way. There are easy cases: build evals, work on alignment, work on societal resilience. But my claim here is that the grey area is worth engaging with too: work on government AI policy. Deploy AI into areas like healthcare. Sure, it’ll be difficult. Plunge in. [8]

When I think about the most influential people in AI today, they are almost all people in the room - whether at an AI lab, in government, or at an influential think tank. I’d rather be one of those than one of the pontificators. Sure, it’ll involve difficult decisions. But it’s better to be in the room when things happen, even if you later have to leave and sound the alarm.

7. What next?

Am I bullish on the company still? The big productivity gains of this AI cycle are going to come when AI starts providing leverage to the large companies and businesses of this era - in industries like manufacturing, defense, logistics, healthcare and more. Palantir has spent a decade working with these companies. AI agents will eventually drive many core business workflows, and these agents will rely on read/write access to critical business data. Spending a decade integrating enterprise data is the critical foundation for deploying AI to the enterprise. The opportunity is massive.

So yes, I’m bullish.

As for me, I’m carrying out my long-awaited master plan and starting a company next. Yes, there will be a government component to it. The team is great, and yes we’re hiring. We even talk about Wittgenstein sometimes. 

Thanks to Rohit Krishnan, Tyler Cowen, Samir Unni, Sebastian Caliri, Mark Bissell, and Vipul Shekhawat for their feedback on this post.

--

[1] Both OpenAI and Palantir required backing by rich people with deep belief and a willingness to fund them for years without any obvious breakthroughs (Elon/YC Research and Peter Thiel, respectively). Palantir floundered for years, barely getting any real traction in the gov space, and doing the opposite of the ‘lean startup’ thing; OpenAI spent several years being outdone (at least, hype-wise) by DeepMind before language models came along. As Sam Altman pointed out:
“OpenAI went against all of the YC advice,” Altman told Stripe cofounder and fellow billionaire John Collison.

He rattled off the ways: “It took us four and half years to launch a product. We’re going to be the most capital-intensive startup in Silicon Valley history. We were building a technology without any idea of who our customers were going to be or what they were going to use it for.”

On Saturday, Altman tweeted: "chatgpt has no social features or built-in sharing, you have to sign up before you can use it, no inherent viral loop, etc. seriously questioning the years of advice i gave to startups."

There’s something to this correlation: by making the company about something other than making money (civil liberties; AI god), you attract true believers from the start, who in turn create the highly generative intellectual culture that persists once you eventually find success.

It’s hard to replicate, though – you need a visionary billionaire and an overlooked sector of the economy. AI/ML was not hot in 2015; govtech was not hot in 2003.

[2] Ted Mabrey’s essay on the FDE model is good: https://tedmabrey.substack.com/p/sorry-that-isnt-an-fde

[3] Sarah Constantin – also an ex-Palantirian – goes into greater detail on this point in her great essay: https://sarahconstantin.substack.com/p/the-great-data-integration-schlep

[4] One side note: the company was often cast as a ‘data company’ in the press, or worse, a ‘data mining’ company or similar. As far as I can tell, this was a simple misunderstanding on the press’s part. Palantir does data integration for companies, but the data is owned by the companies – not Palantir. “Mining” data usually means using somebody else’s data for your own profit, or selling it. Palantir doesn’t do that – customer data stays with the customer.

[5] As Byrne Hobart notes in his deeply perceptive piece about the company, “Cult” is just a euphemism for “ability to pay below-market salaries and get above-average worker retention.” This is also fair – the company paid below-market salaries, and it was common to stick around for 5+ years. That said, most early employees did very well, thanks to the performance of the stock. But it was not obvious that we would do well; most of us had mentally written off the value of our equity, especially during the toughest years. I vividly remember one of those ‘explaining the value of your equity’ pamphlets showing what the equity would be worth if the company were valued at $100bn, and a group of us laughing at the hubris of that. The company is, as of writing, at $97.4 billion.

[6] By the way, the company wasn’t some edgelord right-wing anti-woke haven, even back then. Yes, there were people at all ends of the ideological spectrum, but by and large I remember the vast majority of my colleagues being normie centrists.

[7] Most activist types are, in my view, deluded about the degree to which we do actually need a strong military. I wonder how many of them revised their views after Russia’s invasion of Ukraine (and indeed, Palantir played a critical role in Ukraine’s response). Drones alone are a frightening new development in international affairs that most people have not sufficiently updated on.

[8] Paul Christiano is a good example of this on the AI safety side – he went into government and now heads AI safety at the US AI Safety Institute.