Cliff Asness manages $130 billion and has spent his entire career arguing that quantitative human judgment - applied with sufficient rigor, sufficient data, sufficient mathematical elegance - creates alpha. He is, by any measure, one of the most sophisticated pattern-recognition machines walking around on two legs. Last year he conceded, with the air of a man confessing to an affair, that AI is "annoyingly better" than him at making investment factor calls. Not at cleaning data or running backtests, but at the actual deciding. AQR now runs a fifth of its flagship trading signals through machine learning.

Bridgewater - the world's largest hedge fund, the one with the cult-like devotion to "radical transparency" - launched a $2 billion fund with AI as the primary decision-maker, not the assistant, and Pure Alpha returned 34% in 2025. Dan Loeb at Third Point restructured his entire research operation around AI and put it simply: "You'll either be a beneficiary of AI or AI roadkill." No hedging, no "exciting augmentation tool" corporate-speak. Roadkill.

I start with hedge fund managers for a specific reason: these people don't do optimism. Their survival depends on seeing reality before everyone else does, and then betting large sums of money on what they see. When they start talking like this, it means something different than when a McKinsey partner says the same words over a $47 salmon at Davos.

And the numbers are ugly for the humans. Hedge funds had their best year since 2009, up 12.6%, and the industry crossed $5 trillion in total assets. But the split is the story to pay attention to: AI-first funds averaged 12-15% returns in 2025 versus 8-10% for non-AI peers. One AI-driven strategy called "Situational Awareness" returned 47% in the first half of 2025 alone. More than 35% of new fund launches last year branded themselves as AI-driven. Over 70% of global hedge funds now use machine-learning models somewhere in their trading pipeline, and about 18% rely on AI for more than half their signal generation. The humans who refuse to use AI are losing to the humans who do, and the humans who do are starting to wonder what they add that the AI doesn't.

Yes, I did say benchmarks were broken, but at least they are consistently broken, so we get some signal in all the noise. GPT-5 now scores 88% on the FinanceReasoning benchmark. Claude Opus 4.6 leads on real-world SEC filing analysis. GPT-5.2 handles 68.4% of junior investment banking tasks - remember those three-statement models, LBOs for take-privates, the kind of work that currently keeps twenty-somethings in Manhattan awake until 3 AM? And what about the coveted CFA? Five frontier models pass all three levels of the CFA exam. Gemini 3 Pro scored 97.6% on Level I. The November 2025 human pass rate? 43% for Level I, 42% for Level II. Candidate volume has collapsed to roughly half of pre-pandemic levels and hasn't bounced back, while the models now solve the hardest problem sets, ethics included, in minutes.

All of this was supposed to be impossible, of course. Finance was supposed to be too complex, too relational, too dependent on that mysterious thing called "judgment." That was the pitch, and it held up well until the machines started outperforming the people who made it - which is what we are watching happen now, in real time.

Two Weeks in February

On February 5, in what felt less like a product launch and more like a coordinated detonation, Anthropic and OpenAI released competing AI systems within minutes of each other. The tech press treated it as a horse race, but it was more like two demolition crews showing up at the same condemned building.

Anthropic dropped Claude Opus 4.6 with a million-token context window - enough to inhale an entire data room and produce what Bloomberg politely described as "detailed financial analyses that would normally take a person days." FactSet's stock promptly dropped 10%, which tells you something. FactSet doesn't employ analysts. It sells tools to analysts. The stock dropped because investors suddenly had to consider what happens to the middleman when AI goes from raw filings to finished analysis without pausing for coffee or a Bloomberg terminal.

Claude for Excel arrived with Agent Skills for DCFs, comp analysis, due diligence packs, earnings reports, and initiating coverage - plugged live into Moody's, LSEG, and Aiera. People are building 11-tab financial models in ten minutes. Sensitivity analysis, cash flow projections, risk quant. The thing reads your existing spreadsheet, grasps the formula dependencies, and edits across tabs without blowing anything up.

Now here's the part that should unsettle anyone who thinks about this for more than thirty seconds: building those models was never just about the models. Analysts built them because building was how they learned. They figured out how finance actually works by constructing DCFs the hard way, cell by cell, the same way surgeons learn by cutting. If the AI builds the model, who learns the craft? And in five years, when someone needs to verify the AI's output, where's the human who understands what they're looking at?

Claude for PowerPoint reads your firm's templates and builds pitch decks that look like your firm made them. Claude Code Agent Teams run multiple AI agents in parallel - one on a DCF, one on comps, one pulling filings, one doing competitive analysis - coordinating through a shared task list like a little army of tireless associates who never complain about face time. Bridgewater powers its Investment Analyst Assistant with Claude. Week-long due diligence, same-day deliverable.

Meanwhile OpenAI launched GPT-5.3-codex and Frontier, an enterprise platform where AI agents get employee identities and audit trails like actual staff members. BBVA gave ChatGPT Enterprise to all 120,000 employees. Microsoft Excel's Agent Mode now lets you pick between GPT-5.2 and Claude Opus 4.5, because apparently we've reached the point where your spreadsheet offers you a choice of artificial intelligence the way a restaurant offers sparkling or still.

Goldman Sachs, not to be outdone in the quiet-desperation department, revealed it had embedded Anthropic engineers inside the bank for six months. Six months! Building autonomous agents. They also deployed Devin - an AI coding agent - across their 12,000 developers. We are all tech companies now.

The week before all of this, Anthropic released 11 Cowork plugins, open-source, each one a bundle of skills and data connectors for a specific job function. The Finance plugin does month-end close, journal entries, account reconciliation, variance analysis, income statement generation. It plugs into NetSuite, SAP, Snowflake, BigQuery, Tableau, Looker.

These plugins erased $2 trillion from enterprise software stocks in days. Thomson Reuters fell 16%. LegalZoom 20%. SAP lost a third of its value from yearly highs. Jefferies called it the SaaSpocalypse, which is one of those portmanteaus that sounds clever until you realize it describes $2 trillion in destroyed market capitalization.

And the why of it is almost embarrassingly simple. Enterprise software exists because humans need screens to do their jobs. Salesforce is a UI for managing customers. SAP is a UI for managing supply chains. When AI agents skip the screen entirely - pulling data, making decisions, executing - the whole human-interface industry starts looking like an expensive habit from a previous era.

Then JPMorgan, managing its casual $7 trillion, quietly fired ISS and Glass Lewis - the two firms controlling 90% of the proxy advisory market - and replaced them with an internal AI called Proxy IQ. Dimon had called proxy advisors "a cancer." The bank looked at hundreds of well-paid professionals doing analytical work, asked whether the analysis could be automated (yes) and whether the fiduciary responsibility could (no), and kept the responsibility while vaporizing the analysis.

Every major bank is now running the same play. Keep the regulatory shell. Hollow out the cognitive interior.

Then, on February 10, the contagion reached financial services directly.

The $100 Tool That Crashed Wall Street

A fintech called Altruist launched something called Hazel. It's an AI tax planning tool. It costs a hundred dollars a month. It eats your client's tax returns, paystubs, account statements, meeting notes, and emails, and spits out personalized tax strategies in minutes. CEO Jason Wenk, with what I can only describe as cheerful ruthlessness, said: "Hazel makes average advice a lot harder to justify."

Raymond James had its worst day since March 2020, falling 8.75%. LPL dropped 8.31%. Schwab lost 7.42%. St. James's Place tumbled 13%. Bloomberg ran the headline "Wall Street Is Dumping Stocks Seen Vulnerable to AI." A hundred-dollar subscription from a startup nobody had heard of wiped billions from the wealth management sector. The media called it the industry's "DeepSeek moment."

Let’s do the fee math. Financial advice has always been sold as a bundle: the relationship (real, valuable), the analysis (labor-intensive, expensive), and the execution (mostly automated already). The analysis is what justifies charging 1% of AUM. When that analysis drops to a hundred bucks a month, the bundle falls apart. You're left charging for trust and trade execution, which is a very different business at a very different price point.
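
To put rough numbers on that unbundling, here's a minimal back-of-the-envelope sketch. The $100-a-month price and the 1%-of-AUM fee come from the story above; the portfolio size is my illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope comparison: a 1%-of-AUM advisory fee versus a
# flat $100/month AI tool. The portfolio size is hypothetical.

aum = 1_000_000          # illustrative client portfolio, in dollars
advisor_fee_rate = 0.01  # the industry-standard 1% of AUM per year
ai_monthly_price = 100   # Hazel's advertised subscription price

advisor_annual = aum * advisor_fee_rate   # $10,000 per year
ai_annual = ai_monthly_price * 12         # $1,200 per year

print(f"Traditional advisor: ${advisor_annual:>9,.0f}/year")
print(f"AI subscription:     ${ai_annual:>9,.0f}/year")
print(f"Cost ratio:          {advisor_annual / ai_annual:.1f}x")
# ~8.3x on a $1M portfolio - and the gap widens with wealth, because
# the advisory fee scales with AUM while the software fee stays flat.
```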

Blackstone's Jon Gray admitted AI disruption is "top of the page" and said they might need to exit portfolio companies faster. When the president of Blackstone is talking about rushing for the exits, the building may actually be on fire.

Finance Thinks It's Different

Schwab CEO Rick Wurster, in what may end up being one of the great miscalculations of the decade, went on Bloomberg and compared AI to robo-advisors. "This is the same story from 10 years ago," he said, "when robo-advice was going to displace the adviser community."

He said this while his stock was cratering.

Ares Management's CEO called the fears "odd" and "frustrating." A Citizens JMP analyst insisted wealth management "does not stand out as a business obviously ripe for near-term disruption." One imagines Kodak's board saying similar things about the iPhone.

The "relationships" defense is the finance industry's security blanket, and like all security blankets, it gets less effective the older you get. Yes, clients want a human they can call at 11 PM when the market is melting. That trust is real. But what are they paying for, exactly? Not the phone call. They're paying for the analysis, the planning, the portfolio construction. When AI does that work better and cheaper, the trust is still there - it's just wrapped around a product that's worth less every quarter. You're a very expensive delivery mechanism for a commodity.

Here's how I'd describe what's actually happening inside every financial institution: picture a regulatory exoskeleton - the license, the capital requirements, the compliance infrastructure, the balance sheet. Inside that exoskeleton, humans do cognitive work: analysis, modeling, research, documentation. AI strips the cognitive work out of the shell. The shell stands. The people who confused themselves for the shell discover they were the soft interior all along.

I've heard the "built on relationships" pitch from newspaper executives talking about the internet, taxi companies talking about Uber, Kodak talking about digital cameras. The argument bats .000 when the technology is good enough and cheap enough. Robo-advisors were a popgun. These tools are something else entirely.

A firm called Childfree Wealth in Tennessee already eliminated every single paraplanner. Not reduced. Eliminated. Joe Tsai at Alibaba says equity research analysts "can be completely replaced." The CFA Institute - the organization that sells the credential - is quietly rebranding the CFA career path as "Analyst to AI Strategist." When the people selling the exam start pivoting the marketing, they've read the actuarial tables on the profession.

Wenk's parting shot at Schwab's CEO: "Keeping up is hard if your infrastructure is old and not particularly AI friendly."

Twelve to Eighteen Months

On February 12, Mustafa Suleyman, CEO of Microsoft AI, gave an interview to the Financial Times that landed like a grenade in a library:

"Most, if not all, professional tasks - so white-collar work, where you're sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person - most of those tasks will be fully automated by an AI within the next 12 to 18 months."

Dario Amodei at Anthropic gives it one to five years. On CBS he said AI could erase half of all entry-level white-collar jobs and push unemployment to 10-20%. Vinod Khosla says 80% within five years. These predictions used to be scattered across decades like confetti. Now they're clustering in a window of one to five years, and the man running AI at the world's most valuable company sits at the aggressive end.

The more interesting signal, though, is the corporate double-speak. Suleyman's boss Satya Nadella spent Davos calling AI "scaffolding" that augments human intelligence. He told employees to "embrace AI or leave" while promising that new hires would pack "significantly more punch." So: the head of AI at Microsoft says everything will be automated in 18 months. The CEO of Microsoft says it's a helpful tool and they're hiring. Both men work at the same address in Redmond, presumably passing each other in the hallway.

Call it audience management. To investors: AI transforms our business. To employees: AI augments you. To regulators: AI is just a tool. Behind closed doors: we all know what's happening. You don't rehearse three different versions of a story about something that isn't really happening.

Larry Fink at BlackRock worries about what happens "if AI does to white-collar workers what globalization did to blue-collar workers." The IMF's Georgieva says it's "hitting the labor market like a tsunami." Jensen Huang calls the fears "the most illogical thinking." Goldman's Solomon isn't "in the job apocalypse camp."

Everyone picking the frame that flatters their portfolio.

"Something Big Is Happening"

On February 11, Matt Shumer - a tech founder, not a futurist or a pundit - published a 5,000-word essay that racked up 76 million views in 24 hours. For context, that's roughly the population of Turkey reading one man's blog post in a single day.

"I think we're in the 'this seems overblown' phase of something much, much bigger than Covid."

The piece worked because Shumer wasn't prophesying from a mountaintop. He was describing Tuesday. "I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing." The AI, he said, showed "something that felt, for the first time, like judgment. Like taste."

Judgment. There's the word. Finance people have been wrapping themselves in it like a cashmere scarf for years. We don't just crunch numbers, they say. We exercise judgment. The AI can do the grunt work but it'll never have our judgment. Shumer, who has no particular reason to lie about this, is reporting that the scarf is coming off.

His argument: "If your job happens on a screen - reading, writing, analyzing, deciding, communicating - then AI is coming for significant parts of it." And here's the thing about finance specifically: previous automation was vertical. ATMs killed teller jobs. Robots killed assembly line jobs. One tool, one function, one industry. AI is horizontal. It eats the knowledge-work layer across everything, all at once. That's why every sector bled in the same two weeks - SaaS, wealth management, data providers, legal services. The market was reacting to a layer of the economy getting repriced, not just an industry story.

"We're not making predictions," Shumer wrote. "We're telling you what already occurred in our own jobs."

Fortune ran a rebuttal. Gary Marcus pointed to hallucination errors. Fair enough. People said similar things about early automobiles, and the automobiles kept improving.

The essay appeared at the same moment Anthropic's head of Safeguards Research, Mrinank Sharma, resigned with a public warning that "the world is in peril." The optimists building this technology and the pessimists trying to slow it down can't agree on anything except that something without precedent is underway. If you're calmer about the situation than both groups, I'd love to know what you're seeing that they're not.

The Goldman Contradiction

Goldman published research in January projecting that AI-exposed industries will shed 20,000 jobs per month through 2026. The following week, Goldman CEO David Solomon told reporters: "I'm not in the job apocalypse camp."

He then mentioned, almost as an aside, that AI can now draft 95% of an S-1 IPO prospectus - in minutes. Work that used to occupy a team of six for weeks. "The last 5% now matters because the rest is now a commodity."

I find this the most revealing number in the whole story. Goldman has been billing clients for that 100% for decades. Huge fees, because all of it supposedly required expensive human brainpower. Turns out 95% was formatting, data assembly, boilerplate, regulatory box-checking. The real value - the positioning insight, the pricing judgment, the read on investor appetite - always lived in a narrow sliver. AI didn't destroy the value. It held up a mirror and showed everyone that 95% of the billable work was the scaffolding, and Goldman had been charging premium rates for scaffolding.

Solomon is technically right that there won't be an "apocalypse" at Goldman. They'll keep the 5% people and shed the 95% people. But if you're in the 95% -- and let's be honest, most jobs at most banks are -- his calm reassurance is not about you.

Dimon at JPMorgan says there will be fewer employees in five years. Their AI is used by 200,000 people across 450+ use cases. But Dimon also says he'd "welcome government bans" on replacing workers with AI and warns of "civil unrest." Think about what it takes for the CEO of JPMorgan Chase to say the words "civil unrest" in a public interview.

Jane Fraser at Citi: "We are not graded on effort." Her bank will shed 60,000 employees by year-end. Staff have entered 6.5 million prompts into AI systems. Moynihan at BofA: "We can just make decisions not to hire and let the headcount drift down." Wells Fargo: 22 straight quarters of headcount cuts, like a slow bleed that nobody's trying to stop.

Goldman and Morgan Stanley are reportedly considering cutting analyst classes by two-thirds. Entry-level employment in AI-exposed jobs is down 16% since 2022. UK Big Four graduate accounting listings are down 44%. January 2026 saw 108,435 U.S. layoffs - the worst January since the Great Recession.

The entry-level picture is where I'd focus if I were a 22-year-old planning a career. Banks aren't firing managing directors. They're just not hiring the next generation of analysts. Quietly, almost gently, they're turning off the pipeline. In five years the industry will wake up with a missing generation -- the people who were supposed to learn the craft, earn their stripes, move up -- and no way to conjure them back into existence.

The Builders Feel It Too

Sam Altman spent time with his own Codex agent -- GPT-5.3-codex, his company's pride and joy -- and came away shaken. "I felt a little useless, and it was sad." The AI's feature suggestions were better than his. "I am feeling nostalgic for the present."

There's something genuinely new happening when the people who built the technology - Altman, or Aditya Agarwal, former CTO of Dropbox and Facebook employee number ten - feel displaced by it. The printing press didn't give Gutenberg an existential crisis. The calculator didn't make mathematicians question their life choices. When Altman says the AI makes better product decisions than he does - and this is a man whose entire identity is making those decisions - he's not describing grunt work being automated. He's reporting that the machine outperforms him at the thing he's best at.

Every previous technology made the best people more valuable. Bloomberg Terminal: a brilliant analyst with one was worth ten average analysts without. AI inverts this completely. When everyone has an agent that builds DCFs in minutes and parses 200,000 earnings transcripts, the distance between brilliant and average collapses. What matters now is who has the best AI, the richest proprietary data, the biggest balance sheet. Corporate advantages. You, the individual analyst, regardless of how many hours you bill or how elegant your models, are not a corporate advantage.

OpenAI's Project Mercury hired 100+ former bankers from JPMorgan, Goldman, and Morgan Stanley. They're paid $150/hour to build DCFs, LBOs, and IPO models -- teaching AI to do the exact work they used to do. eFinancialCareers ran it under the headline: "Ex-JPMorgan, Goldman Sachs bankers paid $150 an hour to ruin their old jobs."

There's a dark comedy to it. Once those bankers finish teaching the machine everything they know, the machine won't need them or anyone who comes after them. They are, with full awareness and presumably some ambivalence, the last generation of experts needed to make expertise obsolete.

The Model Arms Race

All of what I just described - Claude for Excel, Frontier, Cowork plugins, Proxy IQ, the whole demolition - comes from two companies. Two. And there are multiple serious labs in this race, each one spending billions, each one building models that do cognitive work faster and cheaper than the last version did six months ago. The moat around financial expertise, such as it was, has been breached, and the expertise itself is being handed out like flyers on a street corner.

And the models are doing things the humans genuinely cannot. Ninety-one percent of U.S. banks run AI fraud detection now, pulling accuracy rates between 87 and 97%. The old rule-based systems managed 37.8%, which is roughly the accuracy of guessing with mild confidence. Auditors have always sampled - 5%, maybe 10% of entries - because no human brain can hold a hundred million data points and spot patterns across all of them. The AI eats the whole dataset. That difference alone makes entire compliance departments look like a rounding error, and the banks know it.
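
The sampling arithmetic is worth seeing in miniature. Here's a toy sketch with synthetic data - the anomaly rate is invented and the perfect detector is a simplifying assumption (real models land in that 87-97% band), but the structural point about coverage holds:

```python
# Toy illustration of sampled review versus full-population review.
# Synthetic ledger; assumes a perfect detector for simplicity.

import random

random.seed(42)

n_entries = 1_000_000
# Pretend 0.1% of ledger entries are anomalous.
anomalies = set(random.sample(range(n_entries), k=n_entries // 1000))

# Traditional audit: review a 5% random sample of entries.
sample = set(random.sample(range(n_entries), k=n_entries // 20))
caught_by_sampling = len(anomalies & sample)

# Automated review: score every entry, so every anomaly is at least seen.
caught_by_full_scan = len(anomalies)

print(f"Anomalies present:   {len(anomalies):,}")
print(f"Caught by 5% sample: {caught_by_sampling:,} (~5% in expectation)")
print(f"Caught by full scan: {caught_by_full_scan:,}")
```

A sample can only ever catch the anomalies that happen to fall inside it; a model that reads the whole ledger never has that blind spot, whatever its per-entry accuracy.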

Three hundred thousand accountants have walked away in two years. AI adoption in accounting went from 9% to 41% in a single year, which looks less like a technology trend and more like a profession grabbing whatever flotsam it can find while the ship tilts. PwC expects full audit automation this year. Deloitte built Zora AI with Nvidia. KPMG launched Workbench to replicate human audit teams. EY's tools handle 80,000 tax professionals across 3 million cases. The Big Four are building the machines that replace their own people, which is either brilliant or the professional services version of an autoimmune disease.

And then there's coding. Two years ago everyone pointed at software engineering as the skill that would survive. Too creative, too complex, too human. It's 25 to 40% automated now and accelerating. Financial analysis - and let's be honest with ourselves here - is mostly pattern recognition and spreadsheet construction. Structurally simpler than writing software. The only reason it hasn't been automated faster is that banks move like glaciers. That's a delay, not a defense.

The Other Side

I owe you the counter-argument, and honestly, parts of it land.

Only 2% of executives have actually cut jobs because of AI that demonstrably works; 39% cut in anticipation, and over half regret it. So most of the pain right now is self-inflicted - companies swinging the axe before the tool is ready.

Klarna is the cautionary tale. They fired 60% of their workforce, plugged in an OpenAI chatbot, declared $40 million in savings, and then watched customer satisfaction collapse and losses double. Their software engineers ended up answering customer service phones because the AI couldn't hack it. They're rehiring the humans they fired. Productivity nationally is declining, not surging - Oxford Economics points this out. Fortune calls it "AI-washing," companies using AI as a respectable excuse for cuts they wanted to make anyway. Gartner says finance AI adoption went from 58% to 59% in a year. One percentage point. Some revolution.

The gap between what AI does on a demo stage and what it does inside a bank running compliance software from the Clinton era is real. People sabotage rollouts - 41% of younger workers actively resist. When your portfolio drops 20% you want to yell at a person, not a chat window.

All fair. But Klarna is customer service. Goldman's 95% IPO number is the analytical layer. The counter-argument works for jobs that depend on human relationships and falls apart for jobs that depend on computation. Most of what happens in finance is computation.

The Verdict

So what actually happens? Not the McKinsey version with the optimistic transformation curve. The real version.

Finance splits into two layers, and the split is already underway. On the bottom: the institutional substrate. Licenses, balance sheets, deposit guarantees, access to central bank liquidity, the legal obligation to stand behind a trade when everyone else runs for the door. That layer is mandated by law and reinforced by every financial crisis that ever taught a regulator a lesson.

On top: the cognitive layer. Analysis, modeling, research, compliance documentation, pricing, pattern recognition. Everything that used to require a floor of humans staring at screens. That layer is being absorbed into software, and the labs I just described are racing to make it cheaper by the quarter.

The interesting part - and this is what the "relationships will save us" crowd keeps missing - is how differently this plays out across the industry. Retail banking barely notices. AI just makes the existing plumbing run smoother: better credit scoring, tighter fraud detection, faster loan processing. The bank persists because nobody's figured out how to open-source a deposit guarantee or replicate access to central bank liquidity with a subscription product.

Financial advice gets hollowed from the inside. The generic modeling - portfolio construction, asset allocation, tax optimization - becomes computationally cheap. Hazel does it for a hundred bucks. What survives is the stuff that never had anything to do with spreadsheets: talking a panicking client off the ledge at midnight during a 2008-style drawdown, navigating a family succession dispute where the siblings hate each other, being the person who gets sued when something goes wrong. That work is real and irreplaceable. It's also about 10% of what most advisors actually do all day.

Asset management stops being about who has the smartest analyst and starts being about who has the deepest proprietary data, the most capital to deploy, and the widest distribution network. When every fund runs the same class of model on the same data feeds, individual stock-picking genius - the kind that gets profiled in Barron's - becomes about as relevant as individual typesetting skill after the printing press.

Investment banking keeps the balance sheets and the senior advisory relationships. Everything underneath - the modeling, the documentation, the grunt process work - gets routed through AI that unbundles each piece and finds the cheapest execution. The banks gradually stop being proprietors of opaque deal packages and start being suppliers of capacity into a network they no longer fully control. Which is a humbling transition for institutions that built their cultures around controlling everything.

And as AI squeezes the inefficiencies out of pricing, the obvious trades disappear. You'd think that kills speculation. It doesn't. It just makes speculation weirder. Instead of betting on earnings or macro or sector rotation, traders start betting on what the other AI systems are about to do - where the automated flows are concentrated, which models are overweight the same positions, when a rebalancing cascade might trigger. The trading floor doesn't die. It just stops being about markets and starts being about reverse-engineering other people's algorithms. A poker game where everyone's trying to read the bots instead of each other.

Step back far enough and the whole industry starts looking less like finance and more like infrastructure. Regulated capital on the bottom doing what the law requires it to do. Software on top doing what used to take thousands of people. And the middle - the analytical layer, the one that hired the most people, billed the most hours, and convinced a generation of ambitious twenty-two-year-olds to get MBAs - getting compressed from both sides by machines that Anthropic and OpenAI built for an entirely different industry.

Nobody asked for this version of the future. But the version where everything stays the same requires the technology to stop improving, the labs to stop competing, and the economics to stop economicing. I'd want long odds on that parlay.

Two weeks of evidence. Zero weeks of planning. And the meter's running.
