Artificial intelligence has already become part of the hidden architecture of financial markets. Algorithms trade in milliseconds, risk models monitor portfolios in real time, and machine learning systems flag compliance anomalies before regulators can detect them. Yet beneath this adoption lies a deeper strategic dilemma: should firms rely on generalist AI models like Claude or ChatGPT, which scan across domains and asset classes, or on specialist AI models, which solve narrow but vital tasks such as pattern detection, credit scoring, liquidity forecasting, or fraud detection?
This distinction — breadth versus depth — is rapidly being unsettled by a new category: agentic AI. In the language of AI research, “agentic” refers to systems that not only generate insights but also plan, sequence, and act. They monitor conditions continuously, decompose tasks, call on other models, and execute decisions within delegated boundaries.
But here, language matters. “Agentic” is a technical term. What is really at stake, however, is the older and weightier concept of agency — the capacity to act intentionally, to choose ends, and above all, to bear responsibility for action. Finance is not merely a technical domain; it is one where responsibility, authorship, and accountability are central. And the arrival of systems that act forces us to ask: can machines be said to exercise agency, or do they merely simulate it?
Generalist models are built on the premise that breadth itself is a form of intelligence. They are the systems that consume oceans of heterogeneous data — economic indicators, corporate filings, news, political speeches, satellite images, even social media streams — and search for patterns that no narrow system could perceive.
In practice, generalists serve functions similar to macro strategists inside investment banks or hedge funds. A generalist AI can link a sudden decline in the Baltic Dry Index to weakness in global trade, connect that weakness to Chinese manufacturing data, and tie both to currency flows into safe havens. Where a specialist might only see “shipping volatility,” the generalist situates the event in a world-system perspective.
The unique strength of generalists is their ability to operate under what John Maynard Keynes called true uncertainty. In his Treatise on Probability (1921), Keynes insisted that the world often refuses to conform to probability distributions. In such environments, specialist models, calibrated to historical data, break down. Generalists, designed as they are to synthesize disparate signals, are better suited to navigating the fog.
This strength is evident in practice. A generalist might detect that climate-related shocks in agricultural commodities are not simply volatility “noise,” but an indicator of a broader inflationary spiral. It might pick up early rumblings of political instability by correlating civil unrest with energy prices and sovereign spreads. It does not know with certainty what will happen, but it grasps the systemic resonance.
Viewed through a philosophical lens, this makes generalists closer to Karl Popper’s epistemology: they propose bold conjectures about connections, which must then be tested. They are not machines of certainty but machines of provisional synthesis. In moments of rupture — the 1971 end of Bretton Woods, the 2008 collapse of interbank lending, the COVID-19 pandemic — such conjectural breadth is invaluable.
But generalists also suffer from chronic weaknesses. Their very openness to multiple signals makes them prone to hallucination or narrative excess. They may weave patterns where none exist. In finance, this can be fatal: interpreting random noise as a trend can lead to catastrophic bets.
Furthermore, generalists often lack the calibrated precision that regulators, auditors, and risk managers demand. It is one thing to say “systemic fragility is rising”; it is another to compute, defensibly, how a specific bank’s liquidity coverage ratio will deteriorate under Basel III stress tests. Here the generalist falters.
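To see what that defensible computation looks like, here is a minimal sketch of the Basel III liquidity coverage ratio, the kind of auditable number a generalist cannot produce but a specialist must. The figures are invented for illustration, and Python is used only as shorthand:

```python
def liquidity_coverage_ratio(hqla: float, outflows: float, inflows: float) -> float:
    """Basel III LCR: stock of high-quality liquid assets (HQLA) divided by
    total net cash outflows over a 30-day stress horizon. Inflows are capped
    at 75% of outflows, so the denominator never falls below 25% of outflows."""
    net_outflows = outflows - min(inflows, 0.75 * outflows)
    return hqla / net_outflows

# Invented figures, in billions: 120 of HQLA against 150 of stressed
# outflows and 60 of expected inflows -- not any real bank's balance sheet.
print(f"LCR = {liquidity_coverage_ratio(120.0, 150.0, 60.0):.0%}")  # LCR = 133%
```

The output is a single ratio that an auditor can recompute line by line, which is precisely the standard a narrative synthesis cannot meet.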
Organizationally, generalists also face skepticism. Within banks, portfolio managers want actionable signals, not sweeping narratives. Risk officers want defensible numbers, not probabilities wrapped in prose. Thus, generalists risk being perceived as overly abstract — useful for framing but less useful for execution.
Finally, generalists face what philosophers of science call the problem of underdetermination. Multiple narratives can fit the same data. Is a currency collapse a sign of speculative attack, or of underlying structural weakness? Is an equity rally optimism about growth, or short-covering? Generalists can argue either side, and therein lies both their brilliance and their unreliability.
Where generalists swim across the ocean of data, specialists dive deep into a single trench. They are optimized for well-defined problems, calibrated against historical records, and rigorously validated against benchmarks.
The power of specialists lies in determinacy. They deliver answers that are precise, auditable, and repeatable. A credit scoring model, for instance, may assign a 0.37 probability of default to a borrower, backed by decades of data. A high-frequency trading algorithm can decide in microseconds how to route an order to minimize slippage. These are tight feedback loops where success or failure is immediately measurable.
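That determinacy can be seen in miniature. The sketch below, with entirely synthetic data and invented feature names, shows the shape of a credit-scoring specialist: one auditable number per borrower. Real scorecards are fitted on decades of loan performance, but the logic is the same:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic borrower features: debt-to-income, credit utilization,
# years of credit history (all scaled to [0, 1] for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5000, 3))

# Invented "true" default process: leverage and utilization raise risk,
# a longer credit history lowers it.
logit = 4.0 * X[:, 0] + 2.0 * X[:, 1] - 3.0 * X[:, 2] - 1.5
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

scorecard = LogisticRegression().fit(X, y)

applicant = np.array([[0.55, 0.70, 0.20]])  # one borrower's features
pd_estimate = scorecard.predict_proba(applicant)[0, 1]
print(f"Probability of default: {pd_estimate:.2f}")  # a single defensible number
```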
Specialists are the darlings of regulators for precisely this reason. Their models can be explained, validated, and stress-tested. They satisfy the Kantian demand for rule-based action: behavior determined not by inclination but by adherence to universalizable laws. Just as Kant sought moral laws that could apply to all rational beings, regulators seek models that can be transparently applied across institutions.
In the language of Thomas Kuhn, specialists practice “normal science.” They do not reinvent paradigms; they solve puzzles within them. The Basel framework defines the puzzle, and risk specialists solve it. Accounting rules define valuation, and valuation specialists compute it.
From an organizational standpoint, specialists are indispensable. Risk departments depend on them. Traders rely on them for execution. Compliance relies on them to flag violations. They embody the reliability that makes finance legible to regulators, auditors, and shareholders.
But the precision of specialists is also their vulnerability. They are brittle. When the environment shifts in ways not captured by historical data, they collapse.
Consider the 2008 crisis. Credit scoring models underestimated the systemic correlation among mortgage defaults because no such correlation had appeared in their training data. Consider volatility models during COVID-19: standard GARCH specifications, fitted on decades of data, could not anticipate volatility spikes triggered by global lockdowns. In both cases, the specialist’s determinacy was exposed as fragility.
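This brittleness is easy to reproduce. The sketch below hand-rolls the GARCH(1,1) variance recursion with parameters of the size typically fitted on calm data, then feeds it a sudden lockdown-style regime. All numbers are invented, and a production model would use a fitted library such as arch, but the lag is structural: the model can only learn the new regime after it has already arrived.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion for GARCH(1,1):
    sigma2[t] = omega + alpha * returns[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()  # initialized at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(1)
calm = rng.normal(0.0, 0.01, 1000)   # years of ~1% daily volatility
shock = rng.normal(0.0, 0.08, 20)    # a lockdown-style regime shift
returns = np.concatenate([calm, shock])

# Illustrative parameters, as if calibrated on the calm sample alone.
sigma2 = garch11_variance(returns, omega=1e-6, alpha=0.05, beta=0.90)
model_vol = np.sqrt(sigma2[len(calm)])
print(f"Model vol on the first shock day: {model_vol:.3f} vs realized ~0.080")
```

The model enters the shock forecasting volatility near 1%, an order of magnitude below what the new regime delivers.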
Philosophically, this echoes the problem Kant himself faced. While his categorical imperative demanded universality, real-world moral life often confronted radical novelty. Rules designed for one context break when confronted with events beyond imagination. So too with specialist models: their rules, calibrated to yesterday’s world, collapse in tomorrow’s.
Specialists also face the problem of attribution narrowness. Because they focus on tight puzzles, they sometimes miss systemic interconnections. A compliance specialist may flag a suspicious trade without realizing it is part of a larger geopolitical play. A risk specialist may compute exposure to oil futures without seeing the macro narrative of energy transition. In this sense, specialists risk becoming bureaucratic — precise but blind.
History suggests how generalists and specialists could have complemented each other in crisis. In 2008, a generalist synthesis might have surfaced the systemic correlation that specialist credit models, calibrated to benign years, could not see; during COVID-19, specialist models could reprice risk with precision only after broader reasoning had connected lockdowns to a global volatility regime.
The lesson is consistent: generalists anticipate and contextualize; specialists execute with precision. Neither can succeed alone.
Aristotle drew a distinction between poiesis (making, producing) and praxis (acting, doing with purpose). Generalists and specialists remain, in the end, in the realm of poiesis. They produce forecasts, insights, numbers, scenarios. The final act — the choice to rebalance a portfolio, execute a trade, or file a compliance report — remains human. But the emergence of agentic AI marks a departure. Here, systems are designed not merely to interpret the world, but to act within it.
In technical AI research, an “agentic” system is one that can sequence tasks, monitor its environment, call external tools, and pursue goals over time without constant human prompting. Applied to finance, this means an AI system that, upon detecting a liquidity risk, does not merely raise an alert but initiates a cascade of actions: rebalancing positions, adjusting hedges, querying specialists for calibration, and even drafting documentation for regulators.
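Structurally, such a system is a loop rather than a model. A deliberately simplified sketch follows; every name, threshold, and callback here is hypothetical, and the point is only the shape: observe, compare against a delegated boundary, act within a cap, and leave an audit trail addressed to a human owner.

```python
import time

LCR_FLOOR = 1.10   # hypothetical delegated boundary: keep LCR above 110%
MAX_SHIFT = 0.02   # hypothetical cap: move at most 2% of the book per cycle

def agent_loop(get_lcr, rebalance, notify_owner, cycles=10, poll_seconds=60):
    """A skeletal agentic loop. get_lcr, rebalance, and notify_owner are
    injected by the caller; nothing here refers to a real trading API."""
    audit_trail = []
    for _ in range(cycles):
        lcr = get_lcr()                                 # monitor conditions
        if lcr < LCR_FLOOR:                             # detect a boundary breach
            action = rebalance(max_fraction=MAX_SHIFT)  # bounded intervention
            record = {"ts": time.time(), "lcr": lcr, "action": action}
            audit_trail.append(record)
            notify_owner(record)        # authorship routed to a named human
        time.sleep(poll_seconds)        # then resume monitoring
    return audit_trail
```

Note what the loop does not contain: any deliberation over ends. The floor, the cap, and the owner are all set from outside.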
This is qualitatively different from generalists and specialists. It is a move from interpretation to intervention.
Yet, here we must pause. The technical term “agentic” should not be confused with the philosophical concept of agency. To have agency, in the philosophical sense, is not simply to act; it is to act with authorship, intentionality, and responsibility. A system that executes instructions, however sophisticated, does not intend its acts; it follows patterns.
Hannah Arendt, in The Human Condition (1958), reminds us that action is not simply behavior. To act is to insert oneself into the world in a way that is unpredictable, bound up with responsibility, and irreducible to mechanical causality. By this definition, agentic AI does not have agency. It mimics the form of agency without its essence.
This is where confusion is most dangerous. In finance, what matters is not only that actions are taken, but that someone owns them. An AI that liquidates a position may save a firm from collapse; but who owns that act? The coder who designed it? The executive who approved its deployment? Or the machine itself?
Without clarity, we risk what Arendt feared: action without authorship.
History provides examples of what happens when systems act without clear attribution. The flash crash of May 2010 erased nearly a thousand points from the Dow in minutes, and it took regulators years to assemble even a contested account of which algorithms had done what. Knight Capital’s runaway deployment in 2012 lost roughly $440 million in under an hour before any human could claim, or assign, ownership of its acts.
Agentic AI, if left unchecked, risks repeating this pattern at greater scale.
The arrival of agentic AI therefore confronts finance with three interlocking philosophical risks. The first is action without authorship: interventions occur that no one, in Arendt’s sense, can be said to have intended. The second is the diffusion of responsibility, spread so thinly across coders, executives, vendors, and machines that it settles nowhere. The third is the quiet substitution of simulated agency for the real thing, so that the forms of accountability persist while their substance erodes.
Agentic systems, if deployed in finance, will also face immediate institutional challenges: regulators will demand audit trails for decisions no single human made, risk committees will struggle to validate behavior that emerges over time rather than from fixed rules, and liability will have to be assigned before, not after, the first autonomous loss.
The deeper question, however, is not technical but philosophical. Finance is not merely about efficiency; it is about trust. Trust rests on the assumption that actions are intentional and attributable. If the industry embraces “agentic” systems while neglecting the question of agency, it may inadvertently hollow out its own foundations.
Aristotle’s distinction between praxis and poiesis is useful here. Generalists and specialists remain within poiesis: they produce knowledge. Agents step into praxis. But Aristotle insisted that true praxis involves deliberation over ends, not just means. By this measure, no machine has agency. Only humans deliberate over ends.
The task, then, is not to deny the utility of agentic AI but to ensure that it remains subordinated to human agency. It may act, but humans must own its acts.
The likely future of finance is a three-layered ecology: generalists at the top, scanning broadly and proposing conjectures; specialists beneath them, converting those conjectures into calibrated, defensible numbers; and agents at the operational layer, integrating both and executing within delegated boundaries, as sketched below.
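A sketch of how the three layers might compose makes the hierarchy, and the human sign-off that must sit inside it, explicit. Every class and callback here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    thesis: str          # the generalist's conjecture
    calibration: float   # the specialist's number
    owner: str           # the named human who approved the act

def run_layers(generalist, specialist, agent, approver):
    """Illustrative flow through the three-layer ecology: conjecture,
    calibrate, then execute, but only once a named human signs off,
    so the resulting action has an author."""
    thesis = generalist.conjecture()       # breadth: what might be happening
    number = specialist.calibrate(thesis)  # depth: what it costs, precisely
    owner = approver(thesis, number)       # praxis: a human chooses the end
    decision = Decision(thesis, number, owner)
    if owner:
        agent.execute(decision)            # bounded, logged intervention
    return decision
```

The design choice that matters is the approver in the middle: remove it, and the pipeline still runs, but its actions no longer have an author.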
But this hierarchy only works if humans remain the authors. The danger is not that machines become too intelligent, but that humans abdicate too much responsibility.
Heidegger warned that technology risks “enframing” humanity itself, reshaping not just our tools but our purposes. Finance must resist this. If the sector allows “agentic” systems to redefine what counts as responsible action, it risks eroding the very notion of agency.
The opposition between generalists and specialists was always incomplete. The real disruption is the arrival of systems that act. But if “agentic” describes a technical capacity, agency describes a moral and philosophical necessity.
Finance cannot allow itself to forget this distinction. Markets depend on attribution, accountability, and authorship. Generalists anticipate, specialists execute, agents integrate. But only humans can bear responsibility. To confuse simulation with reality is to risk building a financial system that acts without owners, intervenes without intention, and optimizes without accountability.
The future of finance will not be decided by whether we deploy generalists, specialists, or agents. It will be decided by whether we preserve agency — the human capacity to act and to own action — in an age of agentic machines.