Generalists, Specialists, and the Question of Agency: Rethinking AI in Finance

By Jerome Favresse

August 20, 2025 · 10 Min Read

Artificial intelligence has already become part of the hidden architecture of financial markets. Algorithms trade in milliseconds, risk models monitor portfolios in real time, and machine learning systems flag compliance anomalies before regulators can detect them. Yet beneath this adoption lies a deeper strategic dilemma: should firms rely on generalist AI models like Claude or ChatGPT, which scan across domains and asset classes, or on specialist AI models, which solve narrow but vital tasks such as pattern detection, credit scoring, liquidity forecasting, or fraud detection?

This distinction — breadth versus depth — is rapidly being unsettled by a new category: agentic AI. In the language of AI research, “agentic” refers to systems that not only generate insights but also plan, sequence, and act. They monitor conditions continuously, decompose tasks, call on other models, and execute decisions within delegated boundaries.

But here, language matters. “Agentic” is a technical term. What is really at stake, however, is the older and weightier concept of agency — the capacity to act intentionally, to choose ends, and above all, to bear responsibility for action. Finance is not merely a technical domain; it is one where responsibility, authorship, and accountability are central. And the arrival of systems that act forces us to ask: can machines be said to exercise agency, or do they merely simulate it?

Generalists: The Intelligence of Breadth

Generalist models are built on the premise that breadth itself is a form of intelligence. They are the systems that consume oceans of heterogeneous data — economic indicators, corporate filings, news, political speeches, satellite images, even social media streams — and search for patterns that no narrow system could perceive.

In practice, generalists serve functions similar to macro strategists inside investment banks or hedge funds. A generalist AI can simultaneously link a sudden decline in the Baltic Dry Index to weakness in global trade, connect this to Chinese manufacturing data, and finally tie it to currency flows into safe havens. Where a specialist might only see “shipping volatility,” the generalist situates the event in a world-system perspective.

Strengths of Generalists

The unique strength of generalists is their ability to operate under what John Maynard Keynes called true uncertainty. In his Treatise on Probability (1921), Keynes insisted that the world often refuses to conform to probability distributions. In such environments, specialist models — calibrated to historical data — break down. Generalists, because they are designed to synthesize disparate signals, are better equipped to navigate the fog.

This strength is evident in practice. A generalist might detect that climate-related shocks in agricultural commodities are not simply volatility “noise,” but an indicator of a broader inflationary spiral. It might pick up early rumblings of political instability by correlating civil unrest with energy prices and sovereign spreads. It does not know with certainty what will happen, but it grasps the systemic resonance.

From a philosophical lens, this makes generalists closer to Karl Popper’s epistemology: they propose bold conjectures about connections, which must then be tested. They are not machines of certainty but machines of provisional synthesis. In moments of rupture — the 1971 end of Bretton Woods, the 2008 collapse of interbank lending, the COVID-19 pandemic — such conjectural breadth is invaluable.

Weaknesses of Generalists

But generalists also suffer from chronic weaknesses. Their very openness to multiple signals makes them prone to hallucination or narrative excess. They may weave patterns where none exist. In finance, this can be fatal: interpreting random noise as a trend can lead to catastrophic bets.

Furthermore, generalists often lack the calibrated precision that regulators, auditors, and risk managers demand. It is one thing to say “systemic fragility is rising”; it is another to compute, with defensibility, how a specific bank’s liquidity coverage ratio will deteriorate under Basel III stress tests. Here the generalist falters.

Organizationally, generalists also face skepticism. Within banks, portfolio managers want actionable signals, not sweeping narratives. Risk officers want defensible numbers, not probabilities wrapped in prose. Thus, generalists risk being perceived as overly abstract — useful for framing but less useful for execution.

Finally, generalists face what philosophers of science call the problem of underdetermination. Multiple narratives can fit the same data. Is a currency collapse a sign of speculative attack, or of underlying structural weakness? Is an equity rally optimism about growth, or short-covering? Generalists can argue either side, and therein lies both their brilliance and their unreliability.

Specialists: The Intelligence of Depth

Where generalists swim across the ocean of data, specialists dive deep into a single trench. They are optimized for well-defined problems, calibrated against historical records, and rigorously validated against benchmarks.

Strengths of Specialists

The power of specialists lies in determinacy. They deliver answers that are precise, auditable, and repeatable. A credit scoring model, for instance, may assign a 0.37 probability of default to a borrower, backed by decades of data. A high-frequency trading algorithm can decide in microseconds how to route an order to minimize slippage. These are tight feedback loops where success or failure is immediately measurable.
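To make that determinacy concrete, here is a minimal sketch of the kind of credit-scoring model described above: a logistic regression that maps borrower features to a probability of default. Everything in it (the feature names, the synthetic data, the applicant) is illustrative, not a production scorecard.

```python
# Minimal sketch of a specialist credit-scoring model: logistic regression
# mapping borrower features to a probability of default (PD).
# All data and feature names here are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic book: columns are [debt_to_income, utilization, years_of_history]
X = rng.uniform([0.0, 0.0, 0.0], [1.5, 1.0, 30.0], size=(5000, 3))
# Toy ground truth: default is more likely with high leverage and utilization.
logits = 3.0 * X[:, 0] + 2.0 * X[:, 1] - 0.1 * X[:, 2] - 2.5
y = rng.random(5000) < 1.0 / (1.0 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

borrower = np.array([[0.9, 0.8, 4.0]])  # one hypothetical applicant
pd_estimate = model.predict_proba(borrower)[0, 1]
print(f"Estimated probability of default: {pd_estimate:.2f}")
```

The output is exactly what makes specialists auditable: a single number, reproducible from logged inputs and a fixed, inspectable model.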

Specialists are the darlings of regulators for precisely this reason. Their models can be explained, validated, and stress-tested. They satisfy the Kantian demand for rule-based action: behavior determined not by inclination but by adherence to universalizable laws. Just as Kant sought moral laws that could apply to all rational beings, regulators seek models that can be transparently applied across institutions.

In the language of Thomas Kuhn, specialists practice “normal science.” They do not reinvent paradigms; they solve puzzles within them. The Basel framework defines the puzzle, and risk specialists solve it. Accounting rules define valuation, and valuation specialists compute it.

From an organizational standpoint, specialists are indispensable. Risk departments depend on them. Traders rely on them for execution. Compliance relies on them to flag violations. They embody the reliability that makes finance legible to regulators, auditors, and shareholders.

Weaknesses of Specialists

But the precision of specialists is also their vulnerability. They are brittle. When the environment shifts in ways not captured by historical data, they collapse.

Consider the 2008 crisis. Credit scoring models underestimated the systemic correlation among mortgage defaults because no such correlation existed in their training data. Consider volatility models during COVID-19: standard GARCH specifications, trained on decades of data, could not accommodate the volatility spikes triggered by global lockdowns. In both cases, the specialist’s determinacy was exposed as fragility.
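For readers who want the mechanics, the sketch below fits the kind of GARCH(1,1) specification the paragraph refers to, using the third-party arch package on synthetic “calm regime” returns. It illustrates the model class, not any desk’s actual specification; the point is that the one-step-ahead forecast extrapolates the regime it was fitted on.

```python
# Sketch of the specialist volatility model discussed above: a GARCH(1,1)
# fit on daily returns. Illustrative only -- real desks calibrate on their
# own return series and specifications.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
# Synthetic "calm regime" daily returns (percent), a stand-in for history.
returns = rng.normal(0.0, 1.0, size=2500)

model = arch_model(returns, vol="Garch", p=1, q=1, mean="Constant")
result = model.fit(disp="off")

# One-step-ahead variance forecast: the model extrapolates yesterday's
# regime, which is exactly why an unprecedented shock blindsides it.
forecast = result.forecast(horizon=1)
print(result.params)
print(forecast.variance.iloc[-1])
```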

Philosophically, this echoes the problem Kant himself faced. While his categorical imperative demanded universality, real-world moral life often confronted radical novelty. Rules designed for one context break when confronted with events beyond imagination. So too with specialist models: their rules, calibrated to yesterday’s world, collapse in tomorrow’s.

Specialists also face the problem of narrow framing. Because they focus on tight puzzles, they sometimes miss systemic interconnections. A compliance specialist may flag a suspicious trade without realizing it is part of a larger geopolitical play. A risk specialist may compute exposure to oil futures without seeing the macro narrative of energy transition. In this sense, specialists risk becoming bureaucratic — precise but blind.

Crises as Tests of Breadth and Depth

History shows how generalists and specialists have complemented each other in crises.

  • 2008 Global Financial Crisis: Generalists warned of systemic leverage and fragile shadow banking. Specialists in mortgage-backed securities engineered precise short trades. The former saw the storm; the latter built the instruments to profit (Roubini & Mihm, 2010).
  • Dot-com Bubble (2000): Specialists in technology embraced inflated valuations, trapped by their own models of “new economy” growth. Generalists, grounded in value fundamentals, resisted the mania (Cassidy, 2002).
  • Asian Financial Crisis (1997): Generalists flagged the fragility of pegged exchange rates. FX specialists executed targeted speculative attacks, bringing pegs down (Corsetti, Pesenti & Roubini, 1999).
  • COVID-19 Shock (2020): Generalists guided cross-asset reallocations in response to narrative shocks. Specialists monetized volatility clustering in derivatives markets (BIS, 2021).

The lesson is consistent: generalists anticipate and contextualize; specialists execute with precision. Neither can succeed alone.

Agentic AI: From Analysis to Action

Aristotle drew a distinction between poiesis (making, producing) and praxis (acting, doing with purpose). Generalists and specialists remain, in the end, in the realm of poiesis. They produce forecasts, insights, numbers, scenarios. The final act — the choice to rebalance a portfolio, execute a trade, or file a compliance report — remains human. But the emergence of agentic AI marks a departure. Here, systems are designed not merely to interpret the world, but to act within it.

In technical AI research, an “agentic” system is one that can sequence tasks, monitor its environment, call external tools, and pursue goals over time without constant human prompting. Applied to finance, this means an AI system that, upon detecting a liquidity risk, does not merely raise an alert but initiates a cascade of actions: rebalancing positions, adjusting hedges, querying specialists for calibration, and even drafting documentation for regulators.
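The control flow this implies can be sketched in a few lines. The example below is deliberately simplified and entirely hypothetical: an agent that acts only inside a delegated mandate and escalates to a named human owner otherwise. None of the class names, thresholds, or actions correspond to a real trading system.

```python
# Simplified sketch of the agentic control flow described above: monitor a
# condition, act only inside delegated boundaries, escalate otherwise.
# Every class, threshold, and action here is hypothetical.
from dataclasses import dataclass

@dataclass
class Mandate:
    max_rebalance_pct: float   # largest position shift the agent may make
    human_owner: str           # the person accountable for the agent's acts

def handle_liquidity_alert(coverage_ratio: float, mandate: Mandate) -> str:
    """Decide what the agent does when a liquidity signal fires."""
    if coverage_ratio >= 1.0:
        return "log: no action, coverage adequate"
    shortfall = 1.0 - coverage_ratio
    if shortfall * 100 <= mandate.max_rebalance_pct:
        # Inside delegated boundaries: act, but record attribution.
        return (f"rebalance {shortfall:.1%}, hedges adjusted, "
                f"action attributed to {mandate.human_owner}")
    # Outside the mandate: the agent must not act on its own authority.
    return f"escalate to {mandate.human_owner} for sign-off"

mandate = Mandate(max_rebalance_pct=2.0, human_owner="desk head")
print(handle_liquidity_alert(0.985, mandate))  # small gap: agent acts
print(handle_liquidity_alert(0.90, mandate))   # large gap: escalates
```

Even in this toy form, notice where the philosophical weight sits: in the `human_owner` field, without which the "action" has no author.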

This is qualitatively different from generalists and specialists. It is a move from interpretation to intervention.

Simulated Agency versus True Agency

Yet, here we must pause. The technical term “agentic” should not be confused with the philosophical concept of agency. To have agency, in the philosophical sense, is not simply to act; it is to act with authorship, intentionality, and responsibility. A system that executes instructions, however sophisticated, does not intend its acts; it follows patterns.

Hannah Arendt, in The Human Condition (1958), reminds us that action is not simply behavior. To act is to insert oneself into the world in a way that is unpredictable, bound up with responsibility, and irreducible to mechanical causality. By this definition, agentic AI does not have agency. It mimics the form of agency without its essence.

This is where confusion is most dangerous. In finance, what matters is not only that actions are taken, but that someone owns them. An AI that liquidates a position may save a firm from collapse, but who owns that act? The coder who designed it? The executive who approved its deployment? Or the machine itself?

Without clarity, we risk what Arendt feared: action without authorship.

Historical Parallels

History provides examples of what happens when systems act without clear attribution.

  • 1987 Black Monday: Portfolio insurance strategies, designed as semi-automatic hedging systems, accelerated the crash. No single trader “chose” to panic-sell; algorithms executed mechanically. The market was destabilized by action without clear agency.
  • 2010 Flash Crash: High-frequency trading systems cascaded into a liquidity vacuum. Regulators struggled to assign responsibility. Was it the firms, the coders, or the logic of the system itself? Again, the event revealed the dangers of systems that act faster than attribution can follow.

Agentic AI, if left unchecked, risks repeating this pattern at greater scale.

Philosophical Risks of Delegated Agency

The arrival of agentic AI therefore confronts finance with three interlocking philosophical risks:

  1. The Illusion of Calculability
    Keynes distinguished between measurable risk and unmeasurable uncertainty. Yet agentic systems, by design, must act as if all uncertainty were calculable. They collapse the unknowable into a rule-set. Heidegger would call this Gestell — the reduction of the world into mere resources to be optimized. When finance is enframed in this way, the human recognition of uncertainty — our caution, our doubt — is eroded.

  2. The Erosion of Autonomy
    Kant argued that moral worth lies in autonomy — self-legislation according to reason. If financial institutions increasingly delegate decisions to agentic systems, they risk sliding into heteronomy: following rules that originate outside human authorship. In practical terms, this means executives will defend decisions not as their own but as “what the system decided.” Kant would see this as a collapse of moral responsibility.

  3. The Diffusion of Responsibility
    Arendt warned that modern bureaucracies diffuse responsibility until no one owns action. Agentic AI risks deepening this. A trade executed by an agent will be the product of coders, compliance rules, supervisors, and the system’s own routines. The more distributed the act, the less anyone feels accountable. But finance, unlike other domains, is built on the assumption that accountability exists — every trade has a counterparty, every contract an author. If responsibility dissolves, trust collapses.

Institutional Implications

Agentic systems, if deployed in finance, will face immediate institutional challenges.

  • Regulation: Current regulatory frameworks (Basel, MiFID, Dodd-Frank) assume that decisions can be traced back to human sign-off. Agentic systems break this assumption. Regulators will demand new audit frameworks, perhaps even “AI responsibility registers,” where each autonomous action is logged with attribution to a human overseer (a minimal sketch of such a register follows this list).
  • Governance: Firms will need governance architectures that do more than monitor outputs. They will need structures that assign responsibility for every action, even when initiated by agents. This may involve new roles — “AI conduct officers” — tasked with owning agentic outputs.
  • Systemic Risk: Agentic systems could create correlated behavior at unprecedented speed. If multiple institutions deploy similar agents, systemic herding could occur in milliseconds, leading to crashes more severe than 1987 or 2010.
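As a concrete, hypothetical rendering of the responsibility-register idea above: an append-only log in which no autonomous action can be recorded without a named human overseer. This is a sketch of the data structure, not an existing regulatory schema.

```python
# Minimal sketch of an "AI responsibility register": an append-only log in
# which every autonomous action carries an explicit human owner.
# Hypothetical structure, not an existing regulatory schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class RegisterEntry:
    timestamp: datetime
    agent_id: str          # which system acted
    action: str            # what it did
    rationale: str         # the signal or rule that triggered it
    human_overseer: str    # the accountable person -- never empty

@dataclass
class ResponsibilityRegister:
    entries: list = field(default_factory=list)

    def record(self, agent_id: str, action: str, rationale: str,
               human_overseer: str) -> None:
        if not human_overseer:
            raise ValueError("every autonomous action needs a human owner")
        self.entries.append(RegisterEntry(
            datetime.now(timezone.utc), agent_id, action,
            rationale, human_overseer))

register = ResponsibilityRegister()
register.record("liquidity-agent-01", "rebalanced 1.5% of book",
                "coverage ratio fell below 1.0", "desk head")
```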

Beyond Risk Management: A Philosophical Reframing

The deeper question, however, is not technical but philosophical. Finance is not merely about efficiency; it is about trust. Trust rests on the assumption that actions are intentional and attributable. If the industry embraces “agentic” systems while neglecting the question of agency, it may inadvertently hollow out its own foundations.

Aristotle’s distinction between praxis and poiesis is useful here. Generalists and specialists remain within poiesis: they produce knowledge. Agents step into praxis. But Aristotle insisted that true praxis involves deliberation over ends, not just means. By this measure, no machine has agency. Only humans deliberate over ends.

The task, then, is not to deny the utility of agentic AI but to ensure that it remains subordinated to human agency. It may act, but humans must own its acts.

Toward a Three-Layered Future

The likely future of finance is a three-layered ecology, sketched schematically after the list below:

  • Generalists will interpret uncertainty, weaving disparate signals into narratives.
  • Specialists will provide depth, determinacy, and compliance.
  • Agents will integrate these and act, translating knowledge into intervention.
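The following sketch renders that hierarchy as a pipeline with the human author as the final gate. All four functions are stubs with invented signals and numbers; the point is the shape of the division of labor, not any particular model.

```python
# Sketch of the three-layer ecology as a pipeline, with the human author
# kept as the final gate. All functions are stubs; signals and numbers
# are invented for illustration.
def generalist_narrative(signals: dict) -> str:
    """Breadth layer: weave heterogeneous signals into a conjecture."""
    return (f"shipping down {signals['baltic_dry']:+.0%}, "
            f"PMI at {signals['china_pmi']}: trade slowdown risk")

def specialist_calibration(narrative: str) -> float:
    """Depth layer: turn the conjecture into an auditable number."""
    return 0.12  # stub, e.g. stressed probability of a liquidity breach

def agent_proposal(narrative: str, risk: float) -> str:
    """Agentic layer: integrate and propose an intervention."""
    return f"trim cyclicals by 1% (risk={risk:.0%}): {narrative}"

def human_signoff(proposal: str, approved: bool) -> str:
    """The act is only executed -- and owned -- if a human authors it."""
    return f"EXECUTED: {proposal}" if approved else f"HELD: {proposal}"

signals = {"baltic_dry": -0.18, "china_pmi": 48.9}
story = generalist_narrative(signals)
risk = specialist_calibration(story)
proposal = agent_proposal(story, risk)
print(human_signoff(proposal, approved=True))
```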

But this hierarchy only works if humans remain the authors. The danger is not that machines become too intelligent, but that humans abdicate too much responsibility.

Heidegger warned that technology risks “enframing” humanity itself, reshaping not just our tools but our purposes. Finance must resist this. If the sector allows “agentic” systems to redefine what counts as responsible action, it risks eroding the very notion of agency.

Conclusion: Preserving Agency in an Agentic Age

The opposition between generalists and specialists was always incomplete. The real disruption is the arrival of systems that act. But if “agentic” describes a technical capacity, agency describes a moral and philosophical necessity.

Finance cannot allow itself to forget this distinction. Markets depend on attribution, accountability, and authorship. Generalists anticipate, specialists execute, agents integrate. But only humans can bear responsibility. To confuse simulation with reality is to risk building a financial system that acts without owners, intervenes without intention, and optimizes without accountability.

The future of finance will not be decided by whether we deploy generalists, specialists, or agents. It will be decided by whether we preserve agency — the human capacity to act and to own action — in an age of agentic machines.


Jerome Favresse

Director of Operations

Jerome is a multi-asset strategist with more than 15 years of experience in investment banking, currently incubating new decision-support and AI algorithms. He specializes in research and strategy, Big Data, news, and sentiment analysis. Jerome holds a Master of Science (MSc) from Bordeaux Management School in France.