Is a zero-person logistics company realistic? Trade compliance sets the limits 


“ChatGPT, can you run a logistics company for me?”

In 2026, this question no longer sounds unrealistic. AI systems already support order capture, planning, coordination, and execution across logistics chains. Autonomous agents can work continuously, collaborate at speed, and handle tasks that once required large teams.

If AI can already do so much, it naturally raises a bigger question: Is a zero-person logistics company realistic?

To answer that, we first need to be clear about what full automation in logistics actually means, and where the boundaries lie.

The ideal vision of an autonomous logistics chain

When people talk about fully autonomous logistics, they are usually describing a future vision rather than today’s reality.

In that vision, AI is embedded across the entire logistics transaction lifecycle. It does not just assist individual tasks, but supports how orders are created, processed, executed, and closed, with coordination happening continuously across systems and partners.

AI supports order capture, inventory verification, and commercial negotiation. Goods are picked, packed, labelled, and required documentation is generated automatically. Logistics coordination and transportation planning are handled digitally, with shipments received, scanned, verified, and routed to support compliant flows.

As goods move across borders, shipments are pre-screened, risks are assessed, and regulatory requirements are taken into account. Transport execution is coordinated digitally, with deviations identified and addressed at regulatory touchpoints. Delivery is confirmed, reports are generated, and the transaction lifecycle is closed.

From a conceptual perspective, this is coherent and end-to-end. Each step can be described, connected, and partially automated. It represents an ambitious view of what AI-enabled logistics could look like in the future.

At the same time, this remains a conceptual model. It does not describe how logistics operates today, particularly in regulated environments. This is where the first practical limitations begin to surface.

Why AI struggles when compliance enters the picture

The challenge emerges when compliance becomes central to the process.

Ensuring compliance is not a single task. It is a complex, multi-step process that is not well suited to being handled end-to-end by large language models.

Compliance starts with understanding the transaction itself. Data must be extracted from commercial invoices, waybills, and related documents. That data then needs to be enriched with HS codes, ownership information, and supply chain context. Anomalies such as misclassification or undervaluation must be identified before any decision can be made.

From there, attention shifts to understanding the regulations. This requires identifying the relevant jurisdictions and applicable rules for a specific situation. Legal obligations must be interpreted across domains such as sanctions, export controls, and ESG, and aligned with internal policy and risk appetite.

Only after these steps can compliance actually be checked. Obligations are identified, additional data may be required, context must be reasoned about, and controls must be determined and executed. AI can support individual steps in this process, but it cannot reliably manage the entire compliance chain on its own.
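The three stages above — extracting transaction data, flagging anomalies, and applying controls — can be sketched as a deterministic pipeline. This is a minimal illustration, not an implementation from the article; the field names, thresholds, and sanctions check are hypothetical assumptions chosen only to make the shape of the process concrete.

```python
from dataclasses import dataclass

@dataclass
class Shipment:
    # Hypothetical fields representing data extracted from a commercial
    # invoice and waybill.
    hs_code: str
    declared_value: float
    origin: str
    destination: str

def extract(document: dict) -> Shipment:
    """Step 1: pull structured data out of the commercial documents."""
    return Shipment(
        hs_code=document["hs_code"],
        declared_value=document["declared_value"],
        origin=document["origin"],
        destination=document["destination"],
    )

def flag_anomalies(shipment: Shipment, reference_value: float) -> list[str]:
    """Step 2: surface anomalies such as undervaluation or weak
    classification before any decision is made."""
    issues = []
    if shipment.declared_value < 0.5 * reference_value:
        issues.append("possible undervaluation")
    if len(shipment.hs_code) < 6:
        issues.append("HS code too short to classify")
    return issues

def check_compliance(shipment: Shipment, sanctioned: set[str]) -> str:
    """Step 3: apply deterministic controls; block or escalate rather
    than let a model guess."""
    if shipment.destination in sanctioned:
        return "blocked: sanctioned destination"
    return "cleared"
```

The point of the sketch is that each step is explicit and auditable: an AI service could assist inside a step (for example, suggesting an HS code), but the sequence and the controls stay deterministic.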

This limitation becomes even clearer when AI is asked to take full responsibility.

What happens when AI is asked to run the company

The limits of human-like AI become visible when organizations try to let autonomous agents run an entire company.

This idea was tested in the scientific “Zero-person company” experiment conducted by KPMG in collaboration with the University of Amsterdam.

The study asked whether a business could operate entirely under the control of AI agents, with humans only in oversight roles.

In the experiment, a team of AI agents was set up to decide what kind of company to launch and then to run it. The agents chose to build a webshop offering personalized AI-generated art. With roles such as a virtual CEO, they worked continuously and could operate without breaks, generating business plans and coordinating tasks at speeds beyond what a human team can sustain.

However, the researchers quickly encountered important limitations. When autonomous agents were given broad, human-like roles such as financial or executive leadership, they drifted away from their instructions, hallucinating or stopping work in ways that undermined reliability.

This showed that simply assigning human titles to AI roles does not provide the structure or robustness needed for complex, end-to-end operations. As a result, the research shifted toward breaking work down into detailed processes and assigning agents to small, well-defined tasks. That approach proved more stable, but it still highlights how far current AI is from independently running an entire organization.

For logistics and trade compliance, where processes are deeply regulated and intolerant of error, this insight is particularly important.

Trade compliance sets hard limits on autonomy

Trade compliance requires both broad context and strict rule-following. Humans are able to reason across complex situations while still adhering to rules. AI struggles to balance these two demands at the same time.

There is an inherent tension between the broad context required to perform complex work and the narrow task specification needed to avoid hallucination. Large language models cannot reliably resolve this tension in regulated, end-to-end processes.

That is why relying on ChatGPT or generative AI alone to ensure compliance is simply not a wise idea.

The need for a hybrid approach to AI

Rather than replacing humans entirely, regulated domains require a different approach to automation.

In complex, end-to-end processes such as cross-border trade and transport, effective automation is built on a hybrid model. Deterministic workflows provide structure and control, while smaller, task-focused AI services support specific activities within clearly defined boundaries. These services assist the process, but do not own it.

The critical difference is that decisions are guided by machine-readable regulations. By encoding regulatory obligations directly into the workflow, organizations can automate at scale while preserving predictability, traceability, and accountability.
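One way to picture "machine-readable regulations" is as rules encoded as data and evaluated deterministically by the workflow, so every outcome traces back to a specific rule. The rule format, IDs, and thresholds below are hypothetical assumptions for illustration, not an actual regulatory encoding.

```python
# Hypothetical machine-readable rules: each rule is plain data, and the
# workflow (not an LLM) evaluates it, so results are predictable and
# every finding is traceable to the rule that produced it.
RULES = [
    {"id": "R1", "field": "destination", "op": "in", "value": {"XX", "YY"},
     "action": "block", "reason": "embargoed destination"},
    {"id": "R2", "field": "declared_value", "op": "gt", "value": 10_000,
     "action": "review", "reason": "high-value shipment needs a licence check"},
]

def evaluate(shipment: dict) -> list[dict]:
    """Run every encoded rule against a shipment record and return
    the findings, each tagged with the rule that triggered it."""
    findings = []
    for rule in RULES:
        actual = shipment[rule["field"]]
        matched = (
            (rule["op"] == "in" and actual in rule["value"])
            or (rule["op"] == "gt" and actual > rule["value"])
        )
        if matched:
            findings.append({
                "rule": rule["id"],
                "action": rule["action"],
                "reason": rule["reason"],
            })
    return findings
```

In a hybrid setup, a constrained AI service might populate fields such as the HS code or the declared value, while the rules themselves stay deterministic, versioned, and auditable.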

This combination of structured orchestration and constrained AI is what makes safe automation possible in practice, particularly in areas such as trade and transport compliance.

Is a zero-person logistics company realistic?

Based on what is observed in practice, the answer is no, at least not today.

AI can support many logistics activities and significantly accelerate operations. However, compliance remains a complex, multi-step process that cannot be fully delegated to large language models.

Trade compliance sets clear limits on autonomy. The realistic future is not a zero-person logistics company, but a carefully designed hybrid model where automation supports logistics at scale, while compliance remains structured, constrained, and accountable.
