Key Takeaways
- Upstart's CEO publicly admitted that humans have never been good at underwriting, but cautioned that AI alone won't solve the problem either.
- For MCA lenders, the real opportunity is building hybrid workflows where AI handles extraction and pattern detection while humans make final judgment calls.
- Automated bank statement analysis catches inconsistencies that manual review routinely misses, but it still requires underwriter oversight for edge cases.
- Revenue-based financing models, now endorsed by NYC's mayor, increase the urgency for accurate, real-time cash flow verification.
- Platforms like Let's Submit combine AI-powered document extraction with human review layers to give funders both speed and accuracy.
Upstart's Admission and What It Means for MCA Lenders
During Upstart's Q4 earnings call, CEO Paul Gu made a statement that rattled the lending world: "Unfortunately, humans have never really been very good at precisely underwriting loans and figuring out the cash flows they're going to produce for the next 5 years." The admission was striking not because it was wrong, but because it came with an equally candid qualifier. AI underwriting for merchant cash advance and other lending products, Gu acknowledged, inherits many of the same limitations. Models are only as good as the data they consume and the assumptions they encode.
For MCA funders processing dozens or hundreds of applications per week, this creates a real strategic question. If humans miss patterns in bank statements and AI models can be fooled by fabricated documents or unusual cash flow profiles, where does that leave underwriting? The answer isn't choosing one over the other. It's building a workflow where each compensates for the other's weaknesses. Automated extraction catches what tired eyes miss. Human judgment catches what rigid models can't contextualize.
This article breaks down why the human-versus-AI framing is a false choice, what a hybrid underwriting workflow actually looks like in practice, and how MCA lenders can implement one without rebuilding their entire operation.
Why Manual Underwriting Breaks Down at Scale
The Cognitive Limits of Bank Statement Review
An experienced MCA underwriter reviewing a three-month bank statement package is scanning for daily balances, deposit consistency, NSF fees, negative days, existing MCA payment patterns, and revenue trends. That's a lot of signal packed into dense, often poorly formatted PDF pages. Research on cognitive load in financial decision-making consistently shows that accuracy drops sharply after the first 20 to 30 minutes of continuous document review. By the time an underwriter reaches the third application in a stack, they're pattern-matching on autopilot rather than genuinely analyzing.
The result is predictable. Subtle red flags get missed. A business that shows healthy average deposits but has a suspiciously regular cadence of same-day withdrawals might sail through review. An applicant whose statements show classic signs of MCA stacking could get funded simply because the underwriter was fatigued when they reviewed page seven of a twelve-page statement.
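The signals described above (deposit consistency, NSF fees, negative days, balance trends) are mechanical to compute once transactions are parsed out of the statement. Here is a minimal sketch of what that computation might look like, assuming transactions have already been extracted into structured records; the `Txn` type and field names are illustrative, not a real platform's schema:

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean, pstdev

@dataclass
class Txn:
    day: date
    amount: float        # positive = deposit, negative = withdrawal
    balance: float       # end-of-day balance after this transaction
    description: str = ""

def statement_metrics(txns: list[Txn]) -> dict:
    """Compute the same signals an underwriter scans for manually."""
    deposits = [t.amount for t in txns if t.amount > 0]
    daily_balances: dict[date, float] = {}
    for t in txns:                      # keep the last balance seen each day
        daily_balances[t.day] = t.balance
    balances = list(daily_balances.values())
    return {
        "total_deposits": sum(deposits),
        "deposit_count": len(deposits),
        "avg_daily_balance": mean(balances) if balances else 0.0,
        "negative_days": sum(b < 0 for b in balances),
        "nsf_count": sum("NSF" in t.description.upper() for t in txns),
        # High variability in deposit size is a cue to look closer, not a verdict.
        "deposit_volatility": pstdev(deposits) if len(deposits) > 1 else 0.0,
    }
```

A machine computes these numbers identically on application one and application fifty; a fatigued reviewer does not. That consistency, not cleverness, is the point of the extraction layer.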
Inconsistency Across Underwriters
Manual processes also produce inconsistency. Two underwriters at the same shop, given the same application, will frequently reach different conclusions about risk level. One might flag a business with seasonal revenue dips as high risk. Another might recognize it as a normal pattern for a landscaping company and approve. Neither is necessarily wrong, but the lack of standardization makes portfolio-level risk management nearly impossible.
This is precisely the gap Upstart was pointing to. Humans bring judgment and context, but they lack precision and repeatability. When your funding volume grows, those inconsistencies compound into real portfolio losses.
Why AI Alone Doesn't Solve the Problem
The Garbage-In, Garbage-Out Problem
AI underwriting models, whether they use gradient-boosted decision trees, neural networks, or large language models for document parsing, share a fundamental vulnerability: they trust their inputs. If a fraudster submits a well-crafted fake bank statement, a pure AI pipeline may extract the data flawlessly, calculate the ratios correctly, and approve the deal based on numbers that never existed.
As we've explored in our analysis of how AI fraud detection catches fabricated bank statements, the best detection systems look beyond the numbers themselves. They examine PDF metadata, font consistency, pixel-level anomalies, and cross-reference transaction patterns against known bank formatting. But even these systems produce false negatives. The arms race between fabrication tools and detection tools is ongoing, and no single AI layer catches everything.
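To make the metadata layer concrete, here is a small sketch of the kind of sanity checks it performs. The function takes a plain dict of metadata fields (as you might pull from a library such as pypdf) rather than a raw file; the suspect-producer list and the dict keys are illustrative assumptions, and no single flag here proves tampering:

```python
from datetime import datetime

# Producers commonly seen in editing tools rather than bank export
# pipelines. Illustrative, not exhaustive.
SUSPECT_PRODUCERS = {"ilovepdf", "sejda", "smallpdf", "pdfescape", "photoshop"}

def metadata_flags(meta: dict) -> list[str]:
    """Return human-readable flags from PDF metadata fields.

    `meta` is assumed to hold keys like 'producer', 'created', and
    'modified' (datetimes). Flags prompt review, never auto-rejection.
    """
    flags = []
    producer = (meta.get("producer") or "").lower()
    if any(tool in producer for tool in SUSPECT_PRODUCERS):
        flags.append(f"produced by editing tool: {meta['producer']}")
    created, modified = meta.get("created"), meta.get("modified")
    if created and modified and modified > created:
        # Bank-generated statements are rarely modified after creation.
        flags.append("modified after creation")
    if not producer:
        flags.append("missing producer metadata")
    return flags
```

Checks like these are cheap and fast, which is why they run first; font and pixel-level analysis is heavier and typically runs only on documents that already look questionable.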
Context That Models Can't Easily Learn
Consider a restaurant owner who just relocated to a new storefront. Their bank statements show a sudden drop in deposits during the transition month, followed by a slow ramp back up. A human underwriter who reads the application notes and sees the new lease agreement understands the story. An AI model trained on historical defaults might simply flag the deposit drop as a risk signal and either reject the deal or assign a punitive factor rate.
MCA lending is full of these contextual edge cases. Seasonal businesses, businesses recovering from natural disasters, merchants transitioning between payment processors: all of these create cash flow patterns that look like risk to a model but make perfect sense to a knowledgeable underwriter. The Federal Reserve's Small Business Credit Survey consistently shows that small businesses experience highly variable revenue patterns, which makes rigid model-based decisioning particularly dangerous in this segment.
Growing Regulatory Scrutiny of AI Decisions
There's also a regulatory dimension. In 2026, both the Consumer Financial Protection Bureau and state-level regulators are paying closer attention to how AI is used in credit decisions. The concern isn't just accuracy; it's explainability. If an MCA funder denies a deal based on an AI model's output, can they explain why? Black-box models that produce a risk score without a clear rationale create legal exposure, especially as disclosure requirements tighten under frameworks like California's evolving small business lending regulations.
What a Hybrid AI-Human Underwriting Workflow Looks Like
The smartest MCA operations aren't choosing between AI and human underwriting. They're layering them. Here's how that works in practice.
Layer One: AI-Powered Extraction and Flagging
The first layer is automated. When an application arrives, whether through a secure upload link, a forwarded email, or a broker submission, AI immediately goes to work. Documents are classified by type: bank statements, tax returns, voided checks, driver's licenses, business applications. Key fields are extracted: business name, EIN, owner information, monthly deposits, average daily balances, NSF counts, existing advance payments.
This is where platforms like Let's Submit operate. The system parses uploaded documents using AI-powered extraction, pulls structured data from unstructured PDFs, and surfaces it in a clean dashboard for underwriter review. Instead of spending 15 minutes per application manually keying data into a spreadsheet, the underwriter sees everything pre-populated and ready for analysis.
At this stage, the AI also flags anomalies. Statements that appear to have been edited. Deposit patterns that don't match the stated business type. Gaps in statement dates. These flags don't trigger automatic rejections; they trigger human attention.
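As a sketch of how layer-one flagging might work once extraction has run, the function below compares stated revenue against observed deposits and checks for gaps in the statement months. The input shape (month strings keyed to deposit totals) and the 25% threshold are illustrative assumptions, not a real platform's rules:

```python
def _month_index(ym: str) -> int:
    """Convert a 'YYYY-MM' string to a monotonically increasing month count."""
    y, m = ym.split("-")
    return int(y) * 12 + int(m)

def flag_application(stated_monthly_revenue: float,
                     monthly_deposits: dict[str, float]) -> list[str]:
    """Layer-one flags: surface anomalies for human review, never auto-reject."""
    flags = []
    months = sorted(monthly_deposits)
    # Gap check: statement months should be consecutive.
    for a, b in zip(months, months[1:]):
        if _month_index(b) - _month_index(a) != 1:
            flags.append(f"gap in statements between {a} and {b}")
    # Stated revenue should roughly match observed deposits.
    avg = sum(monthly_deposits.values()) / len(monthly_deposits)
    if stated_monthly_revenue > avg * 1.25:
        flags.append(f"stated revenue {stated_monthly_revenue:,.0f} exceeds "
                     f"avg observed deposits {avg:,.0f} by >25%")
    return flags
```

Note what the function returns: strings a human can read, not a score. The point of this layer is to direct underwriter attention, so every flag should explain itself.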
Layer Two: Human Judgment and Context
The second layer is where experienced underwriters earn their keep. They review the AI-extracted data, examine flagged items, and apply the contextual reasoning that models can't replicate. They call the merchant if something looks off. They cross-reference the application details against what the bank statements actually show. They make the final funding decision.
This layer is also where deal velocity matters. If the AI extraction is accurate and the flagging is intelligent, the underwriter spends their time on judgment calls rather than data entry. A workflow that used to take 45 minutes per deal compresses to 10 or 15 minutes, with higher accuracy because the underwriter is focused on analysis rather than transcription.
Layer Three: Feedback Loops That Improve Both
The most sophisticated operations build a feedback loop between the two layers. When an underwriter overrides an AI flag, that override gets recorded. When a funded deal defaults, the system can trace back to see what the AI flagged (or missed) and what the human decided. Over time, this data makes both the AI models and the human reviewers better.
This is the real promise of AI in MCA underwriting. Not replacing humans, but creating a system where machine precision and human judgment reinforce each other continuously.
Revenue-Based Financing and the Verification Urgency
The timing of this conversation matters. NYC Mayor Mamdani recently endorsed revenue-based financing as the preferred model for small business lending through the city's Future Fund program, as reported by deBanked. When repayment is tied directly to a percentage of revenue, the accuracy of cash flow verification becomes even more critical. Overestimate revenue and the merchant drowns in payments. Underestimate it and the funder's returns collapse.
Revenue-based models, including standard MCA structures, depend on having a clear, verified picture of how much money actually flows through a merchant's bank account. That picture can't come from a quick glance at a PDF. It requires systematic extraction, validation, and analysis of every deposit, withdrawal, and balance over months of activity.
For lenders operating in this space, the workflow described above isn't optional. It's table stakes. The volume of applications is growing. The complexity of fraud is increasing. And the regulatory environment is demanding more documentation of how decisions are made. A hybrid AI-human approach, supported by purpose-built tooling, is the only way to scale without scaling your risk proportionally.
Frequently Asked Questions
Can AI fully replace human underwriters in MCA lending?
No, not reliably. AI excels at extracting data from bank statements, flagging anomalies, and performing pattern recognition across large document sets. However, MCA underwriting involves contextual judgment that current AI models handle poorly, such as understanding why a business's deposits dropped during a relocation or recognizing seasonal revenue patterns unique to certain industries. The most effective approach combines AI-powered extraction and flagging with human review for final decisioning.
How does AI underwriting detect fake bank statements?
AI fraud detection systems analyze multiple layers of a bank statement beyond the numbers. They examine PDF metadata, font rendering consistency, spacing patterns, and pixel-level artifacts that indicate digital manipulation. More advanced systems cross-reference transaction patterns against known bank formatting templates and flag statistical anomalies in deposit or withdrawal cadences. No single technique catches everything, which is why layered detection combined with human review produces the best results.
What is hybrid underwriting for MCA lenders?
Hybrid underwriting combines AI-powered document extraction and risk flagging with human underwriter review. The AI layer handles data extraction, document classification, and anomaly detection automatically. The human layer reviews the AI's output, applies contextual judgment, and makes the final funding decision. This approach reduces manual data entry by up to 80% while maintaining the nuanced decision-making that pure automation can't provide. Platforms like Let's Submit are built specifically to enable this hybrid workflow for MCA funders.
Why is cash flow verification important for revenue-based lending?
Revenue-based lending, including merchant cash advances, ties repayment directly to a business's incoming revenue. If the lender's picture of that revenue is inaccurate, either from poor data extraction or fraudulent documents, the entire deal economics break down. Accurate, automated bank statement analysis ensures that deposit patterns, average balances, and revenue trends are correctly captured before a funding decision is made. This protects both the lender's returns and the merchant from unsustainable payment obligations.
Conclusion
Upstart's CEO said the quiet part out loud: humans aren't great at underwriting. But the lesson for MCA lenders isn't to hand everything over to algorithms. It's to build workflows where AI handles what it does best (extraction, pattern detection, and speed) while humans handle what they do best (context, judgment, and merchant relationships). The lenders who get this balance right will fund more deals, catch more fraud, and build more durable portfolios than those chasing either extreme.
Let's Submit was built for exactly this kind of workflow. AI-powered document extraction feeds directly into an underwriter-friendly review dashboard, giving your team the speed of automation and the confidence of human oversight. Visit letssubmit.ca to see how it fits into your operation.