Risky Business: AI’s Adoption By USCIS, EU Gov’ts Illuminates EB-5 Obstacles

2023/07/25 11:03am

By Mona Shah, Esq. and Rebecca S. Singh, Esq.

Trick question: Could the use of artificial intelligence (“AI”) by U.S. Citizenship and Immigration Services (“USCIS”) be the answer to EB-5 practitioners’ woes … or will it just add to them, making delays and other inefficiencies worse?

While the burgeoning technology continues to make inroads into governments both overseas and domestic, some experts—though cognizant of AI’s benefits—warn that it is not a one-size-fits-all solution, particularly for residency- and citizenship-by-investment (“RCBI”) programs. Indeed, implementing AI without attention to context could be a big issue for America’s own initiative in this space: the aforementioned EB-5 program offered by USCIS, whose processes are too nuanced to be left entirely to the devices of nonhuman adjudicators. To gain more insight, Mona Shah & Associates Global (“MSA Global”) turned to a couple of business writers in the EB-5 space for input.

“Until AI gets fully integrated with market research resources, it seems premature for it to judge the merits of a business plan or pro forma,” noted William T. Dean, VP of Immigration at Masterplans.com, a Portland, Oregon-based business-plan specialist and consultancy with expertise in EB-5. He offered a hypothetical scenario involving GPT-4, the OpenAI model behind ChatGPT, as an example. “Think of it this way: [If] an EB-5 investor in a restaurant is wondering whether a certain size property can accommodate 10 jobs, you can feed the square footage into a resource like … GPT-4 and get accurate estimates for customer seating and, by extension, staffing headcount. You could then use that data to model out a seemingly viable EB-5 investment in a business plan.”

Dean added, however, that such data would not necessarily be a precise fit for all EB-5 package components. “GPT-4 doesn’t understand that a dense urban core will support that business, whereas a ghost town will not—it’s just spitting out numbers.” As such, he suggested a bespoke approach informed by human intuition would be better.

“Context is vital, and market-specific data sets must be taken into account from the outset.” —Dean

“Context is vital, and market-specific data sets must be taken into account from the outset,” explained Dean. “Would AI, in its current form, discover that the ghost town restaurant is a bad EB-5 investment, or would it confirm its own numbers and falsely conclude that those jobs will be real? I’m sure some plan writers are making ample use of this emerging technology to save time, but there’s a serious risk if it’s not checked carefully by human eyes.”
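
To see Dean’s point in concrete terms, consider the kind of back-of-envelope arithmetic a general-purpose model can produce from square footage alone. The sketch below is purely illustrative; the seats-per-square-foot and seats-per-employee ratios are assumed rules of thumb, not figures drawn from USCIS guidance or from Masterplans.com.

```python
# Hypothetical back-of-envelope estimate of the kind a general-purpose
# model might produce from square footage alone. The ratios below are
# illustrative assumptions, not EB-5 or industry-certified figures.

DINING_SHARE = 0.60        # assumed share of floor space used for seating
SQFT_PER_SEAT = 15.0       # assumed square feet needed per customer seat
SEATS_PER_EMPLOYEE = 5.0   # assumed seats served per full-time employee

def estimate_jobs(total_sqft: float) -> dict:
    """Estimate seating capacity and staffing from raw square footage."""
    dining_sqft = total_sqft * DINING_SHARE
    seats = int(dining_sqft // SQFT_PER_SEAT)
    headcount = round(seats / SEATS_PER_EMPLOYEE)
    return {"seats": seats, "estimated_jobs": headcount}

if __name__ == "__main__":
    # A 2,500 sq ft space "supports" roughly 100 seats and 20 jobs on paper,
    # whether it sits in a dense urban core or in a ghost town.
    print(estimate_jobs(2500))
```

On paper, the output clears the 10-job threshold comfortably; nothing in the calculation, however, asks whether the surrounding market can actually fill those seats, which is precisely the gap Dean describes.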

That may limit AI’s applications in a space where so much depends on adjudicators’ interpretations. “I can’t think through how AI will assist in [the decision-making and the review of the file overall] unless they come up with a grading system, which they don’t really have,” said Bernard Rojano, Founder and Lead Consultant of Houston-based strategic consulting services firm Xecute Business Plan Solutions. Calling the process “very subjective,” Rojano said the best way for USCIS to use AI would be “blocking and tackling of the checklist items they have to work through, and [reviewing] the filing in a more efficient way.”
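
Rojano’s “blocking and tackling” is, in software terms, automated completeness checking. A minimal sketch of what that might look like appears below; the required items are hypothetical placeholders rather than an actual USCIS checklist, and the point is simply that mechanical review of this kind automates far more cleanly than the subjective merits analysis Rojano describes.

```python
# Hypothetical completeness check for an EB-5 filing. The required items
# are illustrative placeholders, not an official USCIS checklist.

REQUIRED_ITEMS = [
    "petition form",
    "business plan",
    "job creation analysis",
    "source of funds documentation",
    "regional center documentation",
]

def review_filing(submitted: set[str]) -> list[str]:
    """Return the checklist items missing from a submitted filing."""
    return [item for item in REQUIRED_ITEMS if item not in submitted]

if __name__ == "__main__":
    filing = {"petition form", "business plan", "source of funds documentation"}
    missing = review_filing(filing)
    if missing:
        print("Incomplete filing; missing:", ", ".join(missing))
    else:
        print("All checklist items present; route to human adjudicator.")
```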

This charge may be easier said than done. “The U.S. government is terrible at applying new technology,” opined Rojano, adding that he believes USCIS will use AI for “playing defense” rather than as an “offensive tool.”

Consider, for one, the agency’s slow-as-molasses path to going electronic—even though for years practitioners have been sending reams and reams of paper filings … often multiplying by 50 or more when dealing with project issues entailing large raises. Though USCIS has expressed interest in heading down the electronic route, its efforts have been piecemeal at best. “Hopefully they’ll go paperless pretty soon,” Rojano noted. “If they’d go paperless and start taking on files, then AI is a tool for them.”

AI’s potential as an instrument of efficacy, however, is tempered by its capacity for disruption. Recognizing this, the European Union (“EU”) has reportedly targeted AI in sweeping legislation, the “AI Act,” that would regulate the industry—via designations assessing the level of risk each company’s AI system presents—in an effort to mitigate the potential spread and impact of misinformation. The EU’s embrace of the technology extends to member states such as Portugal, whose government recently opted to incorporate AI into its digital processes for Portuguese citizenship applications; the continent’s collective focus is thus not only on the benefits AI can provide, but also on the need for a consistent, ongoing vetting process to ensure compliance and public safety.

The statistics show the basis for these worries. According to the EU’s Eurostat resource, 8% of EU enterprises used AI technologies in 2021; that year, 53% of EU enterprises that utilized AI purchased ready-to-use commercial AI software or systems. Although the percentage of enterprises using at least one AI technology varied greatly among EU countries, the trend clearly favors adoption: Per Eurostat, the top players in this category for 2021 were Denmark (24%), Portugal (17%) and Finland (16%), with more than half of all EU nations hovering above the 5% level. When one considers AI’s capabilities for reducing inefficiencies—including faster communications, data analyses, and internal processes—these numbers may come as no surprise.

They may, however, warrant a cautious approach. During the Investment Migration Council’s recent Investment Migration Forum 2023 in London, stakeholders discussed AI’s potential applications to RCBI, with particular attention to the need to address risk. These conversations made their way into panels such as “Revolutionizing Due Diligence with AI: Exploring Emerging Technologies and Other Best Practices,” which in part covered the EU’s privacy concerns relating to AI regulations; issues with how AI draws its conclusions; and the search limits instituted by governments that parcel out due-diligence duties to third-party companies.

“There’s an enormous risk in dehumanizing traditionally human tasks like vetting someone’s immigration candidacy.” —Dean

Yet despite endeavors to police this sector, much remains unknown—and putting privacy and other concerns in the hands of automata may present problems. “I think there’s an enormous risk in dehumanizing traditionally human tasks like vetting someone’s immigration candidacy,” asserted Dean. “Confidentiality violations are just one facet of that; what bothers me more is the idea of a robot judging a sensitive issue that ought to involve some element of compassion and subjectivity.”

So far, such troubling scenarios may be more science fiction than real life, though AI’s rapid climb through the tiers of EU and U.S. government agencies suggests that this is just the start of a technological avalanche. Already, a number of American federal offices utilize AI, including USCIS’s fellow Department of Homeland Security (“DHS”) divisions U.S. Customs and Border Protection (“CBP”) and U.S. Immigration and Customs Enforcement (“ICE”). At USCIS, AI plays a big part in areas such as biometrics collection and analysis; for example, the agency’s Biometrics Enrollment Tool (“BET”) fingerprint quality score uses a trained machine-learning algorithm to produce numerical scores grading the quality of fingerprint results.
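
The BET quality score is described here only at a high level, so the sketch below should be read as an assumed, generic illustration of how a trained model can map capture-level features to a numerical quality grade, not as the agency’s actual implementation. It uses scikit-learn and synthetic stand-in data.

```python
# Generic illustration of a learned quality score: features extracted from
# fingerprint captures are mapped to a 0-100 quality grade by a regression
# model. This is an assumed pipeline, not USCIS's actual BET system.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in features per capture: [contrast, ridge_clarity, smudge_ratio]
X_train = rng.random((500, 3))
# Stand-in labels: quality grades a human examiner might have assigned.
y_train = (60 * X_train[:, 1] + 30 * X_train[:, 0]
           - 40 * X_train[:, 2] + 50).clip(0, 100)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def quality_score(contrast: float, ridge_clarity: float, smudge_ratio: float) -> float:
    """Return a 0-100 quality score for one fingerprint capture."""
    score = model.predict([[contrast, ridge_clarity, smudge_ratio]])[0]
    return float(np.clip(score, 0, 100))

if __name__ == "__main__":
    print(round(quality_score(0.8, 0.9, 0.1), 1))   # clean capture: high score
    print(round(quality_score(0.3, 0.2, 0.7), 1))   # smudged capture: low score
```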

Given AI’s transformative impact on the industry and the relative swiftness with which the technology has been implemented, experts are wary of welcoming these changes without taking into consideration a time-tested adage: with great power comes great responsibility. Because this is, in general, uncharted territory for the sector, the application of AI solutions to problems such as processing delays may elicit unforeseen challenges.

“There’s a valid argument to be made that something—ANYTHING—that can speed up the pitifully poor processing timelines is a good idea, and lots of stakeholders want greater consistency in how petitions get adjudicated, but at what cost?” inquired Dean. “Consider the many E-2 visa investors whose startup concepts rely on proprietary data, or National Interest Waiver candidates whose roles in [science, technology, engineering and mathematics] involve confidential research or product development. AI platforms are, by nature, learning modules; they grow more powerful as they are fed information.”

As such, “[there’s] a tangible risk that data could be pasted into AI which discloses something not intended for public consumption,” continued Dean. “Are we to trust that all program settings will be religiously set and monitored so that private content remains confidential? How would we ever know? What safeguards would there be? Who’s liable if a breach occurs?”
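
One concrete form of the safeguard Dean is asking about is a redaction pass that strips obvious identifiers before any text leaves the preparer’s environment for an external AI service. The sketch below is an assumed, minimal example built on regular expressions; the patterns are illustrative and nowhere near a complete data-loss-prevention control.

```python
# Minimal, illustrative redaction pass applied before text is sent to any
# external AI service. Patterns cover only a few obvious identifiers and
# are not a substitute for a real data-loss-prevention review.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "A_NUM": re.compile(r"\bA\d{8,9}\b"),   # illustrative alien-number-style pattern
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    sample = "Investor A123456789, reachable at jane@example.com or 212-555-0147."
    print(redact(sample))
    # -> "Investor [REDACTED A_NUM], reachable at [REDACTED EMAIL] or [REDACTED PHONE]."
```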

All of these questions may be answered as AI oversight continues to evolve. At this stage, however, Dean—like others in the industry—is not convinced the technology will become a headache-free alternative to traditional EB-5 processes. “I am sure these risks can be mitigated,” he opined, “but I do not believe they can be eliminated, so I’m skeptical that widespread adoption of AI in these applications is a guaranteed ‘good thing.’”

It may be a while, anyway, before USCIS explores the full potential of AI to assist in adjudication and fraud detection. “I don’t see that happening for quite some time,” said Rojano. “They’ll have live people in those cubicles probably until I’m gone from this earth.”

That may well sound like sci-fi, but as EB-5 practitioners know, when it comes to USCIS, reality is stranger than fiction.

Simon Butler contributed to this article.