On February 24, 2025, the Global Developer Pioneer Conference took place in Shanghai, featuring a subforum titled “Building a New Financial Ecosystem: The Application and Practice of AI Large Models.” Moderated by Mao Mingjiang, Chief TMT Editor at Financial Associated Press, the session brought together prominent figures from the financial and tech sectors, including Li Qiang, Chief Information Officer at FuGuo Fund; Yu Haohan, General Manager of the Data Management and Application Division at Shanghai Bank; Cui Hongyu, Chief Data Officer at Huaxin Securities; and Yu Li, Chief Information Officer of New Knowledge Group. The discussion centered on “New Opportunities for the Financial Industry in the AI Age.”
The panelists highlighted the unique challenges the financial sector faces compared with other industries, particularly around data accuracy, compliance, and security. Despite these challenges, they unanimously agreed that embracing AI large models in finance is an irreversible trend. Although issues such as “model hallucination,” data security concerns, and stringent regulation remain, the potential of AI to reduce costs, increase efficiency, and open new lines of business is a prospect worth pursuing.
Li Qiang shared FuGuo Fund’s experience with AI, noting that the firm began integrating large language models into its investment decision-making processes as early as the second half of 2023. “We have injected large language models into our traditional deep-learning and machine-learning systems for quantitative investment decision-making,” he explained. “Currently, we primarily use large language models to analyze sentiment factors: we feed market research reports and news into the model, use a chain of thought to score them, and build a sentiment index.”
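The approach Li describes, scoring individual documents with a chain-of-thought prompt and aggregating the scores into a sentiment index, can be illustrated with a minimal sketch. The `call_llm` stub, the prompt wording, and the scoring scale below are assumptions for illustration, not FuGuo Fund’s actual system.

```python
# Minimal sketch of LLM-based sentiment scoring aggregated into a daily index.
# `call_llm` is a hypothetical stand-in for whatever model endpoint is used;
# the prompt and the -1..1 scale are illustrative assumptions.

from statistics import mean

def call_llm(prompt: str) -> str:
    """Placeholder for a large-language-model call; returns a score as text."""
    return "0.2"  # canned response so the sketch runs end to end

def score_sentiment(document: str) -> float:
    prompt = (
        "Read the following research note or news item, reason step by step "
        "about its market implications, then output only a sentiment score "
        "between -1 (very bearish) and 1 (very bullish).\n\n" + document
    )
    return float(call_llm(prompt))

def sentiment_index(documents: list[str]) -> float:
    """Average per-document scores into a single sentiment index."""
    return mean(score_sentiment(doc) for doc in documents)

if __name__ == "__main__":
    docs = ["Broker upgrades the semiconductor sector on strong export data.",
            "Regulator announces a probe into several listed developers."]
    print(f"sentiment index: {sentiment_index(docs):+.2f}")
```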
In terms of investment practices, FuGuo Fund has successfully harnessed large language model technology for asset allocation and report analysis, significantly enhancing operational efficiency. “Previously, it took about a week to summarize all market research reports, categorize them by industry, and then review them before sending them to institutional investors. Now, a researcher can complete the categorization, summarization, and publication of all industry reports in just three hours each day,” Li noted.
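The workflow Li describes, sorting reports by industry and then producing summaries for publication, could look roughly like the sketch below. The industry taxonomy, the prompts, and the `call_llm` stub are illustrative assumptions rather than the fund’s actual pipeline.

```python
# Rough sketch of a categorize-then-summarize pipeline for research reports.
# The industry list, prompts, and `call_llm` stub are illustrative assumptions.

from collections import defaultdict

INDUSTRIES = ["Banking", "Semiconductors", "Consumer", "Energy", "Other"]

def call_llm(prompt: str) -> str:
    return "Other"  # placeholder response so the sketch runs

def classify(report: str) -> str:
    label = call_llm(
        f"Assign this research report to one of {INDUSTRIES}. "
        f"Answer with the label only.\n\n{report}"
    )
    return label if label in INDUSTRIES else "Other"

def summarize(reports: list[str]) -> str:
    return call_llm("Summarize the key points of these reports in five bullets:\n\n"
                    + "\n---\n".join(reports))

def build_digest(reports: list[str]) -> dict[str, str]:
    """Group reports by industry and produce one summary per industry."""
    by_industry: dict[str, list[str]] = defaultdict(list)
    for report in reports:
        by_industry[classify(report)].append(report)
    return {industry: summarize(items) for industry, items in by_industry.items()}
```

A human reviewer would still check each digest before it is sent out, consistent with the review step Li mentions.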
Meanwhile, Yu Haohan of Shanghai Bank emphasized the bank’s exploratory initiatives using large models to enhance financial services. “The application of large models in banking customer service has drastically disrupted our entire development model. The new approach resembles scriptwriting rather than following a fixed process; it is a matter of soft constraints returning to the fundamentals. However, performance bottlenecks remain: transactions previously concluded in two seconds, and achieving the same speed post-optimization has proven challenging,” Yu elaborated.
Moreover, Shanghai Bank is piloting an innovative project called “Smart Inquiry.” “Our objective is to have the business departments communicate their data development needs directly to the large model, with 80% of inquiries handled by the model itself and only 20% left for human operators. We have nearly completed this project and expect to launch it soon,” Yu shared.
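As described, “Smart Inquiry” sends data requests to the model first and falls back to human analysts for the remainder. A heavily simplified routing sketch might look like the following; the function names, the confidence mechanism, and the threshold are hypothetical, not details disclosed by Shanghai Bank.

```python
# Simplified sketch of a "Smart Inquiry"-style router: the model answers the
# data requests it is confident about; the rest go to a human analyst queue.
# All names, the confidence mechanism, and the 0.8 threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class ModelAnswer:
    sql: str
    confidence: float  # assumed to come from the model or a separate verifier

def model_generate_sql(request: str) -> ModelAnswer:
    """Placeholder for the large model turning a business request into SQL."""
    return ModelAnswer(sql="SELECT count(*) FROM accounts;", confidence=0.9)

def route(request: str, threshold: float = 0.8) -> str:
    answer = model_generate_sql(request)
    if answer.confidence >= threshold:
        return f"[model] {answer.sql}"    # the roughly 80% handled automatically
    return f"[human queue] {request}"     # the roughly 20% left to analysts
```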
In the securities sector, Cui Hongyu of Huaxin Securities highlighted the paramount importance of data, particularly the pursuit of highly accurate data access. “With large models deployed, structured data can already be retrieved precisely and in real time. Unstructured data, however, such as the huge volume of research reports, annual reports, and prospectuses, remains a challenge because of the number and complexity of sources. Finding the accurate data related to a specific event amid that mass of unstructured material is particularly difficult,” Cui said. To address this, Huaxin Securities focuses on keeping its data accurate. “For instance, we ensure that only the most precise materials enter the backend database. We cannot allow arbitrary reports or meeting minutes to contaminate the data, which keeps clutter to a minimum.”
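Keeping arbitrary reports and meeting minutes out of the backend database amounts to gating ingestion on a whitelist of trusted sources. A minimal sketch of that idea, with hypothetical source names rather than Huaxin’s actual pipeline:

```python
# Minimal sketch of gating ingestion on trusted sources so ad-hoc documents
# cannot contaminate the backend database; source names are hypothetical.

TRUSTED_SOURCES = {"exchange_filings", "audited_annual_reports", "prospectuses"}

def ingest(documents: list[dict], store: list[dict]) -> None:
    """Add only documents from whitelisted sources to the retrieval store."""
    for doc in documents:
        if doc.get("source") in TRUSTED_SOURCES:
            store.append(doc)
        # anything else (ad-hoc notes, meeting minutes) is dropped or routed to review
```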
The New Knowledge Group has successfully integrated DeepSeek-R1 and DeepSeek-V3 into the New Knowledge platform. Yu Li highlighted the significance of ongoing collaborations with mainstream computing-power providers such as Ascend and Haiguang, with rigorous evaluation and testing in production environments covering throughput and baseline performance. “As an open-source model, DeepSeek leaves many gaps in the ecosystem, so we are actively exploring this avenue,” he stated.
On the topic of applications, Yu noted a polarized response from the business side. “The main confusion is whether the financial industry should truly adopt a workflow-centric approach or whether there should be tiered categorization. I believe any scenario built with heavy human intervention will remain constrained by the designer’s capabilities. If the industry genuinely needs a super application, wouldn’t it be more beneficial to lean more heavily on AGI?” he asked, expressing a desire to work with clients to advance adaptive intelligent-agent scenarios. “Our aim is to use this to uncover genuinely valuable scenarios,” he concluded.
Reflecting on the challenges posed by model hallucination, Li Qiang highlighted the dual-edged nature of the phenomenon, describing it as a driving force behind human creativity and knowledge generation. “In practical applications, it is essential to use chains of thought, knowledge bases, and expert insight to guide the AI, which curtails some of the hallucinations. Although AI may not surpass human expertise, it can generate unforeseen value in niche areas when driven by human prompts,” he remarked.
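One common way to put Li’s prescription into practice is to ground the model’s answer in knowledge-base excerpts and let it abstain when the sources do not cover the question. The sketch below is only an illustration of that pattern; the prompt wording and the `call_llm` stub are assumptions, not FuGuo Fund’s implementation.

```python
# Illustrative sketch of grounding an answer in knowledge-base passages and
# allowing the model to abstain; prompt wording and stub are assumptions.

def call_llm(prompt: str) -> str:
    return "NOT IN SOURCES"  # placeholder reply so the sketch runs

def grounded_answer(question: str, kb_passages: list[str]) -> str:
    prompt = (
        "Using only the numbered sources below, reason step by step and answer "
        "the question. If the sources do not contain the answer, reply exactly "
        "'NOT IN SOURCES'.\n\n"
        + "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(kb_passages))
        + f"\n\nQuestion: {question}"
    )
    answer = call_llm(prompt)
    return "Escalate to a human expert." if answer == "NOT IN SOURCES" else answer
```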
Yu Haohan echoed this sentiment, saying that hallucination and creativity are two inseparable facets of the same coin. “There are engineering methods to mitigate it: output intervention, input intervention, review processes, and the disassembly of mixture-of-experts (MoE) models. I advocate cautious use, treating it primarily as an assistant for now. Over time it may converge into an acceptable range; humans are fallible too, and the goal is to keep the error rate at a manageable level, whether that is 3% or 10%,” he added.

Cui Hongyu pointed out that while hallucination can yield novel perspectives, it must be avoided in the securities sector. “Addressing hallucination fundamentally means resolving issues of data accuracy and information asymmetry. By enhancing our technological capabilities, we can tackle the data inaccuracies that lead to hallucinations. For instance, extracting precise information from vast documents requires algorithms akin to re-ranking for decision-making,” he explained.
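The re-ranking Cui refers to is usually a two-stage retrieval: a cheap first pass pulls candidate passages, then a more expensive scorer reorders them before anything reaches the model. A toy sketch is below, with a keyword-overlap first stage and a stubbed re-ranker standing in for the cross-encoder a production system would use; none of this reflects Huaxin’s actual code.

```python
# Toy two-stage retrieval sketch: a cheap keyword-overlap pass selects
# candidates, then a re-ranker (stubbed here; in practice a cross-encoder)
# reorders them so only the most relevant passages reach the model.

def first_stage(query: str, passages: list[str], k: int = 20) -> list[str]:
    q_terms = set(query.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(q_terms & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def rerank_score(query: str, passage: str) -> float:
    """Placeholder for a cross-encoder relevance score in [0, 1]."""
    overlap = len(set(query.lower().split()) & set(passage.lower().split()))
    return min(1.0, overlap / 10.0)

def retrieve(query: str, passages: list[str], top_n: int = 3) -> list[str]:
    candidates = first_stage(query, passages)
    return sorted(candidates,
                  key=lambda p: rerank_score(query, p),
                  reverse=True)[:top_n]
```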
Yu Li stressed that while hallucination is a core issue for AI applications, it is a reality that must be lived with. “In very narrow contexts where tolerance for hallucination is low, you have to bet on a technical pathway. Our technical direction leans more toward chain-of-code (CoC): we prefer to have large models generate code and then execute that code to produce the final result,” he emphasized.
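A chain-of-code setup in the sense Yu describes, where the model writes a small program and the executed program, rather than the model’s free-form text, produces the final number, can be sketched roughly as follows. The `call_llm` stub, the prompt, and the bare-bones `exec` namespace are illustrative assumptions; a production system would need genuine sandboxing and code review.

```python
# Rough chain-of-code sketch: the model is asked to emit a small Python
# function, the code is executed, and the executed result (not the model's
# prose) becomes the answer. The canned model reply and the minimal `exec`
# namespace are illustrative; real systems need proper isolation.

def call_llm(prompt: str) -> str:
    # Placeholder reply so the sketch runs end to end.
    return "def solve():\n    return round(1050 * (1 + 0.025) ** 3, 2)"

def answer_with_code(question: str) -> float:
    code = call_llm(
        "Write a Python function solve() with no arguments that computes the "
        "answer to the question below. Return code only.\n\n" + question
    )
    namespace: dict = {}
    # Restricted builtins here are a readability device, not a security boundary.
    exec(code, {"__builtins__": {"round": round}}, namespace)
    return namespace["solve"]()

if __name__ == "__main__":
    print(answer_with_code("What is 1050 yuan compounded at 2.5% for 3 years?"))
```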