As governments around the world move to put new guardrails on artificial intelligence in 2026, University of Delaware professor Xiao Fang brings a perspective shaped long before AI became a business buzzword.
More than 25 years ago, when few business scholars were studying artificial intelligence, Fang began asking how the technology could be designed to support better decisions—not just faster ones. Today, as organizations face growing pressure to make AI systems transparent, accountable and fair, his research offers practical insight into how AI can be built to serve people and society.
A professor of management information systems in UD’s Alfred Lerner College of Business and Economics, Fang focuses on “use-inspired AI for business,” developing tools that address real-world business and societal challenges while minimizing potential risks. The National Science Foundation distinguishes between foundational AI, which develops general methods independent of application, and use-inspired AI, which is motivated by specific real-world problems.
Fang’s work firmly falls into the latter category. His objective is to design AI systems that solve meaningful business and societal problems while carefully considering potential harms. His work spans applications from identifying bias in AI-generated content to building interpretable models for mission-critical decisions such as medical diagnosis and financial analysis.
He is a recognized researcher whose publications have contributed to UD’s Top 20 ranking on the Association for Information Systems (AIS) list of high-quality journals over the past three years (2023–2025).
Rather than viewing emerging AI regulations as obstacles to innovation, Fang sees them as objectives that can work in harmony with economic goals, helping organizations design AI systems that are both responsible and effective.
A long view on artificial intelligence
Fang’s interest in AI began around 2000 while he was pursuing his doctorate in business. At the time, artificial intelligence was rarely studied in business schools, and research in the area was often misunderstood.
“As a business Ph.D. student, I took graduate-level computer science courses, including artificial intelligence,” Fang said. “That exposure, along with my work in data mining, really sparked my interest.”
Despite early challenges getting AI-focused work published in business journals, Fang remained committed. Over the past 26 years, he has watched AI evolve from symbolic systems built on explicit rules and logic to today’s data-driven models powered by machine learning and neural networks. While the technology has changed dramatically, his core focus has remained steady.
“My research focus has become clearer over time,” Fang said. “I work on AI that is driven by real applications and real needs.”
Addressing AI’s risks alongside its rewards
As AI tools have become more powerful and widely accessible, Fang has increasingly examined their unintended consequences. When ChatGPT was released in late 2022, he immediately recognized both opportunity and risk.
“We quickly realized two major issues,” Fang said. “The ease of generating misinformation and the potential for social, gender and racial bias.”
Fang and his collaborators conducted research demonstrating that AI-generated content can reflect and amplify existing biases embedded in training data. Such findings carry important implications for organizations relying on generative AI for communication, hiring, marketing or decision support. As policymakers work to ensure fairness, transparency and accountability in AI systems, research identifying bias and mitigation strategies becomes increasingly relevant.
For Fang, maximizing AI’s benefits requires equal attention to minimizing its risks. Responsible design, he argues, must be embedded from the beginning rather than retrofitted after problems emerge.
From theory to practice
Many of Fang’s projects are designed with direct business applications in mind. One example is his research on industry classification systems, which group companies into sectors for use by governments, investors and financial analysts.
“Traditionally, industries are classified manually,” Fang explained. “That process is time-consuming, costly and subjective.”
Existing systems, such as the U.S. government’s Standard Industrial Classification (SIC) or the Global Industry Classification Standard (GICS) used by financial firms, rely on human judgment to assign firms to categories. Fang’s AI-based system automatically groups firms by analyzing their annual report filings, identifying similarities in business activities and language patterns.
The result is a classification approach that is more adaptive, scalable and objective. It also allows new firms to be assigned to industries automatically based on their disclosures. Applications range from portfolio construction and risk assessment to executive compensation benchmarking and academic research.
“This is a case where AI can make an existing process more accurate, efficient and transparent,” Fang said.
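The core idea of grouping firms by the language of their disclosures can be illustrated with a toy sketch. This is not Fang’s actual system (which the article does not detail); it is a minimal bag-of-words example with made-up industry labels and report excerpts, showing how a new firm could be assigned to the industry whose filings its text most resembles.

```python
from collections import Counter
import math

def bag(text: str) -> Counter:
    # Represent a document as a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical "annual report" excerpts for known industries.
labeled = {
    "bank": "loans deposits interest credit risk lending capital",
    "pharma": "drug clinical trials patients therapy approval",
}

def classify(report: str) -> str:
    # Assign a new firm to the industry whose language it most resembles.
    words = bag(report)
    return max(labeled, key=lambda ind: cosine(words, bag(labeled[ind])))
```

A firm whose filing discusses "lending and deposits" would land in the bank group; a real system would use far richer text representations, but the similarity-based assignment works the same way.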
Making AI explain itself
Transparency becomes especially critical when AI systems are used in high-stakes settings. In a study published in Management Science, Fang and his co-authors developed an interpretable AI model to assist in diagnosing depression associated with chronic disease.
“For mission-critical tasks like medical diagnosis, it’s not enough for an AI system to be accurate,” Fang said. “It also needs to explain why it made a particular prediction.”
The model mirrors aspects of human reasoning by learning representative “prototypes” from data. When evaluating a patient, the system identifies which learned prototypes most closely match the patient’s symptoms and uses those comparisons to explain its diagnosis. Rather than functioning as a black box, the model provides reasoning that clinicians can evaluate and question.
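The prototype mechanism described above can be sketched in a few lines. The prototypes, feature values and labels below are invented for illustration (the published model is more sophisticated); the point is that the prediction comes packaged with the prototype that produced it, so a clinician can inspect the comparison rather than trust a black box.

```python
import math

# Hypothetical learned prototypes: a feature vector plus a diagnostic label.
PROTOTYPES = [
    {"label": "depressed", "features": [0.9, 0.8, 0.7], "name": "prototype A"},
    {"label": "not depressed", "features": [0.1, 0.2, 0.1], "name": "prototype B"},
]

def dist(x, y):
    # Euclidean distance between a patient and a prototype.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def predict(patient_features):
    # Return both the prediction and the closest prototype,
    # so the matching case serves as the model's explanation.
    best = min(PROTOTYPES, key=lambda p: dist(patient_features, p["features"]))
    return best["label"], best["name"]
```

Calling `predict` on a patient whose symptoms resemble prototype A yields the label "depressed" together with the prototype that justified it.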
Such interpretability aligns closely with emerging regulatory priorities emphasizing explainability and accountability in AI systems. For Fang, transparency is not simply a compliance requirement but a design principle that strengthens trust and long-term effectiveness.
Rethinking regulation and innovation
As new AI regulations take shape in the United States and abroad, Fang encourages organizations to reconsider how they frame compliance.
“Many people see regulation as a constraint,” he said. “I see it as an objective.”
He compares business strategy to an optimization problem. Traditionally, companies seek to maximize profit or minimize cost. Fang argues that social objectives, such as fairness, accountability and transparency, should be incorporated directly into that optimization framework.
“In the long run, aligning economic and social objectives will benefit businesses,” Fang said. “Responsible AI builds trust, and trust is essential for sustainable success.”
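Fang’s optimization framing can be made concrete with a toy sketch. The numbers, penalty weight and "fairness gap" measure below are assumptions chosen for illustration, not from his research: the objective simply subtracts a weighted social cost from profit, so a slightly less profitable but far fairer model can win.

```python
def combined_objective(profit, fairness_gap, penalty_weight=2.0):
    # Traditional objective maximizes profit alone; here a fairness gap
    # (e.g., a difference in approval rates across groups) is penalized
    # directly inside the objective function.
    return profit - penalty_weight * fairness_gap

# Two hypothetical deployment options.
options = {
    "profit_only_model": {"profit": 100.0, "fairness_gap": 20.0},
    "fairness_aware_model": {"profit": 90.0, "fairness_gap": 2.0},
}

best = max(options, key=lambda name: combined_objective(**options[name]))
```

With these illustrative numbers, the profit-only model scores 100 − 2 × 20 = 60 while the fairness-aware model scores 90 − 2 × 2 = 86, so the socially aligned option is selected.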
Rather than slowing innovation, well-designed guardrails can encourage more thoughtful and resilient AI deployment. Organizations that proactively embed responsible design principles may be better positioned to adapt as regulatory expectations evolve.
Training the next generation
Beyond his research contributions, Fang is committed to mentoring doctoral students and preparing future scholars.
“We need to train students so they can become our peers,” Fang said. “I really enjoy watching them grow into independent researchers.”
Many of his former students have gone on to academic careers of their own, extending the impact of his approach to responsible, application-driven AI research.
As artificial intelligence enters a more regulated and consequential era, Fang’s decades-long focus on use-inspired, responsible AI offers a steady and informed voice. His work underscores a central principle: innovation and accountability are not competing goals but complementary ones. Designing AI systems that are transparent, fair and aligned with societal values may ultimately determine how successfully the technology reshapes business and society in the years ahead.