Interview Questions for Priscilla Alexander, Vice President of Engineering at ArthurAI
In the rapidly evolving world of artificial intelligence (AI), New York-based startup ArthurAI is making waves with its innovative software for monitoring, auditing, and maintaining the performance of machine learning systems. Co-founded by Priscilla Alexander, a seasoned AI professional with a background at Capital One, ArthurAI is addressing a pressing need for AI transparency and bias mitigation.
Alexander identified a gap in the market for a tool to automate the monitoring of machine learning decisions. Traditionally, this process has been manual, costly, and time-consuming. With ArthurAI, companies can explain the decisions of their machine learning models, whether those models drive investment choices in finance or support decisions in healthcare, while maintaining accuracy.
The Arthur platform operationalizes explainability for every inference and across all model decisions, including loan recommendations and investment decisions in financial services. It can run anywhere models run, whether on AWS, GCP, or in a data center with no internet access, making it a versatile solution for businesses of all sizes.
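Operationalizing explainability "for every inference" means attaching a per-feature attribution to each individual prediction, not just reporting aggregate model metrics. As a minimal sketch of the idea (not Arthur's actual API), the hypothetical `explain_inference` function below shows the simplest case, a linear model, where each feature's contribution to the score is just its weight times its value; more complex models require techniques such as SHAP or LIME:

```python
# Hedged sketch: per-inference explainability for a linear scoring model.
# For a linear model, each feature's contribution is weight * value, so the
# attribution breakdown sums (with the bias) exactly to the model's score.

def explain_inference(weights: dict, features: dict, bias: float = 0.0):
    """Return the model score and a per-feature attribution breakdown."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical loan-scoring example: income raises the score, debt lowers it.
weights = {"income": 0.002, "debt_ratio": -1.5}
score, attributions = explain_inference(
    weights, {"income": 50_000, "debt_ratio": 0.3}, bias=-99.0)
# score = -99.0 + 100.0 - 0.45 = 0.55; attributions show which feature drove it
```

Logging this breakdown alongside every prediction is what lets a compliance team later answer "why was this loan declined?" for a specific applicant.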
ArthurAI's software is platform-agnostic, able to integrate with any model-serving platform, which future-proofs monitoring as new serving technologies emerge. It finds disparate impact by computing the difference in outcomes across hundreds or thousands of subpopulations, helping teams detect unfair model decisions.
Key challenges in AI adoption include regulatory compliance concerns, scarce talent, and a lack of education among business leaders. Alexander identified these factors as major hurdles for companies looking to implement AI solutions. With ArthurAI, however, businesses can work with a team of experts who can help navigate these complexities.
John Dickerson, ArthurAI's chief scientist, has written a detailed blog post about the process of detecting and mitigating bias in machine learning models. The platform's explainability and bias mitigation techniques are bleeding edge, but there is a lot more research ahead in these fields.
ArthurAI supports various techniques for mitigating bias in models, but human judgment is still needed to choose among them, since each involves different trade-offs. Explainability in particular has to keep up with the latest advances in deep learning as model architectures grow more complex.
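To make the trade-offs concrete, one common post-processing mitigation is to apply a different decision threshold per group so that selection rates equalize. The sketch below (a generic illustration with hypothetical names, not a technique the article attributes to Arthur) shows the mechanism and its trade-off: favorable rates can be balanced, but at the cost of holding different groups to different cutoffs:

```python
def per_group_thresholds(scores, groups, thresholds):
    """Apply a group-specific decision threshold to each score.
    A simple post-processing bias mitigation: it can equalize selection
    rates across groups, but the trade-off is that members of different
    groups face different cutoffs for the same underlying score."""
    return [int(score >= thresholds[group])
            for score, group in zip(scores, groups)]

# Hypothetical scores and group labels for four applicants.
scores = [0.62, 0.55, 0.71, 0.40]
groups = ["A", "B", "A", "B"]
decisions = per_group_thresholds(scores, groups, {"A": 0.7, "B": 0.5})
# decisions -> [0, 1, 1, 0]: one approval per group despite different scores
```

Other families of techniques, such as pre-processing the training data or adding fairness constraints during training, make different trade-offs, which is exactly why a human has to pick.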
Recently, ArthurAI worked with Harvard's Dumbarton Oaks Institute, providing explainability for a computer vision model that analyzed the labeling of thousands of photographs. The Arthur platform allowed the Institute to understand where the training examples weren't sufficient, improving model accuracy and accelerating the process of categorizing artifacts from global heritage.
ArthurAI works with industries that manage risk, particularly healthcare, finance, and insurance, which have had statistical models at the core of their business for decades. The platform's versatility extends beyond these sectors: it is also used in energy (including power and heat management) and in specialized healthcare applications such as medical data processing and AI-assisted pathology, with potential uses in manufacturing and automation.
In the future, the platform could expand to support more industries grappling with the complexity of digital transformation, such as resource management for space exploration and other domains that depend on AI-driven automation. As the public grows more skeptical of algorithms making decisions without oversight, the need for AI transparency and monitoring is more pressing than ever. With ArthurAI, businesses can take a significant step toward achieving that transparency and ensuring fair, unbiased AI decision-making.