BVA calls for open-minded approach to AI use but cautions technology must support, not replace, vet expertise

07 Jan 2026

BVA launches new policy position on the use of artificial intelligence in the veterinary profession.

The British Veterinary Association (BVA) has published a set of general principles for using artificial intelligence (AI), designed to help vets use these emerging technologies safely, effectively and ethically.

The eight principles, which form part of BVA’s new policy position on AI in the veterinary profession, cover its use across clinical practice, education, research, epidemiology, and admin and practice management. They advise vets to:

  1. Use AI as a tool to support, not replace, the vet. 
  2. Understand how AI technologies work and feel confident in using them. 
  3. Actively participate in the design, development, and validation of AI tools for animal health and welfare. 
  4. Understand how an AI system was trained and the contexts in which bias may appear. 
  5. Be confident understanding how AI technologies are advancing and adapt to potentially quick changes in the tools available. 
  6. Ensure data privacy and client consent. 
  7. Oversee AI use in clinical practice and be responsible for final decisions. 
  8. Be able to easily access what data was used and explain how an AI tool reached its conclusion. 

BVA’s policy position encourages vets to take a positive, proactive and open-minded approach to veterinary AI technology while remaining aware of its potential ethical risks. As well as urging all veterinary professionals to actively engage with understanding AI and to follow the above principles when using it, the policy position recommends that all veterinary workplaces develop AI use policies, undertake thorough risk assessments, and develop resources to help vets understand how AI tools work and how they can be evaluated.

In addition, it calls on the wider sector to create international governance and explainability standards for veterinary AI tools, on the UK’s veterinary regulators to actively regulate the veterinary AI tools used in the country, and on AI tech developers to provide transparent validation data.

British Veterinary Association President Dr Rob Williams MRCVS said:

“The AI revolution is here to stay and brings with it both important opportunities as well as challenges for the veterinary profession. Having a positive and open-minded approach that views AI as a tool to support vets and the wider vet team is the best way forward to make sure that the profession is confident applying these technologies in their day-to-day work. The general principles developed in BVA’s new policy position offer a timely and helpful framework for all veterinary workplaces considering the safe and effective use of AI technologies.  

“Vets must also be involved in the development process for AI tools as early and as frequently as possible so the profession can lead from the front when applying these emerging technologies, to ensure we continue to deliver on our number one priority of supporting the highest levels of animal health and welfare.” 

Data from BVA’s Voice of the Veterinary Profession survey shows that 1 in 5 vets working in clinical practice (21%) are already using AI tools, with the most commonly reported benefits being data interpretation, improved diagnostic testing and time saving. However, vets also noted potential risks, most commonly the possibility of results being interpreted without context or follow-up checks, overreliance on AI undermining human skills, and a lack of data protection.

To help tackle this, BVA has developed a risk pyramid that classifies some of the more commonly used or considered AI use cases in different veterinary settings from ‘minimal’ to ‘unacceptable’ risk. The organisation has also published a set of questions that vets should ask software companies when undertaking risk assessments.

Dr Williams added: 

"We know that the degree of risk in AI use exponentially increases with the degree of autonomy an AI tool has. This risk pyramid is a handy reference for vets looking to incorporate AI in their work, with tasks lower down the pyramid such as marketing or clerical tasks able to be undertaken with more confidence of safety than those closer to the top, such as automated diagnosis or clinical decision making. As use cases move closer to the top, the importance of following the principles set out in BVA's policy position becomes more critical as the impacts on animal health and welfare, professional standards, and people will be more significant. I'd urge all colleagues to take a look at this risk pyramid alongside the general principles."

Read BVA’s new policy position on AI in the veterinary profession.
