The potential of AI in pharma: Balancing innovation and ethics is critical
As with many industries, the pharmaceutical and life sciences sector is embracing artificial intelligence (AI) to drive transformation. The sector is using the technology to reshape drug development, improve organisational processes, support R&D and more.
Indeed, just last month GSK announced it was employing AI to improve its productivity as a means of compensating for the potential financial hit from US tariffs.
When implemented effectively, AI has the potential to outperform humans in certain tasks. This can unlock significant value across the pharmaceutical value chain, improve effectiveness and efficiency, and, crucially, bring innovative medicines to patients more quickly.
We recently surveyed over 50 C-suite executives* in the life sciences and pharmaceutical sector, and found that the adoption of generative AI is advancing rapidly: 71% reported having a dedicated strategy for implementing the technology, with a further 24% currently in the planning stages. Their primary focus areas are improving operational agility and supporting growth initiatives.
However, despite the progress and scale of AI implementation, many business leaders in the sector are concerned about the ethical implications.
Our research found that more than two-thirds (69%) of pharma business leaders globally express ethical concerns with the technology, while a significant majority (86%) believe that more AI regulation is either “essential” or “very important” to ensure responsible innovation.
Others are worried about workforce disruption, as nearly three in five (59%) expect AI to replace jobs in their organisations.
Additional concerns include the risk of misinformation, reduced human oversight, questions of liability, lack of transparency or accountability, as well as heightened cybersecurity and data privacy challenges.
Striking the right balance between innovation and regulation is therefore critical to ensure that the use of AI is not only cutting-edge but also responsible and ethical.
Clearly, business leaders need reassurance that these concerns are being addressed and managed effectively. As the technology advances, it will be critical to develop robust governance frameworks that enable compliance with data protection laws and safeguard sensitive patient data, while maintaining operational flexibility and digital competitiveness.
For example, firms should consider establishing specialised data protection offices that work closely with IT departments to integrate privacy and security directly into digital initiatives – especially when adopting AI-driven systems for drug discovery or patient care.
Additionally, implementing cloud-based solutions compliant with the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) is important.
Secure data-sharing protocols are essential to preserving patient confidentiality while fostering innovation and access to real-time data.
Collaborating with regulatory bodies to stay ahead of rapidly changing regulatory landscapes is also vital for responsible implementation, as is investing in employee training and raising awareness of both the opportunities and risks of emerging technologies.
The potential power of AI to enhance the pharmaceutical and life sciences sector is undeniable. However, the full potential will only be realised through responsible, transparent, and well-regulated implementation of the technology.
As organisations build their AI strategies, they must integrate strong data governance and ethical oversight at every step. Without these safeguards, pharmaceutical businesses risk falling short of meeting their goals.
*Research taken from Forvis Mazars’ annual global survey of C-suite executives, conducted via an online panel of C-suite executives at for-profit organisations with annual revenues of $1m+ between 28 September – 23 October 2024. Responses are taken from 51 C-suite executives from the global Life Sciences & Pharmaceutical Sector.