As artificial intelligence (AI) continues to gain momentum across the life sciences sector, companies must take steps to understand how AI technologies operate, the risks they present, and the extent to which they can transform and add value to their business, and must implement a framework for their effective incorporation into the organisation.
In the final article in our series, ‘Artificial intelligence in life sciences’, we highlight some of the key steps that life sciences companies should consider adopting in order to build and deploy a successful AI business strategy, effectively manage outcomes generated by AI technologies, and safeguard against the potential risks arising.
Key steps:
- Implementation of a robust and effective data strategy: As data is the fuel that powers AI processes and decision making, the quality, volume, and integrity of that data are fundamental to achieving unbiased and reliable outcomes. In the life sciences sector, bias in product design, testing, and clinical trials may result in some healthcare products being less effective for certain patient groups. A robust and effective data strategy is therefore critical to ensure that complete and accurate data sets are collated and maintained (an illustrative check of this kind appears in the first sketch after this list).
- Re-evaluation of privacy and cybersecurity risks: With the EU and UK focused on evaluating and updating existing product safety laws and regulations to encompass a legislative framework for AI, life sciences companies should take steps to evaluate the safety of their AI-powered products, particularly from a cyber and privacy perspective. This includes assessing the safety of products both in isolation and when connected to other products, ensuring that all parties in the supply chain are aware of, and trained on, the obligations that will be imposed on them, and adopting current European and/or national standards for assessing the cybersecurity of products.
- Development of a modern governance and risk management framework: In view of the risk profile, the existing regulatory and legal framework, and the speed and depth at which AI-driven technologies are being adopted, life sciences organisations must adapt their existing governance and risk management frameworks to harness the power of AI. Historically, and notwithstanding the demands of evolving regulatory change, organisations have typically depended on relatively static risk management frameworks and systems, which relied on key individuals within the organisation updating risk registers according to their responsibilities.
The use of AI technologies provides an opportunity for a step change in risk management through connectivity between AI and key risk indicator information, such as complaints and adverse event data (see the second sketch following this list). Collaborative discussions and appropriate planning involving risk management professionals, information technology specialists, engineers, and other key stakeholders are crucial to reducing such risks.
- Management of employee skill sets and adaptation of the company workforce: Employees are a key consideration in making AI an integral part of business operations. Life sciences businesses will need to invest in robust learning and development programmes so that existing and future employees can acquire the skill sets necessary to develop and integrate AI-linked solutions. Businesses may also consider creating new roles to manage the risks arising from the increasing adoption of AI technologies within business operations. For example, with the life sciences sector becoming increasingly vulnerable to privacy and cybersecurity risks, companies may wish to deploy personnel with expertise in these areas to safeguard their products.
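To make the data strategy point more concrete, the minimal Python sketch below shows one way a team might check whether a clinical data set under-represents particular patient groups before it is used to train or validate an AI model. The file name, column names, and the 5% threshold are illustrative assumptions rather than prescribed values.

```python
# Illustrative sketch only: flagging patient subgroups that are under-represented
# in a clinical data set before it is used for AI model development.
# The file name, column names and MIN_SHARE threshold are assumptions.
import pandas as pd

MIN_SHARE = 0.05  # assumed minimum acceptable share of records per subgroup

def representation_report(df: pd.DataFrame, columns: list[str]) -> dict:
    """For each demographic column, return any subgroup whose share of the
    data set falls below MIN_SHARE, so it can be flagged for remediation."""
    report = {}
    for col in columns:
        shares = df[col].value_counts(normalize=True)
        under_represented = shares[shares < MIN_SHARE]
        if not under_represented.empty:
            report[col] = under_represented.to_dict()
    return report

# Example usage with a hypothetical trial data set
trial_data = pd.read_csv("clinical_trial_records.csv")  # assumed file
flags = representation_report(trial_data, ["sex", "age_group", "ethnicity"])
if flags:
    print("Under-represented subgroups to address before model training:", flags)
```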
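As an illustration of connecting risk management to key risk indicator data, the sketch below flags products whose adverse event report volume in a recent window exceeds a multiple of their historical norm, the kind of signal that could feed a continuously updated risk register. The data layout, the 30-day window, and the alert multiplier are assumptions for illustration only, not a prescribed methodology.

```python
# Illustrative sketch only: turning adverse-event report data into a risk
# indicator that updates automatically rather than awaiting a manual review.
# The data layout, 30-day window and alert multiplier are assumptions.
import pandas as pd

WINDOW = "30D"        # assumed monitoring window
ALERT_MULTIPLIER = 2  # assumed: alert when recent volume doubles the norm

def flag_elevated_adverse_events(events: pd.DataFrame) -> pd.DataFrame:
    """events: one row per report, with 'product' and 'reported_at' columns.
    Returns products whose report count over the latest window exceeds
    ALERT_MULTIPLIER times their historical average for the same window."""
    events = events.sort_values("reported_at").set_index("reported_at")
    summaries = []
    for product, grp in events.groupby("product"):
        daily = grp.resample("D").size()          # daily report counts
        rolling = daily.rolling(WINDOW).sum()     # rolling window totals
        recent = rolling.iloc[-1]
        baseline = rolling.iloc[:-1].mean()
        if baseline > 0 and recent > ALERT_MULTIPLIER * baseline:
            summaries.append(
                {"product": product, "recent": recent, "baseline": baseline}
            )
    return pd.DataFrame(summaries)

# Example usage with a hypothetical complaints/adverse-event feed
reports = pd.read_csv("adverse_events.csv", parse_dates=["reported_at"])
alerts = flag_elevated_adverse_events(reports)
print(alerts)
```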
Conclusion
Companies play a crucial role in implementing appropriate safeguards to prevent and respond to the potentially negative consequences of AI technologies. While the role of risk professionals within the life sciences industry will undoubtedly change, the combination of AI-enabled risk management and subject matter direction and oversight will create a future in which answers to questions such as “how much risk should I take?” can be informed and updated in real time.
With changes to existing regulatory and liability regimes on the horizon, life sciences companies, and in particular manufacturers in the sector, should ensure that products incorporating AI-based technologies have undergone rigorous testing and safety checks and comply with existing laws and regulations before they are released to market.