Without due consideration in the design and delivery of new therapies and diagnostics, there is a risk that biased datasets will skew outcomes towards certain groups. AI systems are not immune to such bias: by their nature, they require input data to “train” the system and identify patterns that relate features (whether biological or demographic) to outcomes. A trained AI system can then predict outcomes, such as a patient’s response to a drug, provided the input features are known. The accuracy of such predictions is therefore highly dependent on the input (training) data used to build the AI algorithms. The training data should be representative of the patient population, without being skewed towards particular demographics. It is the responsibility of researchers, clinicians, and AI developers to recognize areas sensitive to bias and, when generating training data, to take appropriate measures to detect any inherent bias. Failing to recognize such bias, either during the creation or the use of AI systems, could result in the improper treatment and diagnosis of the very patient groups that should be benefiting from these exciting advancements. There have already been documented instances in which marginalized groups have been negatively affected by bias projected through AI technology. Concerns have arisen around overall patient risk assessments, the relationship between cost and quality of care, and the lack of exploration of sex-specific differences in therapeutic response. Two main areas of concern are racial bias and sex-specific bias.
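To make the dependence on training-data composition concrete, the short Python sketch below simulates the effect described above: a model trained on a cohort in which one demographic group is heavily under-represented can perform well overall while performing poorly for that group. The data, group labels, and variable names are purely illustrative assumptions, not drawn from any study cited here.

```python
# Illustrative sketch only: synthetic data and hypothetical group labels.
# Demonstrates how under-representing one group in the training set can
# degrade that group's prediction accuracy even when overall accuracy looks acceptable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_cohort(n, group):
    """Simulate a biomarker whose relationship to drug response
    differs between two hypothetical demographic groups."""
    x = rng.normal(size=(n, 1))
    # The same biomarker value maps to opposite response probabilities
    # in the two groups (an assumed, exaggerated group-specific effect).
    logit = 2.0 * x[:, 0] if group == 0 else -2.0 * x[:, 0]
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
    return x, y

# Skewed training set: 95% group 0, 5% group 1.
x0, y0 = make_cohort(1900, group=0)
x1, y1 = make_cohort(100, group=1)
X_train = np.vstack([x0, x1])
y_train = np.concatenate([y0, y1])

# Balanced test set for auditing per-group performance.
tx0, ty0 = make_cohort(1000, group=0)
tx1, ty1 = make_cohort(1000, group=1)

model = LogisticRegression().fit(X_train, y_train)

# Reporting accuracy separately per group exposes the disparity that an
# overall accuracy figure would hide.
for name, (tx, ty) in {"group 0": (tx0, ty0), "group 1": (tx1, ty1)}.items():
    print(name, "accuracy:", round(accuracy_score(ty, model.predict(tx)), 2))
```

In this toy example the model fits the majority group's pattern, so accuracy for the under-represented group falls sharply; auditing performance per subgroup, rather than in aggregate, is one simple way to surface such bias before deployment.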