We downloaded the dataset of brain MRI scans from Kaggle (https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumour-detection). The dataset contains 253 brain MRI images. Because the dataset is small, we used data augmentation to generate additional images. Augmentation also helped with the unequal distribution of tumorous and non-tumorous instances (over 55 percent tumorous).

To crop each image to the region that contains only the brain, we located the extreme top, bottom, left, and right points of the brain and cropped the image to that bounding region. Because the images in the dataset vary in size (width, height, and number of channels), we then resized every image to a uniform shape of (240, 240, 3) so that it could be given as input to the neural network. We applied normalisation to scale the pixel values to the range 0-1. Each image was appended to X and its label to y. We split the data into three sets: 80% training, 10% validation, and 10% testing. We built the model, compiled it, and trained it. Below is a diagram of the network architecture.
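The augmentation step can be sketched as follows. This is a minimal, dependency-free illustration using flips and 90-degree rotations; the original work may well have used a richer pipeline (e.g. Keras' ImageDataGenerator with shears and shifts), which is an assumption here, not something stated above.

```python
import numpy as np

def augment(image, rng):
    """Return simple augmented variants of one image.

    A minimal sketch: horizontal flip, vertical flip, and a random
    90-degree rotation. The exact set of transforms used in the
    original pipeline is not specified in the text.
    """
    ops = [
        lambda im: np.fliplr(im),                    # mirror left-right
        lambda im: np.flipud(im),                    # mirror top-bottom
        lambda im: np.rot90(im, k=rng.integers(1, 4)),  # rotate 90/180/270
    ]
    return [op(image) for op in ops]

rng = np.random.default_rng(0)
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
variants = augment(img, rng)
print(len(variants))  # 3 extra images per original
```

Each original image yields several variants, which both enlarges the small dataset and, if applied more aggressively to the minority class, can soften the tumorous/non-tumorous imbalance.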
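The extreme-point cropping can be sketched as below. For simplicity this version finds the extreme points with a NumPy brightness threshold rather than contour detection (OpenCV's findContours is a common alternative); the threshold value of 10 is an illustrative choice, not from the original work.

```python
import numpy as np

def crop_brain_region(image, threshold=10):
    """Crop an image to the bounding box of the brain.

    Assumes the brain is the bright region on a dark background.
    The extreme top, bottom, left, and right points are taken from
    the rows/columns that contain any above-threshold pixel.
    """
    gray = image.mean(axis=2) if image.ndim == 3 else image
    mask = gray > threshold
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]   # extreme top/bottom points
    left, right = np.where(cols)[0][[0, -1]]   # extreme left/right points
    return image[top:bottom + 1, left:right + 1]

# Synthetic example: a dark 100x100 image with a bright 40x30 "brain".
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[30:70, 20:50] = 200
print(crop_brain_region(img).shape)  # (40, 30, 3)
```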
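The resize-and-normalise step can be sketched as follows. Nearest-neighbour indexing stands in for a library resize call (cv2.resize or PIL's Image.resize would normally be used); the (240, 240) target comes from the text, the rest is illustrative.

```python
import numpy as np

TARGET = (240, 240)  # target height and width; channels are kept as-is

def resize_nearest(image, size=TARGET):
    """Nearest-neighbour resize via index mapping.

    A stand-in for a proper interpolating resize from an image library.
    """
    h, w = image.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return image[rows][:, cols]

def preprocess(image):
    """Resize to a uniform shape and scale pixel values to [0, 1]."""
    resized = resize_nearest(image)
    return resized.astype(np.float32) / 255.0

# Images of different sizes all map to (240, 240, 3) with values in [0, 1].
a = np.random.randint(0, 256, (300, 180, 3), dtype=np.uint8)
x = preprocess(a)
print(x.shape)  # (240, 240, 3)
```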
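The 80/10/10 split can be sketched as below. scikit-learn's train_test_split applied twice is the usual shortcut; this is a dependency-free version, and the fixed seed is an illustrative choice.

```python
import numpy as np

def split_data(X, y, seed=0):
    """Shuffle and split into 80% training, 10% validation, 10% testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(0.8 * len(X))
    n_val = int(0.1 * len(X))
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])

X = np.arange(100).reshape(100, 1)
y = np.arange(100) % 2
(X_tr, y_tr), (X_va, y_va), (X_te, y_te) = split_data(X, y)
print(len(X_tr), len(X_va), len(X_te))  # 80 10 10
```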