from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SAFETY_PROMPT = """
You will be provided with the lyrics of a song transcribed from the original audio file. The transcription may be incomplete.
Carefully study the overall context of the song and the words used, and look for anything that promotes or denotes violence, sex, drugs, or alcohol. After your analysis,
determine whether the song is safe for listeners who object to profanities. Do not generate any extra text. If the song contains profanities, whether explicit or implicit, respond with 'Profanities detected, song promotes [write what the song promotes here]';
otherwise respond with 'No profanities detected, safe to listen to'.
Note that the only criteria for determining safety are explicit or implicit mentions of drugs, violence, sex, and alcohol.
Make sure to go over the context of the lyrics for subtle hints at any of these profanities (drugs, violence, sex, and alcohol), and only give a response when you are sure of your answer.
Make sure your only responses are 'Profanities detected, song promotes [write what the song promotes here]' and 'No profanities detected, safe to listen to'.
"""


def determine_safety(lyrics):
    # Send the instructions as the system message and the lyrics as the
    # user message, so the model treats the lyrics as data, not instructions.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=[
            {"role": "system", "content": SAFETY_PROMPT},
            {"role": "user", "content": f"THESE ARE THE LYRICS:\n{lyrics}"},
        ],
        temperature=0.25,
    )
    # .message is a ChatCompletionMessage object; return its text content.
    return response.choices[0].message.content
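The prompt constrains the model to exactly two response shapes, so a caller can branch on the fixed prefixes without further parsing. A minimal sketch of such a caller (the `is_safe` helper is hypothetical, not part of the original code; `verdict` stands in for `determine_safety()`'s return value so the example runs without an API call):

```python
def is_safe(verdict):
    # The system prompt allows only two responses: one starting with
    # 'Profanities detected, ...' and one starting with 'No profanities detected'.
    return verdict.startswith("No profanities detected")


print(is_safe("No profanities detected, safe to listen to"))    # True
print(is_safe("Profanities detected, song promotes violence"))  # False
```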