The network for creativity
Join 1.25M professional creatives like you
Connect with clients, get discovered, and run your business 100% commission-free
Creatives on Contra have earned over $150M and we are just getting started
I've used LLMs for 3 years but never truly understood them until a recent flight without internet.
On that flight I watched Andrej Karpathy's breakdown of how LLMs work. As an OpenAI co-founder, he really makes the subject click.
The big realization: LLMs are text predictors.
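To make that idea concrete, here is a toy sketch (not a real LLM, just a bigram word model I wrote for illustration): it predicts the next word as the one most often seen after the current word. An actual LLM does the same kind of thing, but with learned probabilities over tokens conditioned on the entire context.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # Count, for each word, which words follow it and how often.
    counts = defaultdict(Counter)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, word):
    # Predict the most frequent follower of `word`.
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

Everything an LLM outputs is generated this way: one most-likely next token at a time, given the context so far.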
Long contexts (a huge chat history) make accurate next-token prediction harder. That's a major reason hallucinations happen.
I used to cram everything into one giant thread. I didn't realize I was breaking the model.
My new approach:
Separate chats for different tasks.
Short, concrete instructions.
Since most models use similar training data, the real difference is your prompting.
Short context = Better outputs.
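The habit above can be sketched in code. This is a minimal, hypothetical helper (my own illustration, not any official API client): it keeps each task in its own message list and trims old turns before sending, using the role/content message format common to chat APIs. `MAX_TURNS` is an arbitrary example limit, not a recommended value.

```python
# Illustrative limit on how many recent turns to keep per task.
MAX_TURNS = 4

def trimmed_context(system_prompt, history):
    # Always keep the system instruction, then only the most recent turns,
    # so each request stays short and concrete.
    recent = history[-MAX_TURNS:]
    return [{"role": "system", "content": system_prompt}] + recent

# One chat per task: this history belongs to a single summarization task.
history = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
context = trimmed_context("You summarize text concisely.", history)
print(len(context))  # 1 system message + 4 recent turns = 5
```

Starting a fresh list per task is the code equivalent of opening a separate chat.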
If you want to learn more, here is the video: https://www.youtube.com/watch?v=zjkBMFhNj_g
Ciro: Great post! And yeah, separate chats and concrete instructions are a better approach.
Husnain: Exactly. Keeping context short has been a game changer.