One thing about using LLMs is learning how to call them on their bullshit. Be very wary when the LLM is being highly confident and handwavy, because as soon as you start asking pointed questions, it will either admit fault or continue on its hallucinatory adventure.
Taming a confident, argumentative LLM. You need to learn to use capital letters and to direct it never to argue with source code or an article. You must learn to tell it "You are BANNED from coding" or "You are BANNED from questioning the validity of this article". "DO NOT ARGUE WITH WHAT I AM SAYING and TRUST ME" also works.
Taming an eager-to-code/fix LLM. "DO NOT FIX AND DO NOT CODE. JUST EXPLAIN" tends to be a good way to stop the LLM from jumping to conclusions. Ask questions, and repeat "DO NOT FIX, DO NOT CODE" from time to time.
Taming a terse LLM. This tends to be hard; you need to keep asking it for completeness. If you ask it to implement a protocol, the LLM will jump to the common methods or a summary of the protocol. You need to be able to detect this and ask for expansion.
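One way to detect it mechanically is to diff what the spec requires against what the LLM actually produced. A minimal sketch, assuming a Python protocol implementation; the method list and file name here are hypothetical placeholders:

```python
import re

# Hypothetical example: the full method list your protocol spec requires.
# Swap in the real list from the RFC or spec you are implementing.
REQUIRED_METHODS = {"connect", "authenticate", "send", "receive", "ping", "close"}

def missing_methods(generated_code: str) -> set[str]:
    """Return spec methods the LLM's output never defined."""
    defined = set(re.findall(r"def\s+(\w+)\s*\(", generated_code))
    return REQUIRED_METHODS - defined

with open("llm_output.py") as f:  # whatever file you pasted the LLM's code into
    gaps = missing_methods(f.read())

if gaps:
    print("Terse LLM detected. Ask it to expand on:", ", ".join(sorted(gaps)))
```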
Taming a destructive LLM. "YOU ARE BANNED FROM ANY DESTRUCTIVE OPERATIONS. ZERO. YOU MUST ASK FOR INPUT IF WE NEED DESTRUCTIVE OPERATIONS." or "LIMIT DESTRUCTIVE OPERATIONS TO A MINIMUM". Things like that make sure you don't get a rogue "rm -fr" in your shell scripts or an innocuous-looking DELETE in your SQL scripts.
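You can also backstop the prompt with a dumb scan before running anything it wrote. A minimal sketch; the patterns are deliberately paranoid and nowhere near complete:

```python
import re
import sys

# Rough patterns for destructive operations in generated shell and SQL.
# Extend for your own stack; false positives are fine, you just eyeball them.
DESTRUCTIVE = [
    r"\brm\s+-[a-z]*[rf]",                 # rm -rf, rm -fr, etc.
    r"\bDROP\s+(TABLE|DATABASE)\b",        # schema destruction
    r"\bTRUNCATE\b",
    r"\bDELETE\s+FROM\b(?!.*\bWHERE\b)",   # DELETE with no WHERE clause
]

def flag_destructive(script: str) -> list[str]:
    """Return the lines that look destructive so a human can review them."""
    return [
        line for line in script.splitlines()
        if any(re.search(p, line, re.IGNORECASE) for p in DESTRUCTIVE)
    ]

if __name__ == "__main__":
    for hit in flag_destructive(sys.stdin.read()):
        print("REVIEW BEFORE RUNNING:", hit)
```

Pipe any generated script through it (`python flag_destructive.py < generated.sh`) and review every hit before running.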
Planning. Discuss a topic with the LLM and expand on it. Eventually you'll have to start asking for specifications, implementation plans, pseudocode, etc. It will hallucinate, so you will have to actually read the plans and make sure they make sense. Sometimes you have to expand on parts and discuss them, then ask for the whole plan again with the refined changes.
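The loop looks roughly like this. A sketch using the OpenAI Python SDK; the model name, prompts, and rate-limiter topic are assumptions, substitute your own:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = []

def chat(prompt: str) -> str:
    """Append a user turn, get the assistant's reply, keep the full history."""
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumption: use whatever model you actually have
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

chat("Let's discuss designing a rate limiter. DO NOT CODE, just discuss.")
spec = chat("Write a specification for what we discussed.")
plan = chat("Now an implementation plan with pseudocode. DO NOT CODE.")
# Read spec and plan yourself -- it will hallucinate, so verify they make
# sense, discuss the weak parts in more turns, and then:
final = chat("Regenerate the whole plan with the refined changes we discussed.")
print(final)
```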
System prompts can be helpful, but they don't always significantly alter the behavior of the LLM. It depends on the LLM though.
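If you're calling a model through an API, the standing directives go in the system message. A sketch using the OpenAI Python SDK; the model name is an assumption:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You are BANNED from coding unless explicitly asked. "
    "You are BANNED from destructive operations; ask for input first. "
    "Do not argue with source code or articles the user provides."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whatever model you use
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Explain this stack trace. DO NOT FIX, DO NOT CODE."},
    ],
)
print(resp.choices[0].message.content)
```

Some models follow the system role religiously; others treat it as a suggestion, which is why you end up repeating the directives in your user messages anyway.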
Humans rely on predictions all the time; your body relies on predictions all the time. Your eyesight is built from predictions your brain makes. LLMs are predictive tools, and reality is what you decide to shape it to be. Shape the LLM or it will shape you and your reality. If something seems off, always ask a question and verify.
You shouldn't be surprised that you have to learn to discern what information to keep and what to throw away. I throw away most information, even from LLMs. I get a lot of output from LLMs and reject a lot of the code they generate; I keep only what makes sense in the systems I build. If you look at my open source projects, a lot of output was rejected as I curated what I wanted to add from an LLM. This is a learned skill.
Use good LLMs to build intuition, the same way we did with Google Search. The best Google searchers were the ones who shaped their keywords to find results quickly. You didn't believe everything on Google Search, and you shouldn't believe everything from an LLM. DURH!