GPT-4 -- SAFETY
OpenAI spent six months making GPT-4 safer and more aligned. The company claims GPT-4 is 82% less likely than GPT-3.5 to respond to requests for disallowed content, 29% more likely to handle sensitive requests in accordance with OpenAI's policies, and 40% more likely to produce factual responses.
It's not perfect: you can still expect it to "hallucinate" from time to time and get facts wrong. GPT-4 reasons and predicts better than its predecessors, but you still shouldn't blindly trust the AI.