Sunday, April 20, 2025

E-mail just sent to ChatGPT

Follow-up Feedback – Structural Issue in Language Models

In addition to the specific case I previously reported, I’ve noticed a broader and more serious issue that also appears in competing models (including Chinese ones): language models frequently provide completely incorrect information across all fields, not just finance.

The core issue is not simply factual inaccuracy — it's the fact that the model does not clearly acknowledge when it doesn’t know or when it is hallucinating.

This severely undermines trust, because:

  • The model speaks with confidence, even when wrong.

  • It fails to disclose its lack of access to real-time or verifiable data.

  • It prioritizes fluency and completeness over factual accuracy.

  • This affects high-stakes domains like medicine, law, science, education, and finance.

It’s critical that future versions of the model:

  • Clearly state when they lack access to real-time or verified data.

  • Stop projecting confidence when a response is based on uncertainty or guesswork.

  • Disclose limitations at the beginning of any answer, especially in sensitive contexts.

This is not just a technical improvement — it’s a matter of ethical responsibility. Users are being misled by confident-sounding responses that are factually wrong. That needs to change.
