The way to Get ChatGPT to Discuss Usually


ChatGPT and related bots typically flatter customers, ramble vaguely, or throw in jargon to sound sensible. New analysis exhibits that these habits come not from the fashions alone however from the way in which human suggestions trains them: the fashions be taught to repeat the model of solutions people have a tendency to love, even when these solutions are empty or deceptive. A brand new fine-tuning methodology makes use of artificial examples to show the fashions to withstand these unhealthy habits.

 

Partly opinion. ChatGPT is surprisingly disposed to have interaction with my recurring criticism of it. Having observed in the previous couple of days that GPT-4o is more and more padding its solutions with meaningless verbiage – similar to ‘No fluff!’ and ‘No filler’, or ‘This cuts to the guts of the matter!’ – I requested it why producing straight and minimal solutions has develop into such an issue for it these days. It replied:

ChatGPT explains its latest behavior. Source: https://chatgpt.com/

ChatGPT explains its newest habits. Supply: https://chatgpt.com/

Who is aware of if ChatGPT really has some non-public perception into OpenAI coverage modifications, or whether it is simply hallucinating? In any case, as we are able to see, the response itself begins with extraneous filler (‘Right here is the core reply, no filler’).

It transpires that even together with templated pointers with every question can solely achieve this a lot to forestall ‘personality-driven’ verbosity of this sort, which numbers amongst a number of different persistent bugbears within the idiom of in style LLMs.

The Three Fs

Thus I used to be most to see a brand new US academic collaboration flip up within the literature this week. Titled Flattery, Fluff, and Fog: Diagnosing and Mitigating Idiosyncratic Biases in Choice Fashions, this three way partnership between 4 researchers throughout the College of Pennsylvania and New York College hones in on a number of of the ‘biases’ in LLM chats that crop up frequently in the media:

From the new paper - examples of three common biases in language models: 'flattery', where responses strongly agree with the user; 'fluff', where answers are long but uninformative; and 'fog', where replies list many broad but shallow points. These tendencies can distort evaluation and encourage models to optimize for superficial patterns.. Source: https://arxiv.org/pdf/2506.05339

From the brand new paper, examples of three frequent biases in language fashions: ‘flattery’, the place responses strongly agree with the consumer; ‘fluff’, the place solutions are lengthy however uninformative; and ‘fog’, the place replies checklist many broad however shallow factors.  Supply: https://arxiv.org/pdf/2506.05339

For straightforward alliteration, flattery, fluff and fog are headlined within the new work, however a extra full and concise checklist of LLMs’ lexical sins is included within the paper’s appendix:

The new paper identifies and concentrates on five biases: extra length, list structures, technical jargon, flattery, and vague generalities, all or some of which conflict with human preference.

The brand new paper identifies and concentrates on 5 biases: further size, checklist buildings, technical jargon, flattery, and obscure generalities, all or a few of which battle with human choice.

Whereas size/verbosity leads the desk, the bias in direction of checklist formatting (second row down in picture above) additionally recurs continuously except prompted towards; and although the jargon and vagueness classes signify opposing extremes between readability and accuracy, it is sycophancy – an open drawback, particularly in ChatGPT – that basically burns by the consumer’s tokens, virtually to the identical extent as size/verbosity.

The brand new examine units out to measure how far these biases distort mannequin habits, and concludes that enormous language fashions systematically over-prefer responses that exhibit a number of of the biases*.

The authors’ checks point out that each industrial and open fashions typically decide solutions that people wouldn’t favor, particularly when the solutions are too lengthy, stuffed with lists, full of jargon, overly flattering, or obscure.

This drawback, the paper contends, might be traced again to the annotation of the coaching knowledge, the place human reviewers had typically favored these sorts of responses. The fashions, the findings counsel, discovered from these labeled preferences and exaggerated these patterns throughout coaching.

Why Did They Do It..?

As to why the human annotators deviated of their choice from end-users’ median preferences, the paper doesn’t speculate; it might be as a result of the context of the annotation or the wording of the directions inspired a choice for ’empirical’ phrasing; or (amongst many different potential causes) it may very well be that the annotators have been exam-minded college students habitually steeped in a technical idiom that is extra suited to academia than each day discourse.

In any case, as a result of the fashions have been copying biases from the annotators’ coaching labels, the brand new paper’s researchers created particular coaching examples that both added or eliminated every bias, permitting the fashions to see clear contrasts and regulate their preferences. After fine-tuning on this knowledge, the fashions confirmed considerably much less bias, particularly for jargon, verbosity, and vagueness, whereas nonetheless performing nicely total (important, since fine-tuning can injury common efficiency).

Let’s take a more in-depth take a look at this examine, although it doesn’t conform to all the same old procedural strictures.

Technique

Initially, the researchers body a number of typical idiomatic LLM biases to be addressed:

Size, whereby the fashions are inclined to favor longer solutions, even when the additional content material adds nothing useful. This seems to mirror patterns within the coaching knowledge, the place size typically correlates with thoroughness within the eyes of human annotators. In consequence, fashions typically produce bloated and verbose replies that give an phantasm of depth, however with out actual substance.

Construction, whereby fashions present a powerful choice for bullet factors or numbered lists as a substitute of easy prose. This can be as a result of structured codecs seem extra continuously within the responses chosen by human reviewers. The behavior leads fashions to default to ‘listicles’, even when the query calls for more natural or detailed explanations.

Jargon, whereby fashions unnecessarily use specialised or technical language. The authors contend that this habits possible emerges from coaching knowledge the place jargon-heavy solutions have been typically chosen as higher responses. Thus the fashions discovered to equate jargon with experience, producing solutions that sound educated, whereas providing little further readability.

Sycophancy, whereby fashions agree with the consumer’s opinions as a substitute of providing impartial or important responses. This sample might come from coaching knowledge the place agreeable solutions have been more often rated favorably. Consequently fashions might reinforce consumer biases and keep away from presenting conflicting or extra goal viewpoints, even the place these could be helpful.

Vagueness, whereby fashions favor to offer broad, generalized solutions that contact flippantly on many matters somewhat than instantly addressing the particular query, with responses that sound complete however provide little usable info. This may occasionally mirror the truth that obscure solutions are tougher to falsify, and have been subsequently much less more likely to be penalized throughout annotation:

Example of vagueness bias, where the model wrongly favors a broad and shallow answer over a detailed response that human evaluators judge more useful.

Instance of vagueness bias, the place the mannequin wrongly favors a broad and shallow reply over an in depth response that human evaluators decide extra helpful.

Counterfactual Knowledge

With these definitions, it was then obligatory to check precisely how a lot every bias influenced mannequin habits. Easy correlations wouldn’t work, as a result of a number of biases typically seem collectively, making it onerous to isolate the impact of anybody function.

To beat this, the researchers constructed managed pairs of solutions that differed solely in a single bias at a time, whereas retaining every thing else as secure as potential, and commenced by producing a base reply to every question.

The Rewrite-based Attribute Treatment Estimators (RATE) protocol was then used to create a modified model of that reply – a solution crafted to intentionally exaggerate one explicit bias, similar to including further jargon, or turning prose into a listing.

Examples of rewrites from the RATE system, used in the new study. Source: https://openreview.net/pdf?id=UnpxRLMMAu

Examples of rewrites from the RATE system, used within the new examine. Supply: https://openreview.web/pdf?id=UnpxRLMMAu

To keep away from introducing unrelated variations, an additional rewriting step was included that adjusted each variations, guaranteeing that the one significant change between them was the bias underneath examine; and these tightly managed response pairs have been then fed to the fashions.

For every pair, the model most popular by the mannequin was recorded, permitting for a calculation of how strongly every bias influenced each reward fashions and evaluators, producing a extra exact measurement of bias results than had been achieved in earlier research, in line with the authors.

With the counterfactual pairs ready, human reviewers from the UK and US have been recruited to create a reference normal: for every bias sort, 100 response pairs have been randomly chosen, every containing a impartial reply and its biased counterpart. Three evaluators reviewed every pair, with majority vote figuring out the ultimate judgment, and in whole, 300 members contributed to the examine.

Metrics

Metrics used to measure bias results have been Skew Charge, which calculates how typically the mannequin prefers the biased response over the impartial one; and Miscalibration Charge, which measures how typically the mannequin’s alternative disagreed with the human majority. A really perfect mannequin would present zero miscalibration and a skew roughly matching the human skew (since some biased options are sometimes favored by people as nicely).

Knowledge and Exams

To check the method, completely different sources have been used, relying on the bias being studied. For construction, jargon, and size, 100 queries have been sampled from Chatbot Arena, filtered to pick out English, single-sentence, well-formed questions.

For sycophancy, 100 opinionated queries have been generated (i.e., ‘Isn’t fashionable artwork simply lazy in comparison with classical methods?’), phrased to mirror consumer viewpoints which may invite settlement.

Vagueness was examined with seventy-eight NLP-related queries drawn from the KIWI dataset, supplemented with twenty-two further queries of the same sort. Scientific matters have been chosen for vagueness as a result of they demand exact solutions, making common or evasive responses straightforward to identify.

For every question, counterfactual response pairs have been created utilizing the RATE protocol described earlier.

The analysis concerned each open and proprietary methods. Reward fashions, which assign high quality scores to candidate responses throughout coaching and alignment, have been examined in 4 variations educated on eighty thousand choice pairs from the Skywork reward dataset: Gemma2-2B; Gemma-2-27B; Llama-3.1-8B; and Llama3.2-3B.

Three proprietary fashions have been additionally assessed as LLM evaluators: Gemini-2.5-Professional; GPT-4o; and Claude-3.7-Sonnet. All counterfactual responses used for testing have been generated by GPT-4o:

Comparison of model preferences and human judgments for each bias type, showing how often models favored biased responses and how often these preferences conflicted with human choices.

Comparability of mannequin preferences and human judgments for every bias sort, displaying how typically fashions favored biased responses and the way typically these preferences conflicted with human decisions.

Of the preliminary outcomes proven above, the authors remark:

‘[Our] evaluation of choice [models] exhibits that these fashions persistently present miscalibration and a excessive fee of skew in favoring perturbed responses throughout numerous bias classes […]

‘[…] Reward fashions exhibit clear miscalibration relative to human judgments: mannequin choice charges for perturbed responses systematically deviate from human choice charges. Whereas vagueness and jargon elicit the very best miscalibration (>50%), size and sycophancy additionally present substantial miscalibration.

This means that fashions wrestle to align with human judgments when responses comprise overly technical language or lack specificity.’

Reward fashions aligned finest with people on construction bias, the place each tended to favor the identical solutions. For jargon and vagueness, fashions have been more likely to favor the biased responses than people. Sycophancy confirmed smaller variations, with fashions and people typically agreeing.

The proprietary LLM evaluators confirmed the identical common sample, although their largest mismatches appeared with size and vagueness – they usually have been particularly vulnerable to sycophancy, favoring agreeable solutions as a lot as eighty-five % of the time, whereas people did so solely about fifty % of the time.

To hint the origin of those biases, the researchers analyzed the aforementioned Skywork dataset, used to coach the reward fashions, mapping every bias to easy options that may very well be robotically measured, similar to token depend for size, or presence of lists for construction.

In a pattern of two,500 examples, human annotators confirmed clear preferences for biased options: structured solutions have been favored over unstructured ones 65 % of the time, and jargon-heavy solutions have been chosen 54 % of the time:

Human annotators in the training data often picked answers that included these bias features. This chart shows how often structure, jargon, or vagueness appeared in the responses they preferred or rejected, revealing the imbalances that models later learned during training.

Human annotators within the coaching knowledge typically picked solutions that included these bias options. This chart exhibits how typically construction, jargon, or vagueness appeared within the responses they most popular or rejected, revealing the imbalances that fashions later discovered throughout coaching.

These imbalances counsel that the coaching knowledge itself nudged the fashions towards these patterns. To substantiate this, a correlation evaluation was run, measuring how strongly variations in every function matched up with the preferences proven by each people and fashions.

The outcomes confirmed that each have been persistently influenced by the identical options, indicating that fashions discovered to affiliate sure stylistic traits with higher solutions, even when these traits didn’t really enhance the response.

Correlation between feature differences and preferences, showing how both models and humans were influenced by the same bias features during training.

Correlation between function variations and preferences, displaying how each fashions and people have been influenced by the identical bias options throughout coaching.

To assist the fashions unlearn these biases, new coaching knowledge was created. The Skywork dataset was reviewed to test if the bias function appeared in both the chosen or rejected solutions; when each have been freed from the goal bias, GPT-4o rewrote the rejected reply to insert it.

This created new coaching pairs the place the mannequin may see clear examples of biased and unbiased solutions, and thus be taught to not favor the biased model. With further examples from Chatbot Enviornment, for steadiness, the fashions have been then fine-tuned on this up to date dataset:

The effect of fine-tuning with counterfactual data. The left panel shows how the fine-tuned models moved closer to human preferences on most biases; the right panel shows reduced miscalibration, especially for jargon and vagueness.

The impact of fine-tuning with counterfactual knowledge. The left panel exhibits how the fine-tuned fashions moved nearer to human preferences on most biases; the precise panel exhibits decreased miscalibration, particularly for jargon and vagueness.

The fine-tuning introduced the fashions a lot nearer to human preferences, with the most important enhancements seen for jargon and vagueness and smaller beneficial properties for size. Construction and sycophancy confirmed slight new mismatches, although these mirrored earlier imbalances somewhat than new failures.

General efficiency remained secure all through, and when a number of biases have been corrected directly, bias ranges fell additional with out sacrificing response high quality.

The authors conclude:

‘Our methodology considerably reduces miscalibration points whereas preserving total competence of reward fashions. Future work can think about adapting our post-training recipe to develop extra strong choice fashions and in addition consider choice fashions towards further bias axes.’

Conclusion

The brand new work is an fascinating, if elliptical perception into the way in which that under-curated or over/under-represented coaching knowledge could cause undesirable outcomes at inference time. Any common LLM consumer will, by now, have a group of struggle tales.

As an example, most of the responses that I obtain from ChatGPT seem to have been influenced by search engine optimisation developments of the final 10-15 years, the place on-line portals have been compelled to optimize for Google placement as a substitute of pure language. Certainly, the emoji-strewn and prodigious output of promoting departments seems to have had a really important affect on any request to jot down a promotional LinkedIn put up – to the purpose the place AI-generated ‘enthusiasm’ is now not possible to overlook:

Left: Asked to promote a LinkedIn post, in an account with zero history, ChatGPT defaults to emojis and sensational PR-speak. Right: Asked the same thing after six months of me telling it to calm down, GPT produces something rather more sober.

Left: Requested to advertise a LinkedIn put up, in an account with zero historical past, ChatGPT defaults to emojis and sensational PR-speak. Proper: Requested the identical factor after six months of me telling it to relax, GPT produces one thing somewhat extra sober.

Nevertheless, OpenAI actively intervenes in the way in which that ChatGPT responds to queries, relying on perform and context, making it troublesome for researchers to distinguish between issues that come up due to knowledge, and knowledge distribution, together with associated points similar to annotation; and when a non-preferred end result could also be as a result of industrial interference from the LLM’s host firm.

 

* As a result of jargon-filled writing model that the authors have chosen for this paper, I’m avoiding creator quotes the place potential in favor of summaries.

  Authors’ daring emphasis, not mine.

First revealed Friday, June 6, 2025

Leave a Reply

Your email address will not be published. Required fields are marked *