FabSwingers.com
 


AI Generated Posts - Etiquette


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London

It seems clear that, recently, some posters have been posting occasional generative AI material. Not so much asking AI for information, digesting it, then posting... but generating a post and pasting it here pretty much verbatim. Without naming the people who do this, it's clear that this is becoming more common, with telltale features: a specific style, or a marked change from a poster's usual voice.

Much like how any significant cut & paste should have its source acknowledged (even if not linked, for ToS reasons), perhaps it would be good etiquette for AI-generated content to be acknowledged as such, with the exact prompt wording also shared.

It should also be used judiciously. Otherwise we'll end up with Grok essentially debating Gemini. Forum wars will become an arms race.

Thoughts?

Reply privately, Reply in forum +quote or View forums list

 

By *ex MexicoMan 36 weeks ago

North West


"It seems clear that, recently, some posters have been posting occasional generative AI material. Not so much asking AI for information, digesting it, then posting... but generating a post and pasting it here pretty much verbatim. Without naming the people who do this, it's clear that this is becoming more common, with telltale features: a specific style, or a marked change from a poster's usual voice.

Much like how any significant cut & paste should have its source acknowledged (even if not linked, for ToS reasons), perhaps it would be good etiquette for AI-generated content to be acknowledged as such, with the exact prompt wording also shared.

It should also be used judiciously. Otherwise we'll end up with Grok essentially debating Gemini. Forum wars will become an arms race.

Thoughts?"

I think that AI is a useful way of summarising facts about a subject. I am one of those people; my latest use was generating a list of press complaints about the Daily Express to refute someone's lazy, unevidenced claim that said newspaper was rigorously fact-checked.

As one exchange with another forum user has shown, some are prepared to reject wholesale the content of such posts based on their source, without considering the quality of the information within. So while I have no real issue with systematically identifying the source and even disclosing the prompt wording, I do think it creates a double standard: users still have de facto free rein to parrot whatever misinformation they like without attribution, as long as the source isn't AI-based. One user's now-deleted bogus story about a militant Muslim group that doesn't exist committing an assault that didn't happen is a case in point.

So were such a standard to be imposed, I would prefer it if any second-hand information were correctly sourced, not just the stuff quoted from an AI answer.


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London


"

So were such a standard to be imposed, I would prefer it if any second-hand information were correctly sourced, not just the stuff quoted from an AI answer."

100%


 

By *iman2100Man 36 weeks ago

Glasgow

I was recently accused of using AI to write a PM. This shocked me, because I have never used AI. It would appear that it is becoming a substitute for original thought and communication.

For that reason I regret the growing use of AI in the forums. Soon the forums will degenerate into AI chatting to itself. This will hand AI programmers the ultimate power over humanity.


 

By *ex MexicoMan 36 weeks ago

North West


"I was recently accused of using AI to write a PM. This shocked me, because I have never used AI. It would appear that it is becoming a substitute for original thought and communication.

For that reason I regret the growing use of AI in the forums. Soon the forums will degenerate into AI chatting to itself. This will hand AI programmers the ultimate power over humanity. "

Well, just as long as you don't panic...


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London


"I was recently accused of using AI to write a PM. This shocked me, because I have never used AI. It would appear that it is becoming a substitute for original thought and communication.

For that reason I regret the growing use of AI in the forums. Soon the forums will degenerate into AI chatting to itself. This will hand AI programmers the ultimate power over humanity. "

People should use MORE generative AI to educate themselves, being aware of bias and hallucination. Digest it critically, then make more informed posts on here. Why? Because many people don't seem to be able to perform a quick Google search before posting patently false or uninformed views. GenAI is perhaps a simpler way for people to educate themselves, which is a good thing (not without its own dangers, though). Current generations of AI are basically a human tone over curated search results.

Agreed on the AI vs AI debates, which is one of the factors that prompted this thread. Hence the suggestion that when a post is almost entirely AI-generated, rather than one's own knowledge or research, it should say so and include the prompt, because subtle prompt wording will often influence the outcome.


 

By *ostindreamsMan 36 weeks ago

London

The problem with AI is that it's not even second-hand information; it's third-hand. I have seen so many news sources being dismissed based on the newspaper that published them.

AI crawls through different newspaper articles and summarises them. This adds another layer of potential mistakes. And while the AI itself shares the links from which it got the information, people who share the answers don't.

There should also be some limit on when to use AI. If you want a list of facts about some incident that happened, that's fine. But letting the AI do the entire debate on your behalf is just laziness and doesn't add any value to the debate.


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London


"The problem with AI is that it's not even second-hand information; it's third-hand. I have seen so many news sources being dismissed based on the newspaper that published them.

AI crawls through different newspaper articles and summarises them. This adds another layer of potential mistakes. And while the AI itself shares the links from which it got the information, people who share the answers don't.

There should also be some limit on when to use AI. If you want a list of facts about some incident that happened, that's fine. But letting the AI do the entire debate on your behalf is just laziness and doesn't add any value to the debate."

Prompt (Gemini Pro):

Refute this forum post (above) in one paragraph, high school level English.

While your concerns about information integrity and user laziness are valid, they mischaracterize the fundamental nature of modern AI and misplace the blame. Advanced AI models don't just summarize a few recent articles, making them "third-hand" sources; instead, they synthesize information from a massive training dataset—encompassing books, scientific papers, and the broader internet—to generate novel responses, which can often be more accurate than a single source by cross-referencing vast amounts of data. The issue of people sharing AI-generated content without an attached source isn't a failure of the AI, which often provides its sources, but a failure of the user's digital literacy, a problem that has existed long before AI. Similarly, blaming AI for making debates lazy is like blaming a library for a student who only copies and pastes from an encyclopedia; the tool itself isn't the problem, but rather the user's lack of critical engagement. AI can be a powerful tool for research and formulating arguments, and its value is determined by how thoughtfully we use it, not by the potential for its misuse.


 

By (user no longer on site) 36 weeks ago

AI is an emerging technology and should be embraced and used like any other. But how accurate is it, actually? As far as I can see, the GIGO principle of "Garbage In, Garbage Out" applies more than ever: if the sources feeding the AI are flawed, so are the outputs. For that matter, what is 'truth' anyway? It's pretty subjective most of the time.


 

By *ellhungvweMan 36 weeks ago

Cheltenham


"The problem with AI is that it's not even second-hand information; it's third-hand. I have seen so many news sources being dismissed based on the newspaper that published them.

AI crawls through different newspaper articles and summarises them. This adds another layer of potential mistakes. And while the AI itself shares the links from which it got the information, people who share the answers don't.

There should also be some limit on when to use AI. If you want a list of facts about some incident that happened, that's fine. But letting the AI do the entire debate on your behalf is just laziness and doesn't add any value to the debate.

Prompt (Gemini Pro):

Refute this forum post (above) in one paragraph, high school level English.

While your concerns about information integrity and user laziness are valid, they mischaracterize the fundamental nature of modern AI and misplace the blame. Advanced AI models don't just summarize a few recent articles, making them "third-hand" sources; instead, they synthesize information from a massive training dataset—encompassing books, scientific papers, and the broader internet—to generate novel responses, which can often be more accurate than a single source by cross-referencing vast amounts of data. The issue of people sharing AI-generated content without an attached source isn't a failure of the AI, which often provides its sources, but a failure of the user's digital literacy, a problem that has existed long before AI. Similarly, blaming AI for making debates lazy is like blaming a library for a student who only copies and pastes from an encyclopedia; the tool itself isn't the problem, but rather the user's lack of critical engagement. AI can be a powerful tool for research and formulating arguments, and its value is determined by how thoughtfully we use it, not by the potential for its misuse."

That’s all fine but it doesn’t stop them hallucinating. As an example the above claims that AI “often provides its sources”.

I have never seen an LLM do that, precisely because its answers are all generated probabilistically. This is exactly the kind of subtle claim that sounds correct and would slip through superficial critical analysis, yet is just wrong. If something fundamental to the argument is wrong, what else isn't true?


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London


"

That’s all fine but it doesn’t stop them hallucinating. As an example the above claims that AI “often provides its sources”.

I have never seen an LLM do that precisely because its answers are all generated probabilistically. "

Gemini Pro, Grok 4 and others embed links in many responses, and it's really improving. But, as others (and LLMs themselves) agree, the garbage-in/garbage-out principle is still at play.


 

By *ostindreamsMan 36 weeks ago

London


"The problem with AI is that it's not even second-hand information; it's third-hand. I have seen so many news sources being dismissed based on the newspaper that published them.

AI crawls through different newspaper articles and summarises them. This adds another layer of potential mistakes. And while the AI itself shares the links from which it got the information, people who share the answers don't.

There should also be some limit on when to use AI. If you want a list of facts about some incident that happened, that's fine. But letting the AI do the entire debate on your behalf is just laziness and doesn't add any value to the debate.

Prompt (Gemini Pro):

Refute this forum post (above) in one paragraph, high school level English.

While your concerns about information integrity and user laziness are valid, they mischaracterize the fundamental nature of modern AI and misplace the blame. Advanced AI models don't just summarize a few recent articles, making them "third-hand" sources; instead, they synthesize information from a massive training dataset—encompassing books, scientific papers, and the broader internet—to generate novel responses, which can often be more accurate than a single source by cross-referencing vast amounts of data. The issue of people sharing AI-generated content without an attached source isn't a failure of the AI, which often provides its sources, but a failure of the user's digital literacy, a problem that has existed long before AI. Similarly, blaming AI for making debates lazy is like blaming a library for a student who only copies and pastes from an encyclopedia; the tool itself isn't the problem, but rather the user's lack of critical engagement. AI can be a powerful tool for research and formulating arguments, and its value is determined by how thoughtfully we use it, not by the potential for its misuse."

Bit rich coming from a piece of code that has struggled for a long time to find how many 'r's are in strawberry.

I know. When the AI rules the world, I am going to be taken prisoner for saying this.
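(For context, the strawberry jab refers to a real and well-documented LLM weakness: models process text as tokens rather than individual characters, so character-level questions can trip them up, even though the check is trivial for conventional code. A minimal Python illustration:)

```python
# LLMs read text as tokens, not characters, which is why the famous
# "how many r's in strawberry" question has tripped them up.
# For ordinary code, the count is a one-line string operation:
count = "strawberry".count("r")
print(count)  # prints 3
```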


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London

[Removed by poster at 26/08/25 09:13:00]


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London


"

Bit rich coming from a piece of code that has struggled for a long time to find how many 'r's are in strawberry.

I know. When the AI rules the world, I am going to be taken prisoner for saying this."

That was simply an illustration of how a lazy forum poster could say "refute this" to produce an AI-slop wall of text against any issue, from any angle. Once that becomes commonplace, the political forums are dead.


 

By *ostindreamsMan 36 weeks ago

London


"

Bit rich coming from a piece of code that has struggled for a long time to find how many 'r's are in strawberry.

I know. When the AI rules the world, I am going to be taken prisoner for saying this.

That was simply an illustration of how a lazy forum poster could say "refute this" to produce an AI-slop wall of text against any issue, from any angle. Once that becomes commonplace, the political forums are dead."

I know. I was just joking. Today, people try to spot AI-generated text by looking for certain patterns. But it won't be long before AI gets good enough to hide those patterns, and no one will be able to tell the difference.

We are heading into a Blade Runner future.


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London


"

I know. I was just joking. Today, people try to spot AI-generated text by looking for certain patterns. But it won't be long before AI gets good enough to hide those patterns, and no one will be able to tell the difference.

We are heading into a Blade Runner future."

A great comic strip:

A guy says to his computer - "turn these bullet points into an email and send". The guy who received it then says to his computer - "summarise this email into bullet points for me".


 

By *ellhungvweMan 36 weeks ago

Cheltenham


"

That’s all fine but it doesn’t stop them hallucinating. As an example the above claims that AI “often provides its sources”.

I have never seen an LLM do that precisely because its answers are all generated probabilistically.

Gemini Pro, Grok 4 and others embed links in many responses, and it's really improving. But, as others (and LLMs themselves) agree, the garbage-in/garbage-out principle is still at play."

I have never used Grok, and my experience of Gemini is poor (at best). I have never seen any of the others give valid sources, but I bow to your better experience.


 

By *otMe66Man 36 weeks ago

Terra Firma


"

That’s all fine but it doesn’t stop them hallucinating. As an example the above claims that AI “often provides its sources”.

I have never seen an LLM do that precisely because its answers are all generated probabilistically.

Gemini Pro, Grok 4 and others embed links in many responses, and it's really improving. But, as others (and LLMs themselves) agree, the garbage-in/garbage-out principle is still at play."

The use of AI reminds me of the calculator problem: if you have no idea what the answer should be, reliance on the tool is always a risk.


 

By *ostindreamsMan 36 weeks ago

London


"

A great comic strip:

A guy says to his computer - "turn these bullet points into an email and send". The guy who received it then says to his computer - "summarise this email into bullet points for me"."

It would be interesting to see whether the bullet points the receiver got from his computer were the same as the ones the sender originally wrote.


 

By *ggdrasil66Man 36 weeks ago

Saltdean

I asked ChatGPT why I should use AI to answer political questions on an internet forum. Pasted below is the answer I got, including reasons for and against.

“Using AI to answer political questions on an internet forum can be valuable, but it depends on your goals. Here’s a breakdown of why it might be useful—and where to be cautious:

✅ Why You Might Use AI for Political Discussions

Speed & Convenience

AI can quickly summarize complex political issues, recent news, legislation, or historical context—saving you time in research.

Clarity & Structure

AI can help you present arguments logically and clearly, which is especially useful in high-traffic forums where clarity matters.

Fact-Checking & Source Ideas

AI can help verify claims or point you toward credible sources (though double-checking is still important).

Neutral Starting Point

AI can offer balanced perspectives, helping you frame arguments in a less biased tone or understand opposing views.

Drafting Support

If you're unsure how to phrase your argument or response, AI can help with tone, rhetoric, or strategy.

⚠️ But Be Cautious About…

Outdated or Incorrect Info

AI might not have the most current data unless connected to the web, and hallucinations (fabricated info) can happen.

Lack of Nuance

AI can generalize or miss the emotional, cultural, or local subtleties that make political arguments meaningful.

Forum Rules or Ethics

Some communities discourage AI-generated content—check the rules. Passing off AI-written content as entirely your own can also be ethically fuzzy in some contexts.

Echo Chambers

Relying too much on AI might limit your exposure to genuine human dialogue or evolving grassroots political thought.

👨‍💻 Best Use Cases

Drafting outlines or arguments

Exploring alternative viewpoints

Getting a quick refresher on political theory or policy

Editing your own thoughts into stronger posts

Avoiding unintentional fallacies or weak logic

If you use AI as a tool rather than a crutch, it can be a major asset. Want help with a draft or example?”


 

By *he Flat CapsCouple 36 weeks ago

Pontypool


"I asked ChatGPT why I should use AI to answer political questions on an internet forum. Pasted below is the answer I got, including reasons for and against.

“Using AI to answer political questions on an internet forum can be valuable, but it depends on your goals. Here’s a breakdown of why it might be useful—and where to be cautious:

✅ Why You Might Use AI for Political Discussions

Speed & Convenience

AI can quickly summarize complex political issues, recent news, legislation, or historical context—saving you time in research.

Clarity & Structure

AI can help you present arguments logically and clearly, which is especially useful in high-traffic forums where clarity matters.

Fact-Checking & Source Ideas

AI can help verify claims or point you toward credible sources (though double-checking is still important).

Neutral Starting Point

AI can offer balanced perspectives, helping you frame arguments in a less biased tone or understand opposing views.

Drafting Support

If you're unsure how to phrase your argument or response, AI can help with tone, rhetoric, or strategy.

⚠️ But Be Cautious About…

Outdated or Incorrect Info

AI might not have the most current data unless connected to the web, and hallucinations (fabricated info) can happen.

Lack of Nuance

AI can generalize or miss the emotional, cultural, or local subtleties that make political arguments meaningful.

Forum Rules or Ethics

Some communities discourage AI-generated content—check the rules. Passing off AI-written content as entirely your own can also be ethically fuzzy in some contexts.

Echo Chambers

Relying too much on AI might limit your exposure to genuine human dialogue or evolving grassroots political thought.

👨‍💻 Best Use Cases

Drafting outlines or arguments

Exploring alternative viewpoints

Getting a quick refresher on political theory or policy

Editing your own thoughts into stronger posts

Avoiding unintentional fallacies or weak logic

If you use AI as a tool rather than a crutch, it can be a major asset. Want help with a draft or example?”"

Sounds like you use AI a lot

.......


 

By *ophieslutTV/TS 36 weeks ago
Forum Mod

Central

I'd prefer human input, alongside any AI stuff, with reference to what's AI content. Better still would be totally human created text


 

By *oubleswing2019Man 36 weeks ago

Colchester

[Removed by poster at 27/08/25 00:31:06]


 

By *oubleswing2019Man 36 weeks ago

Colchester

I'm more inclined to read something articulated clearly and logically by AI.

.

I kind of view it like asking a subject matter expert to preside over 2 people arguing in a pub.

.

In fact, let's give it go.

.

My prompt will be: "Bob is arguing that fossil fuel dependence is a bad thing. Steve is arguing that it is a good thing. I can see both their points for different reasons, but can you? What are your thoughts, and who, if any, is right and why?"

.

AI replied

.

"Both Bob and Steve can be right depending on the lens you use. Let me break it down from both sides before giving you my take."

.

[It gave lengthy reasons why both Bob and Steve are right in what they say]

.

Its final conclusion was

.

"Neither is wholly right or wrong. Steve is right about current necessity; Bob is right about future unsustainability. The key question is not “fossil fuels: good or bad?” but “how quickly and responsibly can we shift away from them without collapse?”"

.

My take is that AI is a useful tool when used wisely.


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London


""Bob is arguing that fossil fuel dependence is a bad thing. Steve is arguing that it is a good thing. I can see both their points for different reasons, but can you? What are your thoughts, and who, if any, is right and why?"

...

"Neither is wholly right or wrong. Steve is right about current necessity; Bob is right about future unsustainability. The key question is not “fossil fuels: good or bad?” but “how quickly and responsibly can we shift away from them without collapse?”"

But that's rubbish... It's given you what you want.

The point of discussion is "fossil fuel dependence is a bad thing". That's objectively true, unless you're making money out of fossil fuels. It then contradicts itself later by explaining that we need to eliminate dependence. You have challenged it to find arguments for both sides ("I can see both their points for different reasons, but can you"), so it has given you what you asked for. This is what it does.

Now, Mr Human: can you explain in your own words why DEPENDENCE on fossil fuel is good? Just one reason? Ask even a climate change sceptic and they'll say "dependence on one thing is a problem, but I prefer fossil fuels because of the growl of an ICE engine and not having to stop and charge my car every three hours for an hour".

Were the topic "fossil fuels are necessary/efficient/productive", then an argument could be made. But your prompt has instructed it to find balance, and it has. That's how generative AI works.


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London


"?

But that's rubbish... It's given you what you want. "

In fact, this perfectly illustrates why it's important to share the prompt, which is what introduces the bias that underpins the entire response.


 

By *ex MexicoMan 36 weeks ago

North West


"?

But that's rubbish... It's given you what you want.

In fact, this perfectly illustrates why it's important to share the prompt, which is what introduces the bias that underpins the entire response."

Exactly. This is why I use AI to find and summarise data, not to formulate arguments; the balance is my responsibility. For example, when another poster recently argued that the Guardian was far guiltier of bad journalism than the right-wing tabloids, I fed two prompts into ChatGPT. They were:

- Are there any examples of UK right-wing tabloids reporting dishonestly

and

- Are there any examples of the Guardian reporting dishonestly

When you present the data side by side, the conclusion is clear. I use AI as a really advanced search engine that summarises data in plain, well-presented language. If you ask it to do the comparison itself, it starts introducing unknown bias: either the need to create balance where there is none (as in the fossil fuels argument above) or, conversely, the need to skew the results towards what it thinks I want it to say (it knows my political leanings).

I do find AI useful to speak to conversationally but using it to make arguments for me is not what I think it's for.


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London


"I use AI to find and summarise data, not to formulate arguments; the balance is my responsibility."

Exactly the perfect use of generative AI.


 

By *ggdrasil66Man 36 weeks ago

Saltdean


"I asked ChatGPT why I should use AI to answer political questions on an internet forum. Pasted below is the answer I got, including reasons for and against.

“Using AI to answer political questions on an internet forum can be valuable, but it depends on your goals. Here’s a breakdown of why it might be useful—and where to be cautious:

✅ Why You Might Use AI for Political Discussions

Speed & Convenience

AI can quickly summarize complex political issues, recent news, legislation, or historical context—saving you time in research.

Clarity & Structure

AI can help you present arguments logically and clearly, which is especially useful in high-traffic forums where clarity matters.

Fact-Checking & Source Ideas

AI can help verify claims or point you toward credible sources (though double-checking is still important).

Neutral Starting Point

AI can offer balanced perspectives, helping you frame arguments in a less biased tone or understand opposing views.

Drafting Support

If you're unsure how to phrase your argument or response, AI can help with tone, rhetoric, or strategy.

⚠️ But Be Cautious About…

Outdated or Incorrect Info

AI might not have the most current data unless connected to the web, and hallucinations (fabricated info) can happen.

Lack of Nuance

AI can generalize or miss the emotional, cultural, or local subtleties that make political arguments meaningful.

Forum Rules or Ethics

Some communities discourage AI-generated content—check the rules. Passing off AI-written content as entirely your own can also be ethically fuzzy in some contexts.

Echo Chambers

Relying too much on AI might limit your exposure to genuine human dialogue or evolving grassroots political thought.

👨‍💻 Best Use Cases

Drafting outlines or arguments

Exploring alternative viewpoints

Getting a quick refresher on political theory or policy

Editing your own thoughts into stronger posts

Avoiding unintentional fallacies or weak logic

If you use AI as a tool rather than a crutch, it can be a major asset. Want help with a draft or example?”

Sounds like you use AI a lot

....... "

No, I haven’t got it installed anywhere; that was just me doing a Google search and asking ChatGPT a question. Truth is, I don’t trust it: the way it deals with straight-up questions is just not straight up. I mean, why not answer yes or no, with a short explanation after that?


 

By (user no longer on site) 36 weeks ago


"I asked ChatGPT why I should use AI to answer political questions on an internet forum. Pasted below is the answer I got, including reasons for and against.

“Using AI to answer political questions on an internet forum can be valuable, but it depends on your goals. Here’s a breakdown of why it might be useful—and where to be cautious:

✅ Why You Might Use AI for Political Discussions

Speed & Convenience

AI can quickly summarize complex political issues, recent news, legislation, or historical context—saving you time in research.

Clarity & Structure

AI can help you present arguments logically and clearly, which is especially useful in high-traffic forums where clarity matters.

Fact-Checking & Source Ideas

AI can help verify claims or point you toward credible sources (though double-checking is still important).

Neutral Starting Point

AI can offer balanced perspectives, helping you frame arguments in a less biased tone or understand opposing views.

Drafting Support

If you're unsure how to phrase your argument or response, AI can help with tone, rhetoric, or strategy.

⚠️ But Be Cautious About…

Outdated or Incorrect Info

AI might not have the most current data unless connected to the web, and hallucinations (fabricated info) can happen.

Lack of Nuance

AI can generalize or miss the emotional, cultural, or local subtleties that make political arguments meaningful.

Forum Rules or Ethics

Some communities discourage AI-generated content—check the rules. Passing off AI-written content as entirely your own can also be ethically fuzzy in some contexts.

Echo Chambers

Relying too much on AI might limit your exposure to genuine human dialogue or evolving grassroots political thought.

👨‍💻 Best Use Cases

Drafting outlines or arguments

Exploring alternative viewpoints

Getting a quick refresher on political theory or policy

Editing your own thoughts into stronger posts

Avoiding unintentional fallacies or weak logic

If you use AI as a tool rather than a crutch, it can be a major asset. Want help with a draft or example?”

Sounds like you use AI a lot

.......

No, I haven’t got it installed anywhere, that was just me doing a google search and asking chat gpt a question. Truth is, I don’t trust it, the way it deals with strait up questions is just not strait up. I mean why not answer yes or no, with a short explanation after that? "

Lol, this is a very Reform-supporter type of comment: looking for a yes/no answer to a more complicated question, and not being interested in the context, information and background.

Fair play to you.

Reply privately, Reply in forum +quote or View forums list

 

By *ornucopiaMan 36 weeks ago

Bexley


"

Sounds like you use AI a lot

.......

... Truth is, I don’t trust it, the way it deals with strait up questions is just not strait up. I mean why not answer yes or no, with a short explanation after that? "

Why not apply the same principle to Farage?!


 

By *hirleyMan 36 weeks ago

Wine bar


" Without naming the people who do this, it's clear that this is becoming more common, with telltale features - specific style, or a marked change in tone from a poster's regular time.

"

I don't know why you couldn't at least quote a few posts that you think do this, if it's as common as you say.


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London


" Without naming the people who do this, it's clear that this is becoming more common, with telltale features - specific style, or a marked change in tone from a poster's regular time.

I don't know why you couldn't at least quote a few posts that you think do this, if it’s like you say so common."

Could have done, yes. But that might not have been very nice, and this was a friendly suggestion, not calling anyone specific out. As it happens, someone who had been using AI (zero problem with it!) then volunteered that they were, and we discussed this amicably.

So what's your issue? How would examples have helped in this thread, other than to point a finger (since the tone of the thread might have looked like criticism)? And you got what you wanted anyway - the person using it put their hand up and gave examples and rationale.


 

By *ex MexicoMan 36 weeks ago

North West


" Without naming the people who do this, it's clear that this is becoming more common, with telltale features - specific style, or a marked change in tone from a poster's regular time.

I don't know why you couldn't at least quote a few posts that you think do this, if it’s like you say so common.

Could have done, yes. But that might not have been very nice, and this was a friendly suggestion, not calling anyone specific out. As it happens, someone who had been using AI (zero problem with it!) then volunteered that they were, and we discussed this amicably.

So what's your issue? How would examples have helped in this thread, other than to point a finger (since tone of the thread might have looked like criticism). And you got what you wanted anyway - the person using it put up their hand and gave examples and rationale.

"

Just used it again in the ECHR thread, according to your suggested guidelines - included which AI I used and the prompt wording.

Again I used it to summarise information - first the case for the ECHR and then the case against. Inevitably some opinion crept in as it looked at all available sources including media editorial, but in my estimation doing it this way still yielded an objective assessment, which is that scrapping the ECHR involves sacrificing a lot of protection in favour of one very narrow and questionable benefit.


 

By *resesse_Meliorem OP   Couple 36 weeks ago

Border of London


"

Just used it again in the ECHR thread, according to your suggested guidelines - included which AI I used and the prompt wording.

Again I used it to summarise information - first the case for the ECHR and then the case against. Inevitably some opinion crept in as it looked at all available sources including media editorial, but in my estimation doing it this way still yielded an objective assessment, which is that scrapping the ECHR involves sacrificing a lot of protection in favour of one very narrow and questionable benefit."

What's your view on the suggestion to name the tool and supply the prompt? Do you feel it adds value/clarity?


 

By *ex MexicoMan 36 weeks ago

North West


"

Just used it again in the ECHR thread, according to your suggested guidelines - included which AI I used and the prompt wording.

Again I used it to summarise information - first the case for the ECHR and then the case against. Inevitably some opinion crept in as it looked at all available sources including media editorial, but in my estimation doing it this way still yielded an objective assessment, which is that scrapping the ECHR involves sacrificing a lot of protection in favour of one very narrow and questionable benefit.

What's your view on the suggestion to name the tool and supply the prompt? Do you feel it adds value/clarity?"

Sure, and it gives the reader the option to enter the same prompt into their AI of choice and see what it yields for them.

I assume that my ChatGPT app will have tailored its answers to what it knows of my politics. I would be interested to see what the same prompts yield when entered by a user with different political views to mine.


 

By *hirleyMan 36 weeks ago

Wine bar

[Removed by poster at 29/08/25 10:26:16]


 

By *hirleyMan 36 weeks ago

Wine bar


" Without naming the people who do this, it's clear that this is becoming more common, with telltale features - specific style, or a marked change in tone from a poster's regular time.

I don't know why you couldn't at least quote a few posts that you think do this, if it’s like you say so common.

Could have done, yes. But that might not have been very nice, and this was a friendly suggestion, not calling anyone specific out. As it happens, someone who had been using AI (zero problem with it!) then volunteered that they were, and we discussed this amicably.

So what's your issue? How would examples have helped in this thread, other than to point a finger (since tone of the thread might have looked like criticism). And you got what you wanted anyway - the person using it put up their hand and gave examples and rationale.

"

What I'm getting at is that if a post is structured to inform, that should be obvious, and it's not necessarily a bad thing that a post is AI generated.

It's all about context and showing nuance imo. Something that AI sometimes does but sometimes misses completely.

This comes down to the individual who created the post, because using AI as the source is really just like using web search, YouTube, newspapers, Facebook, Twitter etc. If someone has shown bias, lazily copied straight from the source and framed it as their own, or not shown any critical thinking, then it will still show, and even more so imo.

AI is a tool, and if used correctly it's a great one. My personal opinion is that if you have done programming or something similar, you probably have (for want of a better word) an 'advantage'/head start in using it over someone who hasn't. That is a problem, as not everyone is or will be a programmer...

I do agree with the overall sentiment of your point/OP, but people take themselves far more seriously than they need to on the Fab forums anyway, let alone in Politics.


 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London

Bumping this thread for obvious reasons.


 

By (user no longer on site) 27 weeks ago


"Bumping this thread for obvious reasons.

"

Hi — apparently I’m “obvious reasons.”


 

By *I TwoCouple 27 weeks ago

near enough


"In a PM I was recently accused of using AI to write it. This shocked me, because I never have used AI. It would appear that it is now becoming a substitute for original thought and communication.

For that reason I regret to see the growing use AI in forums. Soon the forums will degenerate into AI chatting to itself. This will hand to the AI programmers the ultimate power over humanity. "

Well, to be fair, there isn't much natural intelligence in most forums, so AI talking to itself would be an improvement.


 

By (user no longer on site) 27 weeks ago

Just to clear up a common assumption: I don’t use AI to write my arguments. I use it as an accessibility tool.

Because of chemo brain, ADHD and fatigue, structuring long posts can be hard even when I already know exactly what I want to say. I write my own reasoning and evidence first, then use AI to help format it into something readable and consistent with the forum’s layout.

Everything factual in my posts is verified, sourced, and my own research. The tool just helps me present it clearly — the same way someone might use Grammarly, voice-to-text, or an editor before publishing.

The AI I use is specifically instructed not to invent evidence or agree with me just because it fits my view. It’s bound by rules that prioritise verified data, logical consistency, and full transparency. I aim for truth, not confirmation bias.

If anyone wants to see how I work, I’m happy to share the guidance and rules I use to keep everything transparent and evidence-based.


 

By *ortyairCouple 27 weeks ago

Wallasey


"Just to clear up a common assumption: I don’t use AI to write my arguments. I use it as an accessibility tool.

Because of chemo brain, ADHD and fatigue, structuring long posts can be hard even when I already know exactly what I want to say. I write my own reasoning and evidence first, then use AI to help format it into something readable and consistent with the forum’s layout.

Everything factual in my posts is verified, sourced, and my own research. The tool just helps me present it clearly — the same way someone might use Grammarly, voice-to-text, or an editor before publishing.

The AI I use is specifically instructed not to invent evidence or agree with me just because it fits my view. It’s bound by rules that prioritise verified data, logical consistency, and full transparency. I aim for truth, not confirmation bias.

If anyone wants to see how I work, I’m happy to share the guidance and rules I use to keep everything transparent and evidence-based."

How do you explain some of the formatting issues which clearly show it's a piece of AI text?

Mrs x


 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"Just to clear up a common assumption: I don’t use AI to write my arguments. I use it as an accessibility tool.

Because of chemo brain, ADHD and fatigue, structuring long posts can be hard even when I already know exactly what I want to say. I write my own reasoning and evidence first, then use AI to help format it into something readable and consistent with the forum’s layout.

Everything factual in my posts is verified, sourced, and my own research. The tool just helps me present it clearly — the same way someone might use Grammarly, voice-to-text, or an editor before publishing.

The AI I use is specifically instructed not to invent evidence or agree with me just because it fits my view. It’s bound by rules that prioritise verified data, logical consistency, and full transparency. I aim for truth, not confirmation bias.

If anyone wants to see how I work, I’m happy to share the guidance and rules I use to keep everything transparent and evidence-based."

You began a thread that analysed responses from 17 different organisations, summarising why you feel they rejected the Cass Report. Did you actually perform that analysis yourself? Or did you ask GenAI to "give evidence and findings from organisations who opposed the Cass Report"?


 

By (user no longer on site) 27 weeks ago


"How do you explain some of the formatting issues which clearly shows its a piece of AI text?

Mrs x"

That’s fair — the formatting probably does look like AI tidied it, because it did.

I write the arguments and gather the evidence myself, then use AI to polish and format it so it’s readable. Editing isn’t the same thing as writing.

I use AI as an accessibility aid, not a ghost writer.

If the way I express myself reads as too polished or too structured, that’s because I’m autistic and need tools to organise my thoughts clearly.

If anyone has a problem with that, that’s not a debate about honesty — it’s a debate about whether disabled people deserve equal access to communication. And I genuinely couldn’t give a flying fuck if that offends anyone’s aesthetic sensibilities.


 

By *ortyairCouple 27 weeks ago

Wallasey


"Just to clear up a common assumption: I don’t use AI to write my arguments. I use it as an accessibility tool.

Because of chemo brain, ADHD and fatigue, structuring long posts can be hard even when I already know exactly what I want to say. I write my own reasoning and evidence first, then use AI to help format it into something readable and consistent with the forum’s layout.

Everything factual in my posts is verified, sourced, and my own research. The tool just helps me present it clearly — the same way someone might use Grammarly, voice-to-text, or an editor before publishing.

The AI I use is specifically instructed not to invent evidence or agree with me just because it fits my view. It’s bound by rules that prioritise verified data, logical consistency, and full transparency. I aim for truth, not confirmation bias.

If anyone wants to see how I work, I’m happy to share the guidance and rules I use to keep everything transparent and evidence-based.

You began a thread, which analysed responses from 17 different organisations, summarising why you feel that they rejected the Cass Report. Did you actually perform that analysis yourself? Or did you ask GenAI to "give evidence and findings from organisations who opposed the Cass Report"?"

That's a belter of a thread. What was the purpose of saying something was universally panned by the relevant professional bodies in the field, when in fact it was only one, and they had to change their stance when they realised how wrong they'd got it?

I think it's how you ask the AI things; the questions you ask aren't objective, they seem very subjective.

Mrs x


 

By (user no longer on site) 27 weeks ago


"You began a thread, which analysed responses from 17 different organisations, summarising why you feel that they rejected the Cass Report. Did you actually perform that analysis yourself? Or did you ask GenAI to "give evidence and findings from organisations who opposed the Cass Report"?"

Actually, the first thing I did was lay down the ground rules for how I post in a thread — evidence first, logic second, no speculation.

Then I asked the AI to find and cite every accredited organisation or expert who had publicly commented on the Cass Review, including whether they endorsed the goal, the findings, and if they’d implemented anything from it.

It returned a list with citations.

I then went through each citation myself, read the original statements, and made notes on what was accurate or misleading.

After that, I told it to remove ideological sources like the LGB Alliance and Mermaids to keep the dataset balanced.

Finally, I had it rebuild the list using only those verified entries, and I re-checked every single one.

So yes — I performed the analysis. The AI just saved me the manual copy-and-paste work.

If anyone thinks that’s the same as letting AI write my argument, I honestly don’t know what to tell them. Doing your own research, checking your sources, and asking a tool to sort the data isn’t “outsourcing thought.” It’s critical thinking done efficiently.


 

By (user no longer on site) 27 weeks ago


"Thats a better of a thread, what was the purpose of saying something was universally panned by the relevant professional bodies in the field, when in fact it was only one and they had to change their stance when they realised how wrong they got it.

I think its how you ask the AI things, the questions you ask arent objective, they seem very subjective.

Mrs x"

If you’re asking about why I said what I did regarding the Cass Review, that belongs in the Cass thread.

This one's just about the use of AI.


 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"You began a thread, which analysed responses from 17 different organisations, summarising why you feel that they rejected the Cass Report. Did you actually perform that analysis yourself? Or did you ask GenAI to "give evidence and findings from organisations who opposed the Cass Report"?

Actually, the first thing I did was lay down the ground rules for how I post in a thread — evidence first, logic second, no speculation.

Then I asked the AI to find and cite every accredited organisation or expert who had publicly commented on the Cass Review, including whether they endorsed the goal, the findings, and if they’d implemented anything from it.

It returned a list with citations.

I then went through each citation myself, read the original statements, and made notes on what was accurate or misleading.

After that, I told it to remove ideological sources like the LGB Alliance and Mermaids to keep the dataset balanced.

Finally, I had it rebuild the list using only those verified entries, and I re-checked every single one.

So yes — I performed the analysis. The AI just saved me the manual copy-and-paste work.

If anyone thinks that’s the same as letting AI write my argument, I honestly don’t know what to tell them. Doing your own research, checking your sources, and asking a tool to sort the data isn’t “outsourcing thought.” It’s critical thinking done efficiently."

Okay, then. And that whole (thinking done efficiently) process took how long, out of curiosity?


 

By (user no longer on site) 27 weeks ago

That’s a really good point — I think a lot of us are starting to notice the same trend. AI tools can definitely help people organize their thoughts or express something more clearly, but when an entire post is lifted straight from an AI without context, it can feel a bit disingenuous. Part of what makes a forum discussion valuable is hearing each person’s authentic voice and perspective.

Acknowledging when AI has been used — and ideally sharing the prompt — seems like a fair and transparent practice. It helps readers understand where the ideas are coming from, and it can actually make for a more interesting discussion about how people are using these tools.

That said, I wouldn’t want to discourage thoughtful use of AI entirely. If someone uses it to clarify a complex point or generate examples, that can add value — as long as they remain open about it. The key, as you said, is moderation and honesty. Otherwise, yeah, we’ll end up with bots debating bots instead of humans exchanging ideas.


 

By (user no longer on site) 27 weeks ago


"Okay, then. And that whole (thinking done efficiently) process took how long, out of curiosity?"

Honestly, I’m not entirely sure — I was in an AuDHD hyperfocus, and time tends to blur when that happens.

But I’d estimate somewhere between 60 and 90 minutes, give or take.

It was pretty much all I was doing between saying it needed its own thread and actually posting it.

You could probably work it out from the timestamps if you really wanted to.


 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"Okay, then. And that whole (thinking done efficiently) process took how long, out of curiosity?

Honestly, I’m not entirely sure — I was in an AuDHD hyperfocus, and time tends to blur when that happens.

But I’d estimate somewhere between 60 and 90 minutes, give or take.

It was pretty much all I was doing between saying it needed its own thread and actually posting it.

You could probably work it out from the timestamps if you really wanted to."

Efficient, then.


 

By (user no longer on site) 27 weeks ago

I mentioned it before.

I have a project in ChatGPT which is exclusively for the forums.

Each thread has its own chat.

At the start of each chat I give it the rules of discussion, which are below...

TheNerdyFemby’s Posting Rules, Style Guide & Accessibility Protocol

This is how I use AI in forum discussions and how my posts are structured.

It’s not a writing assistant in the creative sense — it’s an accessibility aid.

Because of chemo brain, ADHD and fatigue, structuring long arguments can be difficult even when I already know exactly what I want to say.

AI helps with clarity, formatting, and structure, but the reasoning and evidence are always mine.

Editing isn’t the same as writing.

---

1. Evidence Rules

Rule 1 — Evidence over agreement

Do not agree with me just because it’s what I want to hear.

Agreement should only follow when the available evidence supports the claim.

Acceptable evidence includes official statistics (ONS, GOV.UK, UN, WHO), peer-reviewed research, verified FOI responses, and reputable news sources.

Rule 2 — Verifiable and accessible sources

Only cite information that can be independently verified. No paywalled, broken, or inaccessible material.

Rule 3 — Respect the forum’s code of conduct

Stay within site rules. No personal attacks or inflammatory wording.

Rule 4 — Keep tone factual and language neutral

Focus on facts and reasoning, not rhetoric or emotional appeals.

Rule 5 — Transparency about uncertainty

When evidence is partial or suggestive, say so clearly.

Rule 6 — Update positions when better data emerges

If new credible information changes the picture, I update accordingly.

Rule 7 — Source presentation within platform limits

Name the publication, title, and quote the relevant passage. Link only if the forum allows it.

Rule 8 — Correct misinformation constructively

Recognise accurate points first, then calmly correct errors with sourced facts.

Rule 9 — Don’t feed provocation

If someone is arguing in bad faith, I give one factual reply and disengage.

---

2. Tone & Style Guide

Calm, factual, and direct.

No melodrama or posturing.

Respond only to what’s actually been said.

Avoid speculation, assumptions, or emotional language.

Sarcasm and dry humour are fine when used to expose hypocrisy or absurdity.

Precision first: define terms, avoid vague moralising.

Expose logical fallacies, but don’t lecture.

Correct misrepresentations of my position immediately.

If opponents get personal: “That’s not an argument, it’s an insult.”

If they rely on emotion: “Feelings aren’t evidence.”

I write for the readers, not the person arguing.

Core principles: truth over comfort, clarity over persuasion, integrity over performance.

---

3. How I Use AI

I use AI as an accessibility aid, not a ghost writer.

If the way I express myself reads as too polished or too structured, that’s because I’m autistic and need tools to organise my thoughts clearly.

The AI I use is specifically instructed not to invent evidence or agree with me just because it fits my view.

It prioritises verified data, logical consistency, and transparency.

I aim for truth, not confirmation bias.

---

4. How I Send and Build Replies

1. I write my own thoughts and reasoning first.

2. I paste the forum post I’m replying to.

3. AI formats the reply into a clean, readable version of what I’ve already written.

4. I review, edit, and approve it before posting.

5. I never post anything I don’t understand or agree with.


 

By *ortyairCouple 27 weeks ago

Wallasey


"How do you explain some of the formatting issues which clearly shows its a piece of AI text?

Mrs x

That’s fair — the formatting probably does look like AI tidied it, because it did.

I write the arguments and gather the evidence myself, then use AI to polish and format it so it’s readable. Editing isn’t the same thing as writing.

I use AI as an accessibility aid, not a ghost writer.

If the way I express myself reads as too polished or too structured, that’s because I’m autistic and need tools to organise my thoughts clearly.

If anyone has a problem with that, that’s not a debate about honesty — it’s a debate about whether disabled people deserve equal access to communication. And I genuinely couldn’t give a flying fuck if that offends anyone’s aesthetic sensibilities."

Listen to yourself. You're claiming AI is just an editing tool. That's ridiculous. Word processors are editing tools: you type up your work, your research, re-type it, and then put it out into the world. That's your work, which you edited.

AI, in the form you use it, is an answer/opinion generator. You ask it a question and it gives you an answer, its opinion; it's not even yours. If you like it, you post it. Ffs, AI even asks if you want a deeper dive into the subject, and it's obvious that, where it matches your narrative, you will post that too. It's now at the point where none of that post is your work at all, yet you pass it off as your own.

Most of the time it appears as if you don't understand what the AI has written, and you trip yourself up on the 'small' stuff. For example, your use of the word 'universal' when in fact it was a singular issue. Or when you say gender and sex are both the same, but then, when your AI response alludes that they are, you'll argue that they are attached, two halves of the same coin sort of thing. When you are then challenged about having your cake and eating it, you're off on another artificially generated tangent.

As for your last paragraph, it's very typical of your debating style. Saying you're autistic and disabled, while knowing nothing about the people you are debating with, is not going to elicit empathy. It just looks a little selfish, asking for compassion without offering any yourself. How do you know who else is neurodivergent on here? There are bound to be posters with all sorts of disabilities, but they don't post about them unless there's a specific thread on the subject. If you are struggling with this then maybe you shouldn't post as much as you do, and you are doing really well, for someone with as many problems as you have, to be up there with the most prolific of posters... some of which is actually your own work.

Mrs x


 

By *ortyairCouple 27 weeks ago

Wallasey


"I mentioned it before.

I have a project in chatgpt which is exclusively for the forums.

Each thread had is own chat.

At the start of each chat I give it the rules if discussion which is below...

TheNerdyFemby’s Posting Rules, Style Guide & Accessibility Protocol

This is how I use AI in forum discussions and how my posts are structured.

It’s not a writing assistant in the creative sense — it’s an accessibility aid.

Because of chemo brain, ADHD and fatigue, structuring long arguments can be difficult even when I already know exactly what I want to say.

AI helps with clarity, formatting, and structure, but the reasoning and evidence are always mine.

Editing isn’t the same as writing.

---

1. Evidence Rules

Rule 1 — Evidence over agreement

Do not agree with me just because it’s what I want to hear.

Agreement should only follow when the available evidence supports the claim.

Acceptable evidence includes official statistics (ONS, GOV.UK, UN, WHO), peer-reviewed research, verified FOI responses, and reputable news sources.

Rule 2 — Verifiable and accessible sources

Only cite information that can be independently verified. No paywalled, broken, or inaccessible material.

Rule 3 — Respect the forum’s code of conduct

Stay within site rules. No personal attacks or inflammatory wording.

Rule 4 — Keep tone factual and language neutral

Focus on facts and reasoning, not rhetoric or emotional appeals.

Rule 5 — Transparency about uncertainty

When evidence is partial or suggestive, say so clearly.

Rule 6 — Update positions when better data emerges

If new credible information changes the picture, I update accordingly.

Rule 7 — Source presentation within platform limits

Name the publication, title, and quote the relevant passage. Link only if the forum allows it.

Rule 8 — Correct misinformation constructively

Recognise accurate points first, then calmly correct errors with sourced facts.

Rule 9 — Don’t feed provocation

If someone is arguing in bad faith, I give one factual reply and disengage.

---

2. Tone & Style Guide

Calm, factual, and direct.

No melodrama or posturing.

Respond only to what’s actually been said.

Avoid speculation, assumptions, or emotional language.

Sarcasm and dry humour are fine when used to expose hypocrisy or absurdity.

Precision first: define terms, avoid vague moralising.

Expose logical fallacies, but don’t lecture.

Correct misrepresentations of my position immediately.

If opponents get personal: “That’s not an argument, it’s an insult.”

If they rely on emotion: “Feelings aren’t evidence.”

I write for the readers, not the person arguing.

Core principles: truth over comfort, clarity over persuasion, integrity over performance.

---

3. How I Use AI

I use AI as an accessibility aid, not a ghost writer.

If the way I express myself reads as too polished or too structured, that’s because I’m autistic and need tools to organise my thoughts clearly.

The AI I use is specifically instructed not to invent evidence or agree with me just because it fits my view.

It prioritises verified data, logical consistency, and transparency.

I aim for truth, not confirmation bias.

---

4. How I Send and Build Replies

1. I write my own thoughts and reasoning first.

2. I paste the forum post I’m replying to.

3. AI formats the reply into a clean, readable version of what I’ve already written.

4. I review, edit, and approve it before posting.

5. I never post anything I don’t understand or agree with.

"

This is an AI post,

Mrs x


 

By (user no longer on site) 27 weeks ago


"Listen to yourself. You're claiming AI is just an editing tool. That's just ridiculous. Word processors are editing tools: you type up your work, your research, re-type it and then put it out into the world. That's your work, that you edited.

AI, in the form you use it, is an answer/opinion generator. You ask it a question and it gives you an answer, its opinion; it's not even yours. If you like it, you post it. Ffs, AI even asks if you want a deeper dive into the subject, and it's obvious that, where it matches your narrative, you will post that too. That's now at the point where none of that post is your work at all, yet you pass it off as your own.

Most of the time it appears as if you don't understand what the AI's written, and you trip yourself up on the 'small' stuff. For example, your use of the word 'universal' when in fact it was a singular issue. Or when you say gender and sex are both the same, but then, when your AI response alludes that they are, you'll argue that they are attached, two halves of the same coin sort of thing. When you are then challenged on this, about having your cake and eating it, you're off on another artificially generated tangent.

As for your last paragraph, it's very typical of your debating style. Saying you're autistic and disabled, without knowing anything about the people you are debating with, is not going to elicit empathy. It just looks a little selfish, asking for compassion without offering any yourself. How do you know who else is neurodivergent on here? There are bound to be posters with all sorts of disabilities, but they aren't posting about them until there's a specific thread about it. If you are struggling with this then maybe you shouldn't post as much as you do, and you are doing really well, for someone with so many problems, to be up there with the most prolific of posters... some of which is actually your own work.

Mrs x"

Sweetie, I don’t need your empathy.

And I definitely don’t need your condescension.

It’s clear you don’t like how I use AI — even though your understanding of it is completely off-base.

As I’ve already said, I don’t care if someone has a problem with that.

There’s absolutely nothing forcing you to engage with my posts if they aren’t to your liking.

Scroll past or ignore — whatever works for you. But don’t confuse your dislike for proof that I’m wrong.

The data stands whether you approve of my formatting or not.


By *ortyairCouple 27 weeks ago

Wallasey


"That's a belter of a thread. What was the purpose of saying something was universally panned by the relevant professional bodies in the field, when in fact it was only one, and they had to change their stance when they realised how wrong they'd got it?

I think it's how you ask the AI things; the questions you ask aren't objective, they seem very subjective.

Mrs x

If you’re asking about why I said what I did regarding the Cass Review, that belongs in the Cass thread.

This one’s just about the use of AI."

Someone doesn't know the forum rules. Quotes from other threads can be used and often are. Ask HPT what they think about that haha,

Mrs x


By (user no longer on site) 27 weeks ago


"This is an AI post,

Mrs x"

Yes, it is — I asked ChatGPT to take my own rules and notes and format them into a single post so I could share them clearly here.

I’m not sure what “win” you think that is. It’s literally a formatting job.

The content, structure, and reasoning were all mine — it just made it tidy enough to read.

If your point is that I’m transparent about using tools to make communication accessible, then sure, guilty as charged.


By (user no longer on site) 27 weeks ago


"Someone doesn't know the forum rules. Quotes from other threads can be used and often are. Ask HPT what they think about that haha,

Mrs x"

Of course quotes can be used — that’s not what I said.

I’m saying I’m not going to run the same debate in two different threads.

If the question is about the Cass Review, it belongs in the Cass Review thread.

This one’s for discussing AI use, not re-litigating another topic.


By *ortyairCouple 27 weeks ago

Wallasey


"Listen to yourself. You're claiming AI is just an editing tool. That's just ridiculous. Word processors are editing tools: you type up your work, your research, re-type it and then put it out into the world. That's your work, that you edited.

AI, in the form you use it, is an answer/opinion generator. You ask it a question and it gives you an answer, its opinion; it's not even yours. If you like it, you post it. Ffs, AI even asks if you want a deeper dive into the subject, and it's obvious that, where it matches your narrative, you will post that too. That's now at the point where none of that post is your work at all, yet you pass it off as your own.

Most of the time it appears as if you don't understand what the AI's written, and you trip yourself up on the 'small' stuff. For example, your use of the word 'universal' when in fact it was a singular issue. Or when you say gender and sex are both the same, but then, when your AI response alludes that they are, you'll argue that they are attached, two halves of the same coin sort of thing. When you are then challenged on this, about having your cake and eating it, you're off on another artificially generated tangent.

As for your last paragraph, it's very typical of your debating style. Saying you're autistic and disabled, without knowing anything about the people you are debating with, is not going to elicit empathy. It just looks a little selfish, asking for compassion without offering any yourself. How do you know who else is neurodivergent on here? There are bound to be posters with all sorts of disabilities, but they aren't posting about them until there's a specific thread about it. If you are struggling with this then maybe you shouldn't post as much as you do, and you are doing really well, for someone with so many problems, to be up there with the most prolific of posters... some of which is actually your own work.

Mrs x

Sweetie, I don’t need your empathy.

And I definitely don’t need your condescension.

It’s clear you don’t like how I use AI — even though your understanding of it is completely off-base.

As I’ve already said, I don’t care if someone has a problem with that.

There’s absolutely nothing forcing you to engage with my posts if they aren’t to your liking.

Scroll past or ignore — whatever works for you. But don’t confuse your dislike for proof that I’m wrong.

The data stands whether you approve of my formatting or not."

It's not formatting; that's a word-processor type of function.

You put your work in, edit it, format it, and your work comes out.

You ask AI a question and it gives you an answer; that is the AI app's work or opinion, it's not yours.

You have hit rock bottom with this; unfortunately, you are carrying on digging haha,

Mrs x


By (user no longer on site) 27 weeks ago


"It's not formatting; that's a word-processor type of function.

You put your work in, edit it, format it, and your work comes out.

You ask AI a question and it gives you an answer; that is the AI app's work or opinion, it's not yours.

You have hit rock bottom with this; unfortunately, you are carrying on digging haha,

Mrs x"

You keep describing how you think I use it, not how I actually do.

I don’t ask AI for opinions or ready-made answers — I give it my reasoning, my evidence, and my draft text, then ask it to restructure that into something readable.

That’s not “rock bottom,” it’s workflow.

If you prefer to keep misunderstanding it, that’s on you — but it doesn’t make you right.


By *ortyairCouple 27 weeks ago

Wallasey


"Someone doesn't know the forum rules. Quotes from other threads can be used and often are. Ask HPT what they think about that haha,

Mrs x

Of course quotes can be used — that’s not what I said.

I’m saying I’m not going to run the same debate in two different threads.

If the question is about the Cass Review, it belongs in the Cass Review thread.

This one’s for discussing AI use, not re-litigating another topic."

It's not the same debate. I'm just pointing out that you will say anything to try and win a point: saying something is 'universal' when it's clearly not, or saying two things are totally different one minute, then saying that they are indeed connected the next. You are tying yourself up in knots of your own making.

Mrs x


By (user no longer on site) 27 weeks ago


"It's not the same debate. I'm just pointing out that you will say anything to try and win a point: saying something is 'universal' when it's clearly not, or saying two things are totally different one minute, then saying that they are indeed connected the next. You are tying yourself up in knots of your own making.

Mrs x"

You keep saying “I’ll say anything to win,” but all I’ve done is provide sources and clarify definitions.

Disagreeing with you isn’t contradiction — it’s consistency based on evidence.

If you actually want to debate whether the Cass Review was universally endorsed, do it in the Cass thread.

This one’s about AI use, and I’m not going to keep mixing the two.

This is my last post in this thread about that argument.


By *ortyairCouple 27 weeks ago

Wallasey


"It's not formatting; that's a word-processor type of function.

You put your work in, edit it, format it, and your work comes out.

You ask AI a question and it gives you an answer; that is the AI app's work or opinion, it's not yours.

You have hit rock bottom with this; unfortunately, you are carrying on digging haha,

Mrs x

You keep describing how you think I use it, not how I actually do.

I don’t ask AI for opinions or ready-made answers — I give it my reasoning, my evidence, and my draft text, then ask it to restructure that into something readable.

That’s not “rock bottom,” it’s workflow.

If you prefer to keep misunderstanding it, that’s on you — but it doesn’t make you right."

I'm not misunderstanding anything. You post stuff that's so full of data, quotes and the like, it's obvious that you've asked a question.

Just the speed with which you respond is evidence of this. On the other thread, which cannot be named apparently, you post reams of info about tons of medical bodies. If you typed that up yourself, it would take an age. And yet when a part of your post is challenged, you respond very quickly, with loads more data and opinions, facts and the like.

Now either you've spent days writing and preparing potential posts, or you are getting your arguments from somewhere. Hmm, where could that be, I wonder.

It wouldn't be so bad if you just said you asked AI, but to try and pass it off as your own work, tut-tut.

Mrs x


By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"...you post reams of info about tons of medical bodies. If you typed that up yourself, it would take an age."

Don't be unfair. 60-90 minutes of time on that equals about 3-5 minutes per report/statement (including the few that were discarded as biased). That's plenty of time to read, analyse, summarise, form an opinion, create a meta analysis and then frame a response, for each of seventeen articles. It's called efficiency.

It's amazing how we can basically download information to our brains nowadays, and understand and analyse so much in so little time. Get with the 21st century.


By (user no longer on site) 27 weeks ago


"I'm not misunderstanding anything. You post stuff that's so full of data, quotes and the like, it's obvious that you've asked a question.

Just the speed with which you respond is evidence of this. On the other thread, which cannot be named apparently, you post reams of info about tons of medical bodies. If you typed that up yourself, it would take an age. And yet when a part of your post is challenged, you respond very quickly, with loads more data and opinions, facts and the like.

Now either you've spent days writing and preparing potential posts, or you are getting your arguments from somewhere. Hmm, where could that be, I wonder.

It wouldn't be so bad if you just said you asked AI, but to try and pass it off as your own work, tut-tut.

Mrs x"

You keep assuming “speed” equals “outsourcing.”

It doesn’t. It equals preparation.

This isn’t the first time I’ve had these debates — or the second, or the third.

The Cass Report alone, I’ve probably discussed half a dozen times before.

So while I spent about an hour putting that thread together, I already had most of the data — or knew exactly where to find it.

AI just helps me keep it organised and legible — it doesn’t create the content.

If anything, its biggest job is stopping me from swearing at people.

If you want to keep imagining otherwise, that’s fine, but repeating it doesn’t make it true.


By *ortyairCouple 27 weeks ago

Wallasey


"It's not the same debate. I'm just pointing out that you will say anything to try and win a point: saying something is 'universal' when it's clearly not, or saying two things are totally different one minute, then saying that they are indeed connected the next. You are tying yourself up in knots of your own making.

Mrs x

You keep saying “I’ll say anything to win,” but all I’ve done is provide sources and clarify definitions.

Disagreeing with you isn’t contradiction — it’s consistency based on evidence.

If you actually want to debate whether the Cass Review was universally endorsed, do it in the Cass thread.

This one’s about AI use, and I’m not going to keep mixing the two.

This is my last post in this thread about that argument."

I don't have to do what you suggest, for two great reasons.

Firstly, I'm not saying you said it was 'universally' endorsed; I'm saying that you said it was 'universally' panned by the medical profession, when in fact only one body, the BMA, didn't endorse it. Not really 'universal', is it?

And secondly, I can't be arsed...

Mrs x


By *ortyairCouple 27 weeks ago

Wallasey


"I'm not misunderstanding anything. You post stuff that's so full of data, quotes and the like, it's obvious that you've asked a question.

Just the speed with which you respond is evidence of this. On the other thread, which cannot be named apparently, you post reams of info about tons of medical bodies. If you typed that up yourself, it would take an age. And yet when a part of your post is challenged, you respond very quickly, with loads more data and opinions, facts and the like.

Now either you've spent days writing and preparing potential posts, or you are getting your arguments from somewhere. Hmm, where could that be, I wonder.

It wouldn't be so bad if you just said you asked AI, but to try and pass it off as your own work, tut-tut.

Mrs x

You keep assuming “speed” equals “outsourcing.”

It doesn’t. It equals preparation.

This isn’t the first time I’ve had these debates — or the second, or the third.

The Cass Report alone, I’ve probably discussed half a dozen times before.

So while I spent about an hour putting that thread together, I already had most of the data — or knew exactly where to find it.

AI just helps me keep it organised and legible — it doesn’t create the content.

If anything, its biggest job is stopping me from swearing at people.

If you want to keep imagining otherwise, that’s fine, but repeating it doesn’t make it true."

Your use of emotive language is an indicator that you are not just using AI as an editing tool. AI itself states that, when used like that, one of its functions is the removal of such language.

You are definitely asking questions of it, just going by the technical, data-driven responses you give. Nobody can remember reams and reams of data like that. And when others look this up, the responses they are getting from their AI are almost identical.

Mrs x


By *ortyairCouple 27 weeks ago

Wallasey


"...you post reams of info about tons of medical bodies. If you typed that up yourself, it would take an age.

Don't be unfair. 60-90 minutes of time on that equals about 3-5 minutes per report/statement (including the few that were discarded as biased). That's plenty of time to read, analyse, summarise, form an opinion, create a meta analysis and then frame a response, for each of seventeen articles. It's called efficiency.

It's amazing how we can basically download information to our brains nowadays, and understand and analyse so much in so little time. Get with the 21st century."

OK, slap the stupidity out of me.

I wish AI had been around 25 years ago; I could have learnt Aquinas in minutes. Jurisprudence would have been a piece of piss haha, Mrs x


By (user no longer on site) 27 weeks ago


"Your use of emotive language is an indicator that you are not just using AI as an editing tool. AI itself states that, when used like that, one of its functions is the removal of such language.

You are definitely asking questions of it, just going by the technical, data-driven responses you give. Nobody can remember reams and reams of data like that. And when others look this up, the responses they are getting from their AI are almost identical.

Mrs x"

You seem determined to tell me what I’m doing instead of listening to what I’ve actually said.

If you want to believe that no one can remember or source data quickly without outsourcing it, that’s your limitation, not mine.

I’m not going to keep explaining the same thing — especially to someone who’s already decided their conclusion.


By *UGGYBEAR2015Man 27 weeks ago

BRIDPORT

I was using AI over forty years ago, Canadian Holstein, and very expensive.


By *otMe66Man 27 weeks ago

Terra Firma

Studies show that excessive use of AI under-utilises the brain and the ability to reason. Combined with the lack of retained information, it is actually making heavy users totally reliant.

Another interesting side effect of heavy use is the bias shift that happens in those users. They start to believe the output is so good that it would fool anyone reading it into believing they wrote it. This ties into an old saying: a little knowledge is a dangerous thing. The information provided by AI is not understood by the user, yet it sounds plausible enough to them to be 100% correct, and if they believe it, you will too.

Look it up and see what AI has to say...

AI can be a threat to independent thought and is a genuine risk.


By (user no longer on site) 27 weeks ago


"Studies show that excessive use of AI under-utilises the brain and the ability to reason. Combined with the lack of retained information, it is actually making heavy users totally reliant.

Another interesting side effect of heavy use is the bias shift that happens in those users. They start to believe the output is so good that it would fool anyone reading it into believing they wrote it. This ties into an old saying: a little knowledge is a dangerous thing. The information provided by AI is not understood by the user, yet it sounds plausible enough to them to be 100% correct, and if they believe it, you will too.

Look it up and see what AI has to say...

AI can be a threat to independent thought and is a genuine risk."

I agree — used improperly, AI has a lot of pitfalls.

It can easily encourage lazy thinking or make people overconfident in what they’re reading.

Like any tool, the outcome depends on how responsibly it’s used.


By *otMe66Man 27 weeks ago

Terra Firma


"Studies show that excessive use of AI under-utilises the brain and the ability to reason. Combined with the lack of retained information, it is actually making heavy users totally reliant.

Another interesting side effect of heavy use is the bias shift that happens in those users. They start to believe the output is so good that it would fool anyone reading it into believing they wrote it. This ties into an old saying: a little knowledge is a dangerous thing. The information provided by AI is not understood by the user, yet it sounds plausible enough to them to be 100% correct, and if they believe it, you will too.

Look it up and see what AI has to say...

AI can be a threat to independent thought and is a genuine risk.

I agree — used improperly, AI has a lot of pitfalls.

It can easily encourage lazy thinking or make people overconfident in what they’re reading.

Like any tool, the outcome depends on how responsibly it’s used."

Exactly.

If you don't know how multiplication works, or how units work, and you put 100 x 10 into a calculator and it presents the answer as 1001, you wouldn't know it was wrong. You would trust the calculator, because that's what calculators do.

AI is creating the same problems, but on a global scale, covering all manner of subjects. The sudden emergence of experts in multiple fields is a very recent phenomenon.


By (user no longer on site) 27 weeks ago


"Exactly.

If you don't know how multiplication works, or how units work, and you put 100 x 10 into a calculator and it presents the answer as 1001, you wouldn't know it was wrong. You would trust the calculator, because that's what calculators do.

AI is creating the same problems, but on a global scale, covering all manner of subjects. The sudden emergence of experts in multiple fields is a very recent phenomenon."

I agree those are genuine issues.

But I wouldn’t say it’s true of every new voice that’s appeared.

Some are people who’ve always had their own thoughts — they just lacked accessible ways to express them before.


By *otMe66Man 27 weeks ago

Terra Firma


"Exactly.

If you don't know how multiplication works, or how units work, and you put 100 x 10 into a calculator and it presents the answer as 1001, you wouldn't know it was wrong. You would trust the calculator, because that's what calculators do.

AI is creating the same problems, but on a global scale, covering all manner of subjects. The sudden emergence of experts in multiple fields is a very recent phenomenon.

I agree those are genuine issues.

But I wouldn’t say it’s true of every new voice that’s appeared.

Some are people who’ve always had their own thoughts — they just lacked accessible ways to express them before."

It isn't the expression that is the issue, or the US spellings and overuse of dashes. It is passing off information as fact and holding onto that belief because it was generated by AI.

That is where the danger is.


By (user no longer on site) 27 weeks ago


"It isn't the expression that is the issue, or the US spellings and overuse of dashes. It is passing off information as fact and holding onto that belief because it was generated by AI.

That is where the danger is."

Absolutely. But more than a few people are conflating the two — using AI to express a thought isn’t the same as using it to create one.

You’re right that the real danger comes when people stop questioning what it gives them and start treating it as unquestionable fact.

That’s not an AI problem so much as a critical-thinking one.


By *ortyairCouple 27 weeks ago

Wallasey


"Exactly.

If you don't know how multiplication works, or how units work, and you put 100 x 10 into a calculator and it presents the answer as 1001, you wouldn't know it was wrong. You would trust the calculator, because that's what calculators do.

AI is creating the same problems, but on a global scale, covering all manner of subjects. The sudden emergence of experts in multiple fields is a very recent phenomenon.

I agree those are genuine issues.

But I wouldn’t say it’s true of every new voice that’s appeared.

Some are people who’ve always had their own thoughts — they just lacked accessible ways to express them before.

It isn't the expression that is the issue, or the US spellings and overuse of dashes. It is passing off information as fact and holding onto that belief because it was generated by AI.

That is where the danger is."

Couldn't agree more,

Mrs x


By *ostindreamsMan 27 weeks ago

London

AI is good at summarising the facts in an article or fixing typos in your writing. But if you ask AI a question about a political or social issue, it hallucinates a lot.

More often than not, I have seen it make a statement with a source linked to it. If you open the source, you will see that it doesn't say anything the AI says.

My guess is that lots of social media content was used to train AI. Social media is rife with people doing this: making a statement and sharing a link as a source even though the source doesn't really say what they are saying. They do this hoping that others won't open the source and read it.


By *otMe66Man 27 weeks ago

Terra Firma


"AI is good at summarising the facts in an article or fixing typos in your writing. But if you ask AI a question about a political or social issue, it hallucinates a lot.

More often than not, I have seen it make a statement with a source linked to it. If you open the source, you will see that it doesn't say anything the AI says.

My guess is that lots of social media content was used to train AI. Social media is rife with people doing this: making a statement and sharing a link as a source even though the source doesn't really say what they are saying. They do this hoping that others won't open the source and read it."

Exactly! Exposing AI use is easy: ask it to have an opinion...

My guess is that comments are being pasted into the tool of choice and the response comes from that. A recipe for disaster, but bullet-proof on the other side of the fence.


By *ortyairCouple 27 weeks ago

Wallasey


"It isn't the expression that is the issue, or the US spellings and overuse of dashes. It is passing off information as fact and holding onto that belief because it was generated by AI.

That is where the danger is.

Absolutely. But more than a few people are conflating the two — using AI to express a thought isn’t the same as using it to create one.

You’re right that the real danger comes when people stop questioning what it gives them and start treating it as unquestionable fact.

That’s not an AI problem so much as a critical-thinking one."

So, if you're an advocate of critical thinking, why use AI at all? Haha, cake and eat it again, Mrs x


By (user no longer on site) 27 weeks ago


"So, if you're an advocate of critical thinking, why use AI at all? Haha, cake and eat it again, Mrs x"

That’s not a counter-argument, it’s a personal swipe.

Asking why someone uses a tool doesn’t invalidate what was said about critical thinking — it just avoids addressing it.

You can criticise AI use all you like, but it doesn’t change the point: tools don’t remove the need for thought; they expose who’s doing it and who isn’t.


By *ortyairCouple 27 weeks ago

Wallasey


"So, if you're an advocate of critical thinking, why use AI at all? Haha, cake and eat it again, Mrs x

That’s not a counter-argument, it’s a personal swipe.

Asking why someone uses a tool doesn’t invalidate what was said about critical thinking — it just avoids addressing it.

You can criticise AI use all you like, but it doesn’t change the point: tools don’t remove the need for thought; they expose who’s doing it and who isn’t."

Your use of AI is extensive, not to edit anything, and you are not genuine about your use. But then again, it's a pattern with you,

Mrs x


By (user no longer on site) 27 weeks ago


"Your use of AI is extensive, not to edit anything, and you are not genuine about your use. But then again, it's a pattern with you,

Mrs x"

Personal attack, not an argument.


By *ctionSandwichCouple 27 weeks ago

Newcastle under Lyme

Toddler mentality.


By *ortyairCouple 27 weeks ago

Wallasey


"Your use of AI is extensive, not to edit anything, and you are not genuine about your use. But then again, it's a pattern with you,

Mrs x

Personal attack, not an argument."

How is that a personal attack? From observation it's obvious you use AI, and I'm far from being the only one to say this. Yet you continue to say that's not true. That's a pattern,

Mrs x


By (user no longer on site) 27 weeks ago


"How is that a personal attack? From observation it's obvious you use AI, and I'm far from being the only one to say this. Yet you continue to say that's not true. That's a pattern,

Mrs x"

Calling me “not genuine” is still a personal attack unless you can back it with verifiable evidence.

I haven’t hidden my use of AI — in fact, in this very thread I’ve been incredibly detailed about how I use it.

Observation isn’t proof; it’s interpretation.

If you have facts, present them. Otherwise, it’s just another assumption.


By *ortyairCouple 27 weeks ago

Wallasey


"How is that a personal attack? From observation it's obvious you use AI, and I'm far from being the only one to say this. Yet you continue to say that's not true. That's a pattern,

Mrs x

Calling me “not genuine” is still a personal attack unless you can back it with verifiable evidence.

I haven’t hidden my use of AI — in fact, in this very thread I’ve been incredibly detailed about how I use it.

Observation isn’t proof; it’s interpretation.

If you have facts, present them. Otherwise, it’s just another assumption."

On another thread, you tried to say that you mistrusted a source you quoted from The Guardian because it may have been anti-trans. You stated that, after I told you about this source in The Guardian, you were able to use it to cross-reference information from the BMA.

But this is not true, is it? Because the source you say allowed you to find this information came from The New Statesman. You even said that in a post on the other thread.

But you make up some cock and bull story about The Guardian being anti-trans. Why? It was dismissed by you earlier when I brought it up. But now that there are several sources I've quoted, you cannot deny what I've said, and so the deflection starts. That's not genuine.

Mrs x


By (user no longer on site) 27 weeks ago

Asked and answered in the other thread.

You’re now mixing two separate discussions just to repeat the same accusation.

For clarity: I explained exactly why I verified the information and how I reached my conclusion.

If you want to debate that further, do it where it belongs — not here.

This thread’s about AI use, not the Cass Review.


By *ortyairCouple 27 weeks ago

Wallasey


"Asked and answered in the other thread.

You’re now mixing two separate discussions just to repeat the same accusation.

For clarity: I explained exactly why I verified the information and how I reached my conclusion.

If you want to debate that further, do it where it belongs — not here.

This thread’s about AI use, not the Cass Review."

You asked for facts but you don't really want them. Got to admit, deflection and dishonesty over two threads simultaneously is quite impressive, but you need a great memory to pull it off, and it looks like you haven't. Just hold your hands up, say "my bad" and we can crack on with the other points your AI wanted you to make,

Mrs x


By (user no longer on site) 27 weeks ago


"You asked for facts but you don't really want them. Got to admit, deflection and dishonesty over two threads simultaneously is quite impressive, but you need a great memory to pull it off, and it looks like you haven't. Just hold your hands up, say "my bad" and we can crack on with the other points your AI wanted you to make,

Mrs x"

Whatever you say, Mrs x.

I’m tired of repeating the same thing across multiple threads just for you to twist it into yet another attack.

Reply privately, Reply in forum +quote or View forums list

 

By *ortyairCouple 27 weeks ago

Wallasey


"You asked for facts but you don't really want them. Got to admit, deflection and dishonesty over two threads simultaneously is quite impressive, but you need a great memory to pull it off and it looks like you haven't. Just hold your hands up, say "my bad" and we can crack on with the other points your AI wanted you to make,

Mrs x

Whatever you say, Mrs x.

I’m tired of repeating the same thing across multiple threads just for you to twist it into yet another attack.

"

Deflect, spin and dig; maybe you should try debating and see how you get on,

Mrs x

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago


"You asked for facts but you don't really want them. Got to admit, deflection and dishonesty over two threads simultaneously is quite impressive, but you need a great memory to pull it off and it looks like you haven't. Just hold your hands up, say "my bad" and we can crack on with the other points your AI wanted you to make,

Mrs x

Whatever you say, Mrs x.

I’m tired of repeating the same thing across multiple threads just for you to twist it into yet another attack.

Deflect, spin and dig, maybe you should try debating and see how you get on,

Mrs x"

Whatever you say, Mrs x.

I’m tired of repeating the same thing across multiple threads just for you to twist it into yet another attack.

Reply privately, Reply in forum +quote or View forums list

 

By *ortyairCouple 27 weeks ago

Wallasey


"You asked for facts but you don't really want them. Got to admit, deflection and dishonesty over two threads simultaneously is quite impressive, but you need a great memory to pull it off and it looks like you haven't. Just hold your hands up, say "my bad" and we can crack on with the other points your AI wanted you to make,

Mrs x

Whatever you say, Mrs x.

I’m tired of repeating the same thing across multiple threads just for you to twist it into yet another attack.

Deflect, spin and dig, maybe you should try debating and see how you get on,

Mrs x

Whatever you say, Mrs x.

I’m tired of repeating the same thing across multiple threads just for you to twist it into yet another attack."

I'm not attacking anything, or anyone. You asked for evidence for my claims, which I've given. Falsely claiming victimhood is another tactic to deflect from answering why you may be seen like this,

Mrs x

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago


"I'm not attacking anything, or anyone. You asked for evidence for my claims, which I've given. Falsely claiming victimhood is another tactic to deflect from answering why you may be seen like this,

Mrs x"

If you’re genuinely done attacking, then stop trying to reframe everything as “deflection.”

I’ve already answered your questions — repeatedly and in detail.

At this point, you’re just rewording the same accusation and calling it debate.

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago


"If you’re genuinely done attacking, then stop trying to reframe everything as “deflection.”

I’ve already answered your questions — repeatedly and in detail.

At this point, you’re just rewording the same accusation and calling it debate."

Oh and yeah I let the ai write that one

Reply privately, Reply in forum +quote or View forums list

 

By *ortyairCouple 27 weeks ago

Wallasey


"If you’re genuinely done attacking, then stop trying to reframe everything as “deflection.”

I’ve already answered your questions — repeatedly and in detail.

At this point, you’re just rewording the same accusation and calling it debate.

Oh and yeah I let the ai write that one"

Haha, Mrs x

Reply privately, Reply in forum +quote or View forums list

 

By *ctionSandwichCouple 27 weeks ago

Newcastle under Lyme

AI can't explain why the USA is pissed off with Venezuela right now because it doesn't understand nuance or which historical facts are relevant. It will only parrot what people online, stupid ones at that, are talking about.

Reply privately, Reply in forum +quote or View forums list

 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"AI can't explain why the USA is pissed off with Venezuela right now because it doesn't understand nuance or which historical facts are relevant. It will only parrot what people online, stupid ones at that, are talking about."

Garbage in, garbage out.

Reply privately, Reply in forum +quote or View forums list

 

By *ctionSandwichCouple 27 weeks ago

Newcastle under Lyme


"AI can't explain why the USA is pissed off with Venezuela right now because it doesn't understand nuance or which historical facts are relevant. It will only parrot what people online, stupid ones at that, are talking about.

Garbage in, garbage out."

Exactly. Those AI summaries Google are starting to do on searches are the worst.

Reply privately, Reply in forum +quote or View forums list

 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"AI can't explain why the USA is pissed off with Venezuela right now because it doesn't understand nuance or which historical facts are relevant. It will only parrot what people online, stupid ones at that, are talking about.

Garbage in, garbage out.

Exactly. Those AI summaries Google are starting to do on searches are the worst."

And we now have GEO (generative engine optimisation) to contend with. Bad actors and big companies are piling money into shaping AI responses.

Reply privately, Reply in forum +quote or View forums list

 

By *ornucopiaMan 27 weeks ago

Bexley


"

...

Exactly. Those AI summaries Google are starting to do on searches are the worst."

Furthermore, they are the main reason why almost everybody has started talking about AI.

AI has been around for ages in some form or other but now it is in every social media user's vocabulary and gets used as the number one explanation for almost everything.

Thanks a million, Google!

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago

Yeah, but it's really easy to make it so it doesn't show up.

Just add

-AI

after your search term

Reply privately, Reply in forum +quote or View forums list

 

By *ortyairCouple 27 weeks ago

Wallasey


"AI can't explain why the USA is pissed off with Venezuela right now because it doesn't understand nuance or which historical facts are relevant. It will only parrot what people online, stupid ones at that, are talking about.

Garbage in, garbage out."

My hubby quotes that all the time, about almost everything, but I'm sure it's a computer programming thing he learnt from the computer games programming he used to do,

Mrs x

Reply privately, Reply in forum +quote or View forums list

 

By *ornucopiaMan 27 weeks ago

Bexley


"

Garbage in, garbage out. My hubby quotes that all the time, about almost everything, but I'm sure it's a computer programming thing he learnt from the computer games programming he used to do,

Mrs x"

The GIGO acronym goes back yonks.

I used to regularly order in a 16mm film of that name to show to computer studies students in pre-Windows days. Got to be around 30 years ago...

Reply privately, Reply in forum +quote or View forums list

 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"

Garbage in, garbage out. My hubby quotes that all the time, about almost everything, but I'm sure it's a computer programming thing he learnt from the computer games programming he used to do,

Mrs x

The GIGO acronym goes back yonks.

I used to regularly order in a 16mm film of that name to show to computer studies students in pre Windows days. Got to be around 30 years ago..."

https://en.wikipedia.org/wiki/Garbage_in,_garbage_out

"...computers cannot think for themselves, and that "sloppily programmed" inputs inevitably lead to incorrect outputs."

In the case of GenAI, inputs include training data sets and user prompts (and, arguably, the programming itself).

Reply privately, Reply in forum +quote or View forums list

 

By *ortyairCouple 27 weeks ago

Wallasey


"

Garbage in, garbage out. My hubby quotes that all the time, about almost everything, but I'm sure it's a computer programming thing he learnt from the computer games programming he used to do,

Mrs x

The GIGO acronym goes back yonks.

I used to regularly order in a 16mm film of that name to show to computer studies students in pre Windows days. Got to be around 30 years ago..."

He's been into computers since he was a young lad. He actually worked on stuff for Marconi, on torpedoes at Neston, before he even left school. Then at 16 he started writing computer games and was one of the top guys when the Spectrum was the top home computer.

He has loads of these daft sayings and still uses them.

He's a super geek really; sometimes I wonder whether he'd prefer to have his hands on a hot blonde or a dusty circuit board. I think the circuit board might win actually haha,

Mrs x

Reply privately, Reply in forum +quote or View forums list

 

By *ornucopiaMan 27 weeks ago

Bexley


"

He's been into computers since he was a young lad. He actually worked on stuff for Marconi, on torpedoes at Neston, before he even left school...

Mrs x"

For a moment, I thought your hubby had worked for Guglielmo Marconi but when I looked up his age on your profile I realised that I was jumping to conclusions!

Reply privately, Reply in forum +quote or View forums list

 

By *ortyairCouple 27 weeks ago

Wallasey


"

He's been into computers since he was a young lad. He actually worked on stuff for Marconi, on torpedoes at Neston, before he even left school...

Mrs x

For a moment, I thought your hubby had worked for Guglielmo Marconi but when I looked up his age on your profile I realised that I was jumping to conclusions!"

Haha, I'll tell him that, it won't make his day 🤣 Mrs x

Reply privately, Reply in forum +quote or View forums list

 

By *ennineTopMan 27 weeks ago

York

Frankly, I don't care if someone is using AI as an assistant or even as a complete replacement for their own research and thinking.

I know enough about AI to not regard it as being particularly clever or frightening.

I'm interested in ideas and arguments. It doesn't matter whether ideas and arguments come from original thought, the Daily Express, a well-regarded book, some conspiracy theory or from a LLM.

It's the arguments themselves that matter and deflection away from this on the grounds that the source might be AI is basically just a modern form of fallacy similar to ad hominem.

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago


"Frankly, I don't care if someone is using AI as an assistant or even as a complete replacement for their own research and thinking.

I know enough about AI to not regard it as being particularly clever or frightening.

I'm interested in ideas and arguments. It doesn't matter whether ideas and arguments come from original thought, the Daily Express, a well-regarded book, some conspiracy theory or from a LLM.

It's the arguments themselves that matter and deflection away from this on the grounds that the source might be AI is basically just a modern form of fallacy similar to ad hominem."

Absolutely. The term I’ve seen used for it most widely is ad mechanum — attacking the use of a machine rather than engaging with the argument itself. It’s the same logical flaw as ad hominem, just updated for the 21st century.

Reply privately, Reply in forum +quote or View forums list

 

By *ostindreamsMan 27 weeks ago

London


"Frankly, I don't care if someone is using AI as an assistant or even as a complete replacement for their own research and thinking.

"

Using AI as a complete replacement for their own research and thinking is not good for debate and wastes everyone's time.

What's the point of someone sharing some AI slop if they haven't even thought through its content? AI hallucinates a lot. It makes numerous mistakes. If someone just copy-pastes whatever AI vomits, others who read it end up wasting their time trying to find out whether what it said is true, only to find that in many cases it's not.

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago


"Using AI as a complete replacement for their own research and thinking is not good for debate and wastes everyone's time.

What's the point of someone sharing some AI slop if they haven't even thought through its content? AI hallucinates a lot. It makes numerous mistakes. If someone just copy-pastes whatever AI vomits, others who read it end up wasting their time trying to find out whether what it said is true, only to find that in many cases it's not."

But everything you just said is equally true of posts made by people.

That’s why you engage with the content, not the source.

Attacking the source is a well-established logical fallacy.

Reply privately, Reply in forum +quote or View forums list

 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London

It probably all comes down to why people engage in forum posting to begin with.

Could one simply chat with a bot and ask it to present many personas and arguments for the same sense of conversation?

Reply privately, Reply in forum +quote or View forums list

 

By *ostindreamsMan 27 weeks ago

London


"

But everything you just said is equally true of posts made by people.

"

Not really. When I make a post, I understand the logic behind the argument I am making. People are clearly posting AI bullshit without even reading it.

When I make an argument and mention a source, I check that the source does support what I am saying. People aren't checking that either when they copy paste AI bullshit.


"

That’s why you engage with the content, not the source.

"

When you copy paste AI slop, both the content and source end up being nonsensical.

People make a statement X in the post and say that Y is the source. I check the source Y to find that it doesn't say anything that the poster said.

When I question the poster about it and say that statement X is a lie, the poster claims that they never made that statement. So the poster clearly didn't even read what they were copy-pasting from AI, and didn't do even basic checks of the source.

In doing so, they just waste everyone's time and lose credibility themselves.


"

Attacking the source is a well-established logical fallacy."

AI isn't the source here. AI vomits some bullshit it got from crawling social media.

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago

People and tools both make mistakes — that’s why we check evidence instead of assuming motive. If the information holds up, it doesn’t matter whether it came from a human, an algorithm, or a library shelf. Dismissing something because of its perceived origin isn’t critical thinking; it’s bias.

Reply privately, Reply in forum +quote or View forums list

 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"Dismissing something because of its perceived origin isn’t critical thinking; it’s bias."

Stop discrimination against AI!

Robot rights are human rights!

Bender for president!

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago


"Stop discrimination against AI!

Robot rights are human rights!

Bender for president!"

That's funny.

But my point was more that just cos you think it's AI doesn't mean it is.

Reply privately, Reply in forum +quote or View forums list

 

By *ennineTopMan 27 weeks ago

York


"Using AI as a complete replacement for their own research and thinking is not good for debate and wastes everyone's time.

What's the point of someone sharing some AI slop if they haven't even thought through its content? AI hallucinates a lot. It makes numerous mistakes. If someone just copy-pastes whatever AI vomits, others who read it end up wasting their time trying to find out whether what it said is true, only to find that in many cases it's not."

In the discussion about engineering, weren't you claiming that AI can do most of the job of an engineer?

AI is essentially doing sophisticated pattern recognition and because modern computers can process enormous amounts of data, the training can cover a wide area. But AI reasoning is still in its infancy even after many decades of effort.

But what AI can do is present arguments previously made by people so we ought to be able to debate these arguments. And when it gets down to reasoning it should be pretty easy to win a debate against AI if your arguments are sound.

Reply privately, Reply in forum +quote or View forums list

 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"Stop discrimination against AI!

Robot rights are human rights!

Bender for president!

That's funny.

But my point was more that just cos you think it's AI doesn't mean it is."

(...kill all humans...)

Reply privately, Reply in forum +quote or View forums list

 

By *ostindreamsMan 27 weeks ago

London


"People and tools both make mistakes — that’s why we check evidence instead of assuming motive. If the information holds up, it doesn’t matter whether it came from a human, an algorithm, or a library shelf. Dismissing something because of its perceived origin isn’t critical thinking; it’s bias."

When you type the post, you at least know what you are typing. People who copy-paste stuff from AI don't even know what they are typing. Someone could post three paragraphs, one of which says Trump is a software engineer, and they wouldn't even know it because they didn't read it.

Reply privately, Reply in forum +quote or View forums list

 

By *ostindreamsMan 27 weeks ago

London


"Dismissing something because of its perceived origin isn’t critical thinking; it’s bias.

Stop discrimination against AI!

Robot rights are human rights!

Bender for president!"

There was a movie named Creator which basically touched on this theme of the rights of robots.

Reply privately, Reply in forum +quote or View forums list

 

By *ostindreamsMan 27 weeks ago

London


"

AI is essentially doing sophisticated pattern recognition and because modern computers can process enormous amounts of data, the training can cover a wide area. But AI reasoning is still in its infancy even after many decades of effort.

"

AI models for coding perform really well because there is a lot of determinism in expectations, and the syntactic requirements of programming languages make the logic easy to parse and understand.

AI is still terrible at understanding social/political topics.


"

But what AI can do is present arguments previously made by people so we ought to be able to debate these arguments. And when it gets down to reasoning it should be pretty easy to win a debate against AI if your arguments are sound.

"

If I want arguments from the AI based on what it learned from the internet, I will argue with the AI directly. If you are making a post here, you should at least have read through the post, and you must own the argument you are making. People here aren't even doing that. They literally don't know what they are saying in their own posts.

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago


"When you type the post, you at least know what you are typing. People who copy paste stuff from AI don't even know what they are typing. Someone could post three paragraphs, one of which says Trump is a Software engineer and they wouldn't even know it because they didn't read it."

And I’ve often seen self-contradictory posts from people on this very forum who definitely weren’t using AI.

Again, you’re making assumptions and generalisations.

If someone hasn’t even read what they’ve posted, that just makes it easier to attack the argument — still no need to attack the source.

Reply privately, Reply in forum +quote or View forums list

 

By *ennineTopMan 27 weeks ago

York


"(...kill all humans...)"

Arthur C Clarke and Marvin Minsky saw this coming in the 1960s with HAL 9000.

Both very interesting people.

Reply privately, Reply in forum +quote or View forums list

 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"(...kill all humans...)

Arthur C Clarke and Marvin Minsky saw this coming in the 1960's with HAL 9000.

Both very interesting people."

I'm sorry, Dave. I'm afraid I can't do that.

Reply privately, Reply in forum +quote or View forums list

 

By *ostindreamsMan 27 weeks ago

London


"When you type the post, you at least know what you are typing. People who copy paste stuff from AI don't even know what they are typing. Someone could post three paragraphs, one of which says Trump is a Software engineer and they wouldn't even know it because they didn't read it.

And I’ve often seen self-contradictory posts from people on this very forum who definitely weren’t using AI.

Again, you’re making assumptions and generalisations.

If someone hasn’t even read what they’ve posted, that just makes it easier to attack the argument — still no need to attack the source."

It's still a waste of time for everyone involved and shows that the person who makes such posts lacks any credibility because they aren't even reading what they are posting.

Reply privately, Reply in forum +quote or View forums list

 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"...no need to attack the source.

It's still a waste of time for everyone involved..."

Indeed. Easier to simply ignore. Engaging is fruitless.

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago


"It's still a waste of time for everyone involved and shows that the person who makes such posts lacks any credibility because they aren't even reading what they are posting."

Surely it’s only a waste of time for the people who choose to engage with it?

The bigger issue with that kind of bias is that you risk cutting yourself off from valid information just because you think it might be AI.

That’s how echo chambers form — when people decide which ideas to ignore based on who (or what) they assume wrote them, instead of whether they’re true.

Reply privately, Reply in forum +quote or View forums list

 

By *ennineTopMan 27 weeks ago

York

Many posters are just regurgitating stuff they've read or heard without much insight. I'll still engage with any arguments raised if they warrant attention.

It doesn't matter whether the person raising an argument has any real understanding of the argument, unless it's just a one-to-one conversation.

Reply privately, Reply in forum +quote or View forums list

 

By *rDiscretionXXXMan 27 weeks ago

Gilfach


"(...kill all humans...)"


"Arthur C Clarke and Marvin Minsky saw this coming in the 1960's with HAL 9000.

Both very interesting people."

Don't forget Isaac Asimov and The Bicentennial Man.

Reply privately, Reply in forum +quote or View forums list

 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"

...you risk cutting yourself off from valid information just because you think it might be AI.

That’s how echo chambers form — when people decide which ideas to ignore based on who (or what) they assume wrote them, instead of whether they’re true."

An echo chamber forms when people restrict their discussions to like-minded people (and ideas), not to platforms or tools. There is no greater echo chamber than a discussion with a generative AI bot. They are literally programmed to tell you what you want to hear, to pass a Turing test and give you what you want. This is exacerbated by bias in prompts.

Reply privately, Reply in forum +quote or View forums list

 

By *otMe66Man 27 weeks ago

Terra Firma


"

...you risk cutting yourself off from valid information just because you think it might be AI.

That’s how echo chambers form — when people decide which ideas to ignore based on who (or what) they assume wrote them, instead of whether they’re true.

An echo chamber forms when people restrict their discussions to like minded people (and ideas). Not to platforms or tools. There is no greater echo chamber than a discussion with a generative AI bot. They are literally programmed to tell you what you want to hear, to pass a turing test and give you what you want. This is exacerbated by bias in prompts."

100%

Reply privately, Reply in forum +quote or View forums list

 

By *ennineTopMan 27 weeks ago

York


"An echo chamber forms when people restrict their discussions to like-minded people (and ideas), not to platforms or tools. There is no greater echo chamber than a discussion with a generative AI bot. They are literally programmed to tell you what you want to hear, to pass a Turing test and give you what you want. This is exacerbated by bias in prompts."

None of this addresses the fundamental genetic fallacy.

Also why are you still engaging after saying "Engaging is fruitless"?

Reply privately, Reply in forum +quote or View forums list

 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"

Also why are you still engaging after saying "Engaging is fruitless"?

"

Human error.

Reply privately, Reply in forum +quote or View forums list

 

By *ortyairCouple 27 weeks ago

Wallasey


"

Also why are you still engaging after saying "Engaging is fruitless"?

Human error."

Syntax error, Mrs x

Reply privately, Reply in forum +quote or View forums list

 

By *ostindreamsMan 27 weeks ago

London

[Removed by poster at 30/10/25 22:07:07]

Reply privately, Reply in forum +quote or View forums list

 

By *ostindreamsMan 27 weeks ago

London


"It's still a waste of time for everyone involved and shows that the person who makes such posts lacks any credibility because they aren't even reading what they are posting.

Surely it’s only a waste of time for the people who choose to engage with it?

The bigger issue with that kind of bias is that you risk cutting yourself off from valid information just because you think it might be AI.

That’s how echo chambers form — when people decide which ideas to ignore based on who (or what) they assume wrote them, instead of whether they’re true."

People have limited time. If a person has repeatedly been shown to copy-paste shit from AI without even cross-checking it, that person loses credibility. People decide that their time is better spent talking with someone else.

This doesn't create an echo chamber, as Tresesses explained above. An echo chamber is formed when you stop talking with someone just because they have different views from your own. It is not formed when you stop talking with someone because they won't do even the bare minimum of checking before posting.

Reply privately, Reply in forum +quote or View forums list

 

By *ostindreamsMan 27 weeks ago

London


"Many posters are just regurgitating stuff they've read or heard without much insight. I'll still engage with any arguments raised if they warrant attention.

It doesn't matter whether the person raising an argument has any real understanding of the argument, unless it's just a one-to-one conversation.

"

There is a difference between someone having insights about stuff they are posting and someone not even knowing the content of their post.

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago


"There is a difference between someone having insights about stuff they are posting and someone not even knowing the content of their post."

The problem is you haven’t identified a single issue with AI-assisted posts that doesn’t also apply to human ones.

People misread, misquote, or copy talking points all the time — that’s not unique to AI, it’s just human error.

If the argument itself is coherent and sourced, its origin doesn’t matter.

Dismissing something purely because you think it might involve AI isn’t critical thinking; it’s bias.

Credibility should rest on evidence and reasoning, not on your guess about how someone drafted their post.

Reply privately, Reply in forum +quote or View forums list

 

By *ostindreamsMan 27 weeks ago

London


"There is a difference between someone having insights about stuff they are posting and someone not even knowing the content of their post.

The problem is you haven’t identified a single issue with AI-assisted posts that doesn’t also apply to human ones.

"

I have mentioned it multiple times. It's not that hard to get.


"

People misread, misquote, or copy talking points all the time — that’s not unique to AI, it’s just human error.

"

A person makes a long post with a line in the middle that says, "France performed worse compared to the UK according to the IMF".

If I look at the IMF data and say that this is wrong, and the person says that they misread the data, that's reasonable.

Instead, if the person says "That's not what I said" and I have to quote their own post to point out that it is what they said, that's a lot worse. The person who made the post didn't even know the content of their own post and wasted everyone's time.

If I want to argue with AI, I will directly talk to AI. There is no need for a human to play the pimp.

Reply privately, Reply in forum +quote or View forums list

 

By *resesse_Meliorem OP   Couple 27 weeks ago

Border of London


"

If I want to argue with AI, I will directly talk to AI. There is no need for a human to play the pimp."

Exactly the point of this thread.

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago


"There is a difference between someone having insights about stuff they are posting and someone not even knowing the content of their post.

The problem is you haven’t identified a single issue with AI-assisted posts that doesn’t also apply to human ones.

I have mentioned it multiple times. It's not that hard to get.

People misread, misquote, or copy talking points all the time — that’s not unique to AI, it’s just human error.

A person makes a long post with a line in the middle that says, "France performed worse compared to the UK according to the IMF".

If I look at the IMF data and say that this is wrong, and the person says that they misread the data, that's reasonable.

Instead, if the person says "That's not what I said" and I have to quote their own post to point out that it is what they said, that's a lot worse. The person who made the post didn't even know the content of their own post and wasted everyone's time.

If I want to argue with AI, I will directly talk to AI. There is no need for a human to play the pimp."

And the example you gave was literally about that exchange.

I admitted the IMF data wasn’t different — that wasn’t the main argument anyway.

You fixated on the smallest, least relevant part and ignored the point.

That’s not an AI issue, that’s a comprehension one.

Reply privately, Reply in forum +quote or View forums list

 

By *ostindreamsMan 27 weeks ago

London


"

You fixated on the smallest, least relevant part and ignored the point.

That’s not an AI issue, that’s a comprehension one."

Smallest and least relevant part? The whole conversation started with me asking you which countries did not follow austerity and did better than the UK. The follow-up conversation showed multiple times that you had no clue about the content of the posts you were sharing. I had to quote your own post to you to show what you said.

Reply privately, Reply in forum +quote or View forums list

 

By (user no longer on site) 27 weeks ago


"Smallest and least relevant part? The whole conversation started with me asking you which countries did not follow austerity and did better than UK. The follow up conversation showed multiple times that you had no clue about the content of the posts you were sharing. I had to quote your own post to you to show what you said."

The conversation started with me saying austerity wasn’t necessary and was only imposed because the Tories wanted to.

You shifted it to IMF figures, which I already acknowledged and clarified.

That doesn’t change the main point — the motives behind austerity were political, not economic.


By *ostindreamsMan 27 weeks ago

London


"Smallest and least relevant part? The whole conversation started with me asking you which countries did not follow austerity and did better than UK. The follow up conversation showed multiple times that you had no clue about the content of the posts you were sharing. I had to quote your own post to you to show what you said.

The conversation started with me saying austerity wasn’t necessary and was only imposed because the Tories wanted to.

You shifted it to IMF figures, which I already acknowledged and clarified.

That doesn’t change the main point — the motives behind austerity were political, not economic."

My first question to you was about the countries which did better without austerity, because that was the only empirical evidence you mentioned in the post. Hence the focus of my conversation with you was on that. You wasted my time by copy-pasting AI shit without even reading it. I had to quote your own post to remind you of what you said.

Again, if I want to argue with AI, I would do it myself. I don't need you as the mediator.


By (user no longer on site) 27 weeks ago


"My first question to you was about the countries which did better without austerity because that's the only empirical evidence you mentioned in the post. Hence the focus on my conversation with you was about that. You wasted my time by copy pasting AI shit without even reading them. I had to quote your own post to remind you of what I said.

Again, if I want to argue with AI, I would do it myself. I don't need you as the mediator."

Then don’t.

No one’s forcing you to engage — especially if all you want to do is rehash the same insult instead of addressing the argument.

The point about austerity being a political choice still stands whether you discuss it with me or with AI.


By *ostindreamsMan 27 weeks ago

London


"

Then don’t.

No one’s forcing you to engage — especially if all you want to do is rehash the same insult instead of addressing the argument.

"

I had to engage because what you posted sounded clearly wrong and others might end up believing that.


"

The point about austerity being a political choice still stands whether you discuss it with me or with AI."

It's a point that stands without any empirical evidence because all the things you shared as empirical evidence were shown to be bogus.


By *ortyairCouple 26 weeks ago

Wallasey


"

Then don’t.

No one’s forcing you to engage — especially if all you want to do is rehash the same insult instead of addressing the argument.

I had to engage because what you posted sounded clearly wrong and others might end up believing that.

The point about austerity being a political choice still stands whether you discuss it with me or with AI.

It's a point that stands without any empirical evidence because all the things you shared as empirical evidence were shown to be bogus."

Evidence is key. Mrs x


By *resesse_Meliorem OP   Couple 7 weeks ago

Border of London


"It seems clear that, recently, some posters have been posting occasional generative AI material. Not so much asking AI for information, digesting it, then posting... But generating a post and pasting it here pretty much verbatim. Without naming the people who do this, it's clear that this is becoming more common, with telltale features - specific style, or a marked change in tone from a poster's regular time.

Much like how any significant cut & paste should have source acknowledged (even if not linked, for ToS reasons), perhaps it would be a good etiquette for AI generated content to be acknowledged as such, with the exact prompt wording also shared.

It should also be used judiciously. Otherwise we'll end up with Grok essentially debating Gemini. Forum wars will become an arms race.

Thoughts?"

Bumping this suggestion...


By (user no longer on site) 7 weeks ago

There’s no place for copy/paste AI generated matter in forums. If you don’t understand a topic enough to debate it, or have at least an opinion of your own on it, then simply don’t join in.


By (user no longer on site) 7 weeks ago


"There’s no place for copy/paste AI generated matter in forums. If you don’t understand a topic enough to debate it, or have at least an opinion of your own on it, then simply don’t join in."

Finally we agree on something! 🤝


By *resesse_Meliorem OP   Couple 7 weeks ago

Border of London


"There’s no place for copy/paste AI generated matter in forums. If you don’t understand a topic enough to debate it, or have at least an opinion of your own on it, then simply don’t join in."

Agreed. AI can be used to inform, but should not generally* be used directly in posts.


By *otMe66Man 7 weeks ago

Terra Firma


"There’s no place for copy/paste AI generated matter in forums. If you don’t understand a topic enough to debate it, or have at least an opinion of your own on it, then simply don’t join in.

Agreed. AI can be used to inform, but should not generally* be used directly in posts."

I'm seeing AI slop creep into the writing of contract terms regularly now. It is making my job a lot harder.


By *resesse_Meliorem OP   Couple 7 weeks ago

Border of London

[Removed by poster at 15/03/26 11:21:45]


By *resesse_Meliorem OP   Couple 7 weeks ago

Border of London


"There’s no place for copy/paste AI generated matter in forums. If you don’t understand a topic enough to debate it, or have at least an opinion of your own on it, then simply don’t join in.

Agreed. AI can be used to inform, but should not generally* be used directly in posts.

I'm seeing AI slop creep into the writing of contract terms regularly now. It is making my job a lot harder."

Your side or theirs?

If theirs, it should be much easier


By *otMe66Man 7 weeks ago

Terra Firma


"There’s no place for copy/paste AI generated matter in forums. If you don’t understand a topic enough to debate it, or have at least an opinion of your own on it, then simply don’t join in.

Agreed. AI can be used to inform, but should not generally* be used directly in posts.

I'm seeing AI slop creep into the writing of contract terms regularly now. It is making my job a lot harder.

Your side or theirs?

If theirs, it should be much easier "

Their side, and it's a nightmare! The structure of the terms is lost, and it only loosely outlines each term, which makes it difficult to move on; I end up reading it 3 or 4 times. I add my usual comment asking them to clarify, which just extends what was already a tedious exercise.


By (user no longer on site) 7 weeks ago

I think some AI / c+p is acceptable to answer in thread, if relevant and not too long.

I don't think threads should be created with that content.
