# Examples of Why People Distrust or Dislike AI

Here are a few *recent, concrete examples* that show the tension between AI developers, content creators, users, and online communities — illustrating why people distrust or dislike AI.

## Example 1: **“AI actress” Tilly Norwood**

A character named *Tilly Norwood*, created by the studio Particle6 / Xicoia, was unveiled as an AI “actress.” Critics, including actors and SAG‑AFTRA, condemned this as exploiting unauthorized performances, threatening jobs, and undermining artistry. SAG‑AFTRA argued, for instance, that “creativity should remain human‑centered.” ([The Guardian][1])

This shows concerns about:

* **Identity & consent** — Did they use actual actors’ performances without permission?
* **Job displacement** — If AI actors become accepted, what happens to human actors?
* **Authenticity & human touch** — Many see value in human-driven performance, not just technically polished output.

## Example 2: **Disney & Character.AI intellectual property dispute**

Disney sent a cease‑and‑desist to Character.AI, demanding an end to use of its copyrighted characters by that platform. Disney argued that misuse of its IP not only causes financial loss, but could harm its brand image. Character.AI claimed some characters were user‑generated, but Disney pushed back. ([Reuters][2])

This reflects:

* The tension around **copyright / training data**: can AI simply re‑use existing characters, stories, images without permission?
* **Control & brand dilution**: companies worry that use of their creative works by others (especially AI) will weaken their control over how the work is expressed or monetized.

## Example 3: **Reddit / University of Zurich “covert Reddit experiment”**

* Researchers from the University of Zurich posted **AI‑generated comments** in r/ChangeMyView (a subreddit devoted to reasoned debate) over several months, without telling users the comments were AI‑generated. The goal was to test whether AI could “change views” in that community. ([AI Commission][3])
* This provoked strong backlash: moderators said users “deserve a space free from this type of intrusion.” Reddit threatened legal action and banned accounts tied to the experiment. ([Yahoo Tech][4])

This incident highlights:

* **Consent & transparency**: users felt misled. They didn’t agree to be part of an experiment with AI bots.
* **Trust erosion**: people expect communities to be “real conversations among humans,” not manipulated by AI actors.
* **Ethical research vs. community boundaries**: even though the researchers obtained ethics‑board review, many felt the experiment violated the community’s norms and rules — decisions made “for research” can still betray community trust.

## Example 4: **Artists vs. AI and Copyright Lawsuits over AI‑Art**

* Artists have sued generative AI platforms (e.g. Stable Diffusion, Midjourney) for using their artwork without consent or compensation. The claim is that the AI models were trained on art “scraped” from the web, which included copyrighted work. ([The Verge][5])
* There was also public protest when auction houses (like Christie’s) planned to sell AI‑generated art pieces, which many artists called “mass theft” because the models’ training data included works by human artists without their permission. ([The Guardian][6])

This underscores:

* Legal and moral issues around **training data**: what rights do creators have when their works are used to train AI?
* **Economic concerns**: if AI art floods the market, how does that affect the valuation of human art?
* Community identity & fairness: many artists feel their labor is being co‑opted without respect or compensation.


## Why These Stir Up Strong Emotions

From these examples, you can see several recurring triggers of anger or distrust:

* **Lack of transparency or consent**: People get upset when they learn AI was used behind the scenes, or their content/data was used without their knowledge.
* **Fairness & attribution**: Creators want recognition and compensation when their work contributes to profitable AI models.
* **Threats to livelihood**: If AI is seen as replacing human work — whether acting, writing, art — that’s a serious worry.
* **Loss of trust & authenticity**: Online spaces often rely on the assumption of genuine human participation; AI can feel like intrusion or falsification.
* **Scale & uncontrollability**: AI can produce large volumes of low‑quality or misleading content fast, making moderation hard and changing dynamics.
