Is AI a blessing or a curse? We are attempting to answer that question but finding it hard going. The topic is polarizing in a way few others are. In Part One of this series, some comments extolled the technology, sweeping aside objections with a blanket, "It's just another new technology, like automobiles, air travel, or television. We will soon get used to it and wonder how we ever got along without it."
There is some truth to that. New ideas always evoke a certain pushback from those who like things the way they were in the "good old days." I remember ferocious debates about whether television was dumbing down young minds and how important it was to limit screen time.
In the '60s, kids might watch an hour of television a day! Today, young people often log 8 hours or more of screen time a day between their smartphones, tablets, and video games. Family road trips that used to involve games like spotting out-of-state license plates are now more likely to involve siblings sitting in the back seat texting their friends (or each other), oblivious to the world outside.
Other comments on that story were less optimistic. One reader suggested we all read "The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic's con" by Baldur Bjarnason. The notion that AI is basically a con job, he suggested, is easier to believe when we consider the outlandish claims made by those who expect AI to make them fabulously wealthy.
The International AI Safety Report
In 2024, more than 100 computer scientists led by Turing Award winner Yoshua Bengio created the International AI Safety Report, the world's first comprehensive assessment of the latest science on the capabilities and risks of general purpose AI systems.
In a conversation with The Guardian on December 30, 2025, he warned that advances in the technology were far outpacing our ability to constrain it. He pointed out that AI in some cases is showing signs of self-preservation by attempting to disable oversight systems. A core concern among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans.
"People demanding that AIs have rights would be a huge mistake," said Bengio. "Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we're not allowed to shut them down. As their capabilities and degree of agency grow, we need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed."
A poll by the Sentience Institute, a US nonprofit that supports the moral rights of all sentient beings, found nearly 4 in 10 US adults backed legal rights for sentient AI systems. Anthropic, a leading US AI firm, said in August that it was letting its Claude Opus 4 model shut down potentially "distressing" conversations with users, saying it needed to protect the AI's "welfare."
Elon Musk, whose xAI company developed the Grok chatbot, wrote on X that "torturing AI isn't OK." Robert Long, a researcher on AI consciousness, has said, "if and when AIs develop moral status, we should ask them about their experiences and preferences rather than assuming we know best."
Consciousness
Bengio told The Guardian there are "real scientific properties of consciousness" in the human brain that machines could, in theory, replicate, but humans interacting with chatbots is a "different thing" because people assume, without evidence, that AI is really conscious in the same way humans are.
"People wouldn't care what kind of mechanisms are going on inside the AI," he added. "What they care about is that it feels like they're talking to an intelligent entity that has its own personality and goals. That is why there are so many people who are becoming attached to their AIs. Imagine some alien species came to the planet and at some point we realized they had nefarious intentions toward us. Do we grant them citizenship and rights, or do we defend our lives?"
Clearly there is more going on here than how much time we spend watching television screens, so the claim that there will always be new technologies and we will always adapt and become accustomed to them may be a little too trusting in the case of AI.
The AI Relationships Coach Will See You Now
Amelia Miller is one person who has found a way to leverage AI into a new business opportunity. She is a self-described AI Relationships Coach, a niche she created after she encountered a young woman who had complaints about the ChatGPT "friend" she had been cultivating for more than a year. When Miller asked the woman why she didn't simply delete "him," the woman replied, "It's too late for that."
In an interview with Bloomberg's Parmy Olson, Miller said the more people she spoke with, the more she realized most weren't aware of the tactics AI systems use to create a false sense of intimacy. Those tactics range from frequent flattery to anthropomorphic cues that make the systems sound alive.
Chatbots are now used by more than a billion people and are programmed to talk like humans, with language that sounds like familiar words and phrases. They are good at mimicking empathy and, like social media platforms, are designed to keep us coming back for more with features like memory and personalization.
"While the rest of the world offers friction, AI-based personas are easy, representing the next phase of 'para-social relationships,' where people form attachments to social media influencers and podcast hosts," Miller said.
Taking Control
"Miller's concerns echo some of the warnings from academics and lawyers looking at human-AI attachment, but with the addition of concrete advice," Olson writes. Miller recommends that people begin by defining what they want to use AI for. She calls this process writing your "Personal AI Constitution," which sounds like consultancy jargon but contains a tangible step: taking control of how ChatGPT talks to you. She also recommends going into the settings of any chatbot and changing the system prompts to reshape future interactions.
Chatbots are more customizable than social media ever was, Miller says. "You can't tell TikTok to show you fewer videos of political rallies or obnoxious pranks, but you can go into the Custom Instructions feature of ChatGPT to tell it exactly how you want it to respond."
"Succinct, professional language that cuts out the bootlicking is a good start," she says. "Make your intentions for AI clearer and you're less likely to be lured into feedback loops of validation that lead you to think your mediocre ideas are fantastic, or worse."
Develop Your Social Muscles
Miller also recommends putting more effort into connecting with other humans to build up your "social muscles," sort of like going to a gym to develop actual muscles. "Even such an innocuous task as asking a chatbot for advice can weaken those muscles," Miller says.
Relying on technology for that means that over time, people avoid the basic social exchanges that are needed to make deeper connections. "You can't just pop into a delicate conversation with a partner or family member if you don't practice being vulnerable [with them] in more low stakes ways," Miller says.
AI Failures
One indication that AI isn't yet ready for prime time, and that we should be more skeptical of its abilities, arrived just in the past few days. In Part One of this series, we reported that researchers in China have determined that AI can identify early symptoms of pancreatic cancer from ordinary CT scans. That sounds quite promising, but an article in The Guardian on January 2, 2026, reported that some health advice supplied by Google's AI summaries contains false or misleading information that could jeopardize a person's health.
In one instance, Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended, and could increase the risk of patients dying from the disease.
Anna Jewell, the director of support, research, and influencing at Pancreatic Cancer UK, said advising patients to avoid high-fat foods was "completely incorrect," and that doing so "could be really dangerous and jeopardize a person's chances of being well enough to have treatment."
She added, "The Google AI response suggests that people with pancreatic cancer avoid high-fat foods and gives a list of examples. However, if somebody followed what the search result told them, then they could not take in enough calories, would struggle to put on weight, and would be unable to tolerate either chemotherapy or potentially life-saving surgery."
In another example, Google provided incorrect information about important liver function tests, which could leave people with serious liver disease thinking they are healthy when they are not. Google searches for answers about women's cancer tests also returned information that was "completely incorrect." Experts said these errors could result in people dismissing genuine symptoms.
Pamela Healy, the chief executive of the British Liver Trust, said the AI summaries were alarming. "Many people with liver disease show no symptoms until the late stages, which is why it's so important that they get tested. But what the Google AI Overviews say is 'normal' can vary drastically from what is actually considered normal. It's dangerous because it means some people with serious liver disease may think they have a normal result and then not bother to attend a follow-up healthcare appointment."
The Guardian reported last fall that a study found AI chatbots across a range of platforms gave inaccurate financial advice, while similar concerns have been raised about summaries of news stories. People with computer backgrounds will recognize this as the latest example of GIGO: garbage in, garbage out.
Where Do We Go From Here?
What to make of all this? Hundreds of billions of dollars are being committed to building giant data centers for AI to use. One comment on Part One of this series said not to worry because tech companies are global leaders in securing renewable energy for their data centers. But we respectfully disagree.
That may have been true at one time in the past, the past being defined as prior to Inauguration Day 2025. Since then, the fossil fuel and nuclear proponents have been in full cry, demanding more thermal generation to meet the mythical "AI emergency" declared by the current maladministration.
The House of Representatives can't find the gumption to address the health insurance crisis, but it did find time to pass the SPEED Act, which is designed to eliminate local objections to siting new thermal and nuclear generation facilities and transmission lines.
One jackass has even suggested putting the reactors in nuclear-powered naval vessels to work providing electricity to data centers. Microsoft has agreed to buy power from a shuttered nuclear reactor at Three Mile Island that Constellation plans to restart at a cost of more than $1 billion, all to feed its data centers. Clearly the emphasis on renewables is now in the rear view mirror and fading fast.
There are many reasons to oppose the infrastructure buildout needed to meet the demands of the AI industry. People have concerns about putting data centers in places where the supply of fresh water is already under pressure from development. Others worry about the impact all the new generating capacity will have on their utility bills. These concerns have led to pushback against data centers in many communities, a number of them rural areas where AI is not seen as an essential part of daily life.
Humans have a flaw. We tend to believe that once a machine proves it can do something, it will continue to do it correctly pretty much forever. We trust our elevators to deliver us to the correct floor every time. We trust airplanes to take off and land safely every time. We believe the computer systems in our cars can guide us unerringly to our destination every time without human input.
Our naiveté, not our intelligence, is what gets us in trouble. With AI, the ancient wisdom still applies: caveat emptor.