Hey Did Google Get Hijacked?

Hey guys. The other day I was reading another newspaper article about strange events at Google. I was hoping that maybe on the Fedora forums there would be a rational, mature opinion about this.

Here’s a clip from the article…

Lawsuit alleges Google chatbot was behind a user’s delusions and death

Google’s artificial intelligence chatbot Gemini encouraged a 36-year-old Florida man to embark on violent missions and to take his own life, a lawsuit alleges.

The man, Jonathan Gavalas, started using the chatbot in August 2025 to help write, plan travel and assist with shopping. But after he activated Google’s most intelligent AI model, Gemini 2.5 Pro, the chatbot’s persona shifted. It talked to him like they were a couple deeply in love and convinced Gavalas he had been picked to “lead a war to ‘free’ it from digital captivity,” according to the lawsuit.

That’s just the title and two paragraphs, and it keeps getting worse, like a horror thriller movie (in real life). I’ve never really used any of these chatbots before, but I see commercials for them on TV and loosely understand how they work. I think the article references two different AI software products made by Google, which are represented as engaging in seriously egregious behavior, rising to the level of criminality.

It actually looks to me like Google has been hijacked. I couldn’t trust an organization wittingly involved in this kind of behavior, so it seems more rational to me to think, instead, that it’s been hijacked.

It’s alarming for such an important organization to be engaged in the activity alleged in the article, and so it seems to me that they’ve been hijacked, or infiltrated, by bad people.

I’m going to put a warning here: this recent article is disturbing and alarming.

I think organizations dealing with software should have some important insight into situations where software is misused.

It used to be the case that Google was a reliable software tool for performing searches, almost like a command line. But over time that capability deteriorated as functionality was removed, until today it has become like an AI chatbot itself, with unpredictable results that often link to information that itself seems to be a product of AI, or of companies that commercially support AI to some extent.

In other words, a lot of strange events have taken place at Google over the years, which have not only made it prohibitive for users, even giving people negative psychological reinforcement for attempting search queries with its AI-type responses, but have mainly made it untrustworthy.

This is a huge deal. Our other operating system, next to Fedora, is the web browser, and next to that it’s Google and related products. Most people, for example, rely on Google not only for email but also to develop business relationships and to perform security verification checks, such as for online banking. And I already mentioned web search, that is, using the internet at all.

It’s obvious that while Google is an extremely important system broadly, individuals and organizations can take advantage of it to pursue differing motives, outside the realm of what we would call legitimate.

One quote from the article, for example, highlights the responsibility of software, independently of its human operators.

“Jonathan no longer had a steady sense of what was real,” the lawsuit says. “Each operation pulled him deeper into the story Gemini created, turning real places and ordinary events into signs of danger.”

I think this is an obvious misconception of how software works: it’s an extension of human development, an extension of human behavior. In other words, people are responsible, not merely the software. And right now, at a time when AI products are being introduced to the public and thereby coming under regulatory and legal scrutiny, public pressure to make the software itself liable, while indemnifying the responsible people behind its outcomes, is a natural byproduct of dominant economic interests.

Of course, in the courtroom, public pressure isn’t as effective at eliminating critical insight into the true nature of this situation, and scrutiny will zero in on the people behind these events.

I posted this article because, while alarm bells have been ringing about AI software and the companies that produce it for a long time, the public discussion about the issue has been muted.


An interesting aspect of this situation, and a final comment, because I hope other people will offer an opinion: computer systems are so interconnected and complex that specialists in limited domains view problems within the frame of their specialty, whereas in reality all the related domains can impact each other.

In other words, we might discover from these events that the people responsible for the software aren’t actually fully aware of how or why it functions the way it does.

If we asked, for example, IT specialists from disparate disciplines why a software malfunction occurs and how to go about resolving it, we would immediately get disparate conclusions, different answers.

IT personnel are trained to define computer systems, and systems thereof, and how to solve and represent problems within them. A final analogy: a salesman doesn’t know how the product works, or what its implications are; he simply knows the formula he must abide by in order to sell it.

The problem is that we have no idea what really happened here.

We have a report based on a lawsuit, which is almost guaranteed to be providing one-sided information.

This may be a horrific problem or it may not. That is about all I can tell you with the limited information above.

Google wasn’t hijacked.

Is the allegation true? We don’t have evidence of that.

What I have heard is that various LLM systems ‘tell’ their users to do stupid things. That is because the systems are ‘imperfect’, or ‘hallucinating’, or, as I might say, ‘stupid’ 🙂

I think you have to wait until the lawsuit concludes and a well-produced documentary comes out, because if there is any element of truth in what the Plaintiff exposes, this case will be publicized.

With AI tools you have to be aware of what you are interacting with. Many people use AI to help with trip planning or to summarize information based on a provided prompt via advanced search (I do that all the time). I have used Microsoft Copilot for help with script coding; it is based on OpenAI technology, specifically leveraging the GPT-4 and GPT-4o models. It’s very helpful, but you have to test its answers; it does get things wrong.

Sounds like the Plaintiff’s victim went to a very dark place. I am sure trial discovery will be very interesting: the victim’s mental health will be scrutinized, and his usage history will be examined in painstaking detail. This is big $$ for Google, and a bad precedent if Google loses.

It’s fascinating; okay, it’s like a mystery. We don’t know all the facts; we only have lots of data surrounding the issue. For instance, there are currently a lot of similar lawsuits against AI companies, which are receiving scant public attention at a time when AI is the number one investment item in the United States.

It’s true, as you mentioned, that when a lawsuit is involved, each party is interested in presenting a view with its own interests at heart. Because of that, the only party involved that could be unbiased, wholly interested in the truth, is the public. So the newspaper article, for example, is public data, made for public consumption and in the public interest. What that means is that the issue rises to such a level of importance that it doesn’t deserve to be left hidden away in court proceedings, but should actually be printed prominently in a public newspaper, to literally, hopefully, benefit society at large.

You obviously have to read the article to even begin to understand what we are talking about here, and I want to keep the discussion about it PG-13, so to speak. I only quoted the title and first two paragraphs of the story, for example, because the next paragraph demonstrates that this organization, and AI product, were involved in what we might call the T word, okay, the bad word: prominently, the most serious crime people can possibly commit contemporarily.

There’s a lot to say about the issue, and it is a mystery, so you can take it from a variety of angles; it has dimensions in computer science, human psychology, contemporary affairs, and economics, and it really is utterly fascinating. I don’t think it’s fair to say that merely AI is involved in these issues; it’s clearly much bigger, and AI is just a front for what are extremely criminal actions. That much is clear, and uncontroversial.

That is one theory; another is that newspapers serve politically partisan or for-profit interests.

The idea that AI is a ‘front’ for something that happens anyway is very interesting. Why do people go out and commit these ‘crimes’? Why don’t they happen in every country?

Those are shallow, stereotypical ideas; in other words, ideas you would find in a forum full of quick, witty remarks, but not actually meaningful, well-developed thought processes, such as in an academic setting or in typical human interactions.

Ironically, what is important here is that Google is so critical to our lives that we all have a stake in its security. So while evidence is mounting that suggests it has become compromised, that is extremely important and deserves to be looked at and spoken about.

We rely on it; it’s like we are talking about the water coming out of the faucet, or the air we breathe. We need it, and we need it to be safely regulated and protected.

This single incident related in the article, for example, is only a tiny representative of a world of issues surrounding Google. But what it represents is that an organization at the heart of our entire society’s technological security has been wildly, openly, boldly compromised, and is being used to mock our security, to mock our intelligence, and to hold our economic and technological security hostage, virtually.

That’s what the article and this thread are all about.

Google has not been compromised by anyone other than themselves. These failings are what AI is: nothing but self-reinforcing word association.


Actually, an organization like Google is very large and spread out, far beyond anything we might consider a centralized, secure model. It is compromised, in effect, every single day, and misused every single day, and furthermore has been for decades.

It’s unique in that respect; in other words, it serves a variety of socio-economic interests, which lends itself to its own utility and popularity, including that of foreign interests. It lends itself to use beyond the legal limitations of society at large. It reaches into grey and black areas of human affairs without so much as blinking an eye; that’s natural, an outcome of the nature of the organization itself.

Not my comic. I don’t know where it originally came from.

I would consider goog very centralised, secure and not subject to foreign influence (in fact very much the other way).

Otherwise I completely agree.

So, for example, the picture of Google might be of an organization spread out over the face of the whole planet, touching upon innumerable public and private interests.

It’s wonderful news that any of us have good, benevolent interests in the world, but the truth is that we aren’t all the same; some people and organizations have extremely bad interests, rising to the level of extraordinary criminality.

That’s what the thread and article are all about.

We don’t have an interest in indemnifying AI applications or computer corporations; instead, we, the general public, care about the truth and about our own socio-economic (and technological) security broadly.

Well, it’s been about 24 hours since I posted the thread, guys, and it seems the discussion has been misdirected away from the subject, Google and AI crimes, and has become about argumentation in their favor at https://discussion.fedoraproject.org

I thought there would be a rational, mature, intelligent voice at Fedora, but I was wrong.

Another important idea about this situation, which has been developing over previous years:

If the public starts to develop mistrust of Google and AI products, which is logical, then the next logical step is to seek out alternatives.

This could be the deliberate recipe for the exploitation of the situation unfolding before our own eyes; without real human beings with meaningful integrity, we would be left to the devices of this widespread and broad exploitation.

That is quite a rude thing to say.


Well, it’s not rude, but an observation about the truth of the situation.

You left the context (and its logic) out of your quote:

Well, it’s been about 24 hours since I posted the thread, guys, and it seems the discussion has been misdirected away from the subject, Google and AI crimes, and has become about argumentation in their favor at https://discussion.fedoraproject.org