OpenAI engine usage to aid the community?

This is just an idea that I think is worth discussing.

What would happen if Ask Fedora received OpenAI assistance? What if we used this or a similar bot to search for answers, bugs, fixes, and more? What if we had a bot that could generate answers and teach the tasks we do in the community? Would it be reasonable to implement features where OpenAI, Sage, Claude+, or a similar bot takes our knowledge and aids us?

TIA

Integrating an OpenAI-powered bot into Ask Fedora could have several benefits for the community. Such a bot could assist with searching for answers, identifying and fixing bugs, and providing guidance on various tasks related to the Fedora operating system.

One potential benefit of using an AI-powered bot is that it could help reduce the workload on human moderators and volunteers. By automating certain tasks, the community could free up more time and resources to focus on other areas of development and support. That said, Fedora is already using bots and automating certain tasks here.

Another potential benefit is that the bot could provide more accurate and consistent responses to common questions and issues. With its ability to process large amounts of data quickly, an AI-powered bot could provide more comprehensive and up-to-date information than a human moderator could. Based on that idea, it would be nice to answer certain question types, such as required or missing packages, or how-tos, for example installing a package or performing an update or upgrade.

However, there are also potential drawbacks and limitations to consider. For example, AI-powered bots may not always be able to provide the same level of personalized support and empathy that human moderators can offer. Additionally, there is always the risk of AI bias or errors, which could lead to incorrect information or solutions being provided.

Overall, the implementation of an OpenAI-powered bot in Ask Fedora could be a promising idea worth exploring further. However, it’s important to carefully consider the potential benefits and drawbacks and to ensure that the bot is used appropriately to supplement, rather than replace, human support and moderation.

I didn’t talk about privacy, but I believe others can explain that subject better than I can.

Did you write this with ChatGPT? :classic_smiley:

Don’t reveal the magic :slight_smile:

PS : Yes, I used a bit :slight_smile:

Besides Thunderbirdtr’s cheating, what do you think?

The problem is that AI in its current state cannot provide verified solutions.
It can speak confidently while using outdated and irrelevant data, and cannot check its own words in practice, which can do more harm than good.
To improve the quality, it needs to be properly trained with a decent machine knowledge base and integrated with some automated testing framework.

You are possibly right, and we are already using bots and services; we just have to find our own way of connecting the dots and making it easier for people to work and to search within the community resources.

Honestly speaking, the approach of “I have an AI, let’s find a use for it” is broken by design.

Start with the problem, then look for a solution. If AI is a good solution for a problem, then it is an aid for the community. If not, then it is not.

If the problem you are trying to solve is “research of the AI itself”, then it is not the community which benefits from it.

1 Like

No. I don’t want to go in that direction. I just see the potential for how it could improve the way we provide information to our users and community, and how we connect them. Search and find have always been important in any community. Human power and coding capabilities are limited, and recruiting contributors will never be enough. Is it possible to have something automated that you can reach from the desktop and that might end up giving you an answer? Is it possible to bring the community closer together with a tool that points you to a possible package, a piece of work to do, or a go-to person?

Search and find have always been important in any community.

But search and find is not equivalent to AI. It arguably is the opposite of what AI is doing, as AI obfuscates the sources of the data and adds fuzziness to it, which makes real searching and finding (which includes tracking the data to its sources) much harder.

I am not saying there couldn’t be a use for AI. For example, AI does help in things like translation or captioning.

I am saying we should start with defining the problems we would like to solve, not the tools we would like to use.

2 Likes

My view is that if someone wants to get tech support from AI, they can ask ChatGPT directly. When you want an answer from a person, you come ask in the forum. I would be frustrated if I asked a technical question hoping for a response from a person and what I got was a response from an AI. I wouldn’t want to blur those lines.

Not to mention that because I know AI gets things wrong a decent amount of the time, I would never try whatever an AI spat out at me to fix my problem if there was a risk of messing up my system. I’m not a technical person, so I am trusting the community and the expertise of individuals to do things I wouldn’t normally feel comfortable doing myself. Someone who doesn’t realize that AI can be wrong might assume the bot is right based on the reputation of the forum - a reputation fostered by human beings who know what they’re talking about.

Full disclosure: I don’t like AI stuff like this, so I am biased, lol

2 Likes

I think @thunderbirdtr’s post really illustrates the point. It’s a lot of text, but doesn’t actually say much. It takes much more time to read it and try to discern if there is any value than it does to generate it. I think this asymmetry is very concerning.

I’d much rather a short but meaningful post with Onuralp’s own real opinion.

1 Like

It’s important to remember what large-language model text generators like ChatGPT do. They:

  • have a model based on scraping large amounts of the internet
  • use that model to predict likely next words, just like predictive typing on your phone
  • can “interpolate” to find next-likely words for situations they haven’t exactly recorded (just like filling in the color orange given R_YGBP)
  • slightly randomize the pick to make the output seem less robotic (a sketch of this loop follows below).
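To make that concrete, here is a minimal sketch of that “predict the next word, then slightly randomize the pick” loop in plain Python. The word list and scores are invented for illustration; real models work over vocabularies of tens of thousands of tokens with learned weights, not a hand-written dictionary.

```python
import math
import random

# Toy next-word scores a model might assign after "Fedora is".
# These numbers are made up purely for illustration.
next_word_scores = {"a": 5.0, "great": 3.5, "free": 3.0, "orange": 0.1}

def sample_next_word(scores, temperature=0.8):
    """Softmax over the scores, then draw one word at random.

    Lower temperature makes the pick more deterministic;
    higher temperature makes the output look more random.
    """
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores.keys()), weights=weights, k=1)[0]

print(sample_next_word(next_word_scores))  # usually "a", occasionally others
```

Run in a loop, appending each picked word and re-scoring, this is essentially the whole text-generation mechanism: no lookup of sources, no verification, just likelihoods.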

Apart from the “generated answer” part, I agree with you that LLMs are very powerful and versatile tools, but they also require careful evaluation and adaptation for specific use cases. OpenAI’s solutions are impressive, but they are not the only ones available. There are also open-source solutions (GPT-style models) that have different strengths and weaknesses. Depending on what we want to achieve with LLMs, we need to choose the best option for our needs.

One possible use case is to create a Q&A system that can answer questions based on the discussions in this forum. This would require us to collect and process the data from the forum posts and comments, and train an LLM to generate relevant and accurate answers. This would be a more reasonable and specialized use case than trying to scrape data from the whole internet and generate answers for any possible question. Of course, not all answers would be correct, so the data plays the “key role” in such cases.
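As a rough illustration of that forum-based Q&A idea, a sensible first step is plain retrieval over exported posts rather than free-form generation. The sketch below assumes scikit-learn and a hypothetical handful of posts; it only finds the most similar existing post, which is the safer half of such a system (an LLM could then summarize the match rather than invent an answer).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical forum posts; a real system would export them from Discourse.
posts = [
    "How do I install a package on Fedora? Use dnf install <name>.",
    "Upgrading between releases is done with dnf system-upgrade.",
    "My Wi-Fi stopped working after a kernel update.",
]

vectorizer = TfidfVectorizer()
post_vectors = vectorizer.fit_transform(posts)

def most_similar_post(question):
    """Return the existing forum post most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, post_vectors)[0]
    return posts[scores.argmax()]

print(most_similar_post("how to upgrade fedora"))
```

Grounding the bot in our own curated data like this is exactly why the data plays the “key role”: the system can only be as correct as the posts it retrieves from.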

PS: I did not read the other comments when I answered this, so these were purely my own thoughts from a slightly more technical perspective.

1 Like

Something like that could be exploited for the ask.fedora category: not to create new answers or to solve issues, but to keep users from duplicating work on topics that already exist and have already been solved. As proposed, a bot could post suggestions to a given topic based on existing topics that seem to describe the same problem and that have been solved. At the moment, such tasks (linking related topics) are done manually by supporters.

Additionally, if some problem comes up for many users, this could also indicate bugs in Fedora or the kernel (we had this before, e.g., the asus_ec_sensors bug). With this in mind, the bot could also contribute to identifying bugs.

The bot could be limited to posts and topics that are linked to the currently supported releases of Fedora. Or if applicable, it could post related “common issues”, which are not always checked by users before opening topics.

All that could indeed free up time for the supporters of ask.fp. Too many topics for too few supporters is indeed an existing (and old) problem. However, there is a long road ahead, and much contribution is necessary to achieve such an outcome.

Isn’t this the functionality already available in the Discourse engine now?

It shows you topic suggestions when you create a new one. It is not generating anything; it does a fuzzy search and points to existing things that look similar to what you wrote.
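For what it’s worth, this kind of fuzzy matching does not even need AI; the Python standard library can approximate it. A minimal sketch, with hypothetical topic titles:

```python
import difflib

# Hypothetical existing topic titles.
existing_titles = [
    "Wi-Fi stops working after kernel update",
    "How to upgrade to the next Fedora release",
    "GNOME freezes on login",
]

# Return the closest existing titles by sequence similarity.
print(difflib.get_close_matches(
    "wifi not working after update", existing_titles, n=3, cutoff=0.3))
```

Discourse’s own suggestion feature is of course more sophisticated than this, but the principle is the same: point to existing things rather than generate new text.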


I think there is this big difference between working on better search and discovery methods vs wrapping results of the automated work in a form so that they look as if they are produced by a human.

The previous set of tools which we had for a number of years tried to do the first. The current hype (OpenAI, ChatGPT) is unnecessarily fixated on the second.

Better search is a good thing, but how exactly is it related to AI? Especially the text-generating AI? Why would one need search results in the form of long “human-like” text instead of the list of bullet points with links that every current search mechanism already provides?

Something that aims to go in such a direction, yes. But the function is superficial and does not really evaluate, analyze, or compare contents, solutions, similarities, and other types of properties/patterns. So it’s far away from what would be theoretically possible.

Agreed on the latter. And I do not see any need for my proposal to pretend to be human, or to be human-like. But with some relation to Matt’s elaboration above (this one), I would not link AI only to code that aims to be “human-like”: there is more contained in and related to it that can be exploited in some ways, such as to identify, consolidate, merge, and output information that is required by humans.

The “human-like” stuff is the current hype, as you said, and although I have to admit that I am not deeply engaged in this topic and do not know the current academic definitions, for me being human-like is neither the sole purpose of AI nor a necessary property. Admittedly, it is debatable whether my proposal counts as AI at all. But that would not change the fact that something like my proposal could free up supporters’ time and help identify bugs at ask.fp, and that the current Discourse function is far from exploiting what is technically possible :wink: Of course, you could argue that I am just talking about a traditional bot with more sophisticated search, analysis, and output methods than the usual search engines.

I didn’t argue for that :slight_smile:

OK, I get it now. And I think we agree.

There are areas where AI as a technology can help. And it can be interesting to research it further. For example for classification and annotation of existing knowledge.

I didn’t really mean to set the dividing line at traditional vs. new. Rather, at solving a problem vs. playing with the human-like interface.

And in general playing with things is ok too, as long as it is not harmful to others. The issue is that posting auto-generated texts to the channels dedicated for human communication is explicitly harmful.

It doesn’t mean that we can not build other tools and use AI technology in places where it helps. But I doubt that OpenAI/ChatGPT provides a good example here. I would say we need to go one or two steps deeper into what the AI machinery actually does internally, and start from there.

Absolutely agreed! It should always be explicitly avoided that a bot/AI is taken for a human. A bot has to be clearly “marked” as such. We already have one at ask.fp: Profile - issuebot - Fedora Discussion → when it is active, it is always clear and obvious that it’s a bot and not a human, and of course we have to ensure this with any other bot as well.

In the meantime, if anyone does notice posts which appear to be AI generated without saying so, please flag them for review. Other forums have seen this used as a way to build reputation for spam accounts. See What shall we do about answers generated by ChatGPT?

That’s not to say we can’t do something with AI. Actually, we’re doing something on this forum already — we’re using a classifier model to identify and flag potentially-toxic posts. (See Trying an AI/ML sentiment analysis system)
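For readers curious what a “classifier model” looks like in practice, here is a toy sketch of the flag-for-review idea using scikit-learn. The training examples, labels, and threshold are all invented; this is not the actual system running on this forum, just the general shape of one.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data; a real deployment would use a large labeled corpus.
train_posts = [
    "you are an idiot", "thanks, that fixed it",
    "this garbage distro is useless", "great explanation, works now",
]
train_labels = ["toxic", "ok", "toxic", "ok"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_posts, train_labels)

def should_flag(post, threshold=0.7):
    """Queue a post for human review when the toxicity probability is high."""
    proba = model.predict_proba([post])[0]
    toxic_idx = list(model.classes_).index("toxic")
    return proba[toxic_idx] >= threshold

print(should_flag("this distro is garbage"))
```

Note the design choice: the classifier only flags posts for human review; moderators still make the call, which keeps humans in the loop.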