Getting a lot of buzz about this with mixed details. I understand they are exploring it, but I'm not sure if this is true. I'm OK with vibe coding as long as the result is verified by a professional and reviewed by management, etc. Supposedly employees' bonuses are tied to AI and vibe coding for faster results! What are your thoughts, and is any of this true?
lol!
In order to prompt an LLM to the degree that it’s necessary and to ensure that what it’s building is actually correct, the person doing the prompting has to be able to write the code themselves. This is doubly so for code review, where the reviewer has to be considering the side effects, implications, assumptions and edge cases of the code being reviewed.
Management is rarely capable of this, in my experience.
I’d rather someone produce correct code than incorrect code faster. I’m not going to get a bonus on output that doesn’t work just because I knocked it out in a day rather than two weeks, since the time taken to unpick, debug, correct and test will be longer than both of those timeframes for all but the most trivial of programs. Paying bonuses based on speed is as bad an idea as using “number of lines written” to measure productivity.
Look, I know there are lots of opinions out there, but I had a lot of fun making a much better landing page for my Fedora People space using AI. It was totally “vibe coding” but I made one HTML page, I was able to better consider things like accessibility and color contrast, and I actually made the page responsive. It was not this way before.
It might be hard to tell now, but the page before was literally a barebones HTML page, with <h1>, <ul>, and <li> elements and not much else. I am actually really happy with the outcome of this small thing. Plus, it gets better indexing on search engines too, since I was able to prompt the LLM to consider things like search engine optimization on specific keywords, e.g. my name.
Knocking out a web page is one thing.
Making substantial changes to legacy code to implement new tax laws for a pensions company is another. People tend to get a bit tetchy if you blow their retirement fund because you “vibe coded” some legislated change that the Inland Revenue require or an investment rule.
I notice that on the page you have not mentioned that AI assisted in its creation, as is required by Fedora’s AI rules.
I know that is the case for code that is packaged. Is it also the case for infra as well?
Transparency: You MUST disclose the use of AI tools when the significant part of the contribution is taken from a tool without changes. You SHOULD disclose the other uses of AI tools, where it might be useful. Routine use of assistive tools for correcting grammar and spelling, or for clarifying language, does not require disclosure.
So it’s a little bit speculative. I say this to Justin ‘tongue in cheek’; he’s normally really great at disclosing his extensive use of AI.
It does raise the issue of how AI will creep into many aspects of our workflow.
@jflory7 do you know where the official Fedora AI policy is hosted? Council Policy Proposal: Policy on AI-Assisted Contributions - #243 by jflory7 says that it will be put at Council Policies but it is not there.
A great place to read about AI, vibe coding and how it works in practice is Hacker News; they regularly have tales from all sides.
As someone who is not really much of a coder (though my son is, and very good at it), I don’t really trust AI. I trust very basic Google responses at times, but if there is any doubt I will try to fact-check and make sure there are no errors in the response. Because, let’s face it, AI can and will make errors. And that is the problem. If humans do the coding and go through the proper steps, that brings a lot more validity to the process. Just offloading things to a machine seems to be a very lazy way to go about it, and in the end we want good code and not AI slop. I just don’t get the idea that we should constantly trust machines. Hell, when I use the self-checkout machine at the grocery store, it messes up all the time and accuses me of stealing simply because I moved my hand in the wrong manner. And then I get to deal with the person who is watching the machines. So AI is not a panacea. Yeah, it can probably be used as a tool at times. But it is 100% fallible and should not be trusted to get things right the first time…
My 0.02… Might have to say my 0.05 since they got rid of pennies now.
What I have been told by people successfully using AI coding tools is that the trick is to have tests to prove that the code works as required.
Using the tests you can get the AI to bug fix and try again.
But this is all hearsay as I’ve not jumped into the AI coding pool.
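As a rough sketch of that test-first loop (hypothetical function and requirements, using Python and pytest): pin the requirement down as a test, run it, and paste any failure output back to the model until the tests pass.

```python
# test_slugify.py -- a requirement pinned down as tests (hypothetical example).
# Workflow: run `pytest`, paste any failing output back to the AI, repeat.

import re

def slugify(title: str) -> str:
    """Turn a title into a URL slug (an AI-generated candidate implementation)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The tests define "correct"; the AI's job is only to make them pass.
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_separators():
    assert slugify("  Fedora --- Rocks  ") == "fedora-rocks"
```

The key design choice is that the human writes (or at least reviews) the tests, so the AI is iterating against a fixed, verifiable target rather than its own guess at the requirements.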
Writing the tests first (by hand) is certainly a big help, but by that point you’ve already done the design, and then you might as well just write the code yourself rather than spend an indeterminate amount of time telling the AI what it’s supposed to do and then correcting every style error or gaping logic hole it leaves behind when it finally does write something even somewhat acceptable. I absolutely can’t stand how it will use wildly different approaches to a conceptually similar thing even within the same generated method. I guess the model you pick does make a difference, but spending time trying out models instead of just writing the code I need doesn’t sound very appealing.
/rant
That said, I’ve found using AI to be a great help when trying to remember some obscure syntax or finding out how something works in code I’m not familiar with.
I made a status page with HTML: https://status.realmofespionage.xyz/
Syntax was easy (put stuff in HTML tags), and I made it responsive with one line:
<meta name="viewport" content="width=device-width, initial-scale=1" />
It’s bare-bones because that’s all it needs to be (text, info, links). No AI use, and my vibe coding was throwing <ul>s around to make spacing feel good.
But that Fedora page looks good and light! I run a full-blown Joomla instance for profile links.
If I vibe coded with AI assistance, I’d probably spend more time figuring out what I could remove to make it simple. No plans on using it personally, but I probably wouldn’t be opposed to using it while employed (I prefer maintaining creativity and would have to be paid to try AI).
You can describe the tests to the AI and avoid writing them I hear.
Sure, you absolutely can do that. I’ve had a decent experience with this the other way around as well, where you write the code you need first, then give the AI the diff of your changes, point it at the files where the tests should go, and having it write tests that match the functionality.
This approach is also useful if you have code that doesn’t have tests at all. You can have the AI generate characterization tests from existing code so that you can refactor it more safely.
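A characterization test in that sense records what the existing code *currently* does, not what a spec says it should do. A minimal sketch (the legacy function and its behaviour are hypothetical):

```python
# Characterization tests: pin down observed behaviour of legacy code
# before refactoring. (Hypothetical legacy function for illustration.)

def legacy_price(qty, unit=2.5):
    # Imagine this is old code nobody fully understands any more.
    total = qty * unit
    if qty > 10:
        total *= 0.9  # undocumented bulk discount
    return round(total, 2)

# Generated from running the code and recording outputs, not from a spec:
def test_small_order():
    assert legacy_price(4) == 10.0

def test_bulk_discount_kicks_in():
    assert legacy_price(12) == 27.0
```

Once these pass against the old code, you (or the AI) can refactor freely: any change in behaviour, intended or not, shows up as a test failure.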
Or maybe just don’t?
it’s all a bit too hyped up for me - there is a lot of good work published recently that looks at the security vulnerabilities in vibe-coded projects. and surprise, surprise - it’s not been great!!
i was just recently looking at this cool article by wiz security where they describe exposing MILLIONS of API keys and other user data in mere minutes - during a simple non-intrusive security review!!
Hacking Moltbook: The AI Social Network Any Human Can Control
some researchers have also looked at security vulnerabilities in AI generated code on github for example. Security Vulnerabilities in AI-Generated Code: A Large-Scale Analysis of Public GitHub Repositories
my favourite is another new paper looking at whether AI tools improve developer efficiency or not - as is often claimed by those in the spotlight.
(see many examples of the salesforce or microsoft CEOs boasting about how many developers are using AI in their companies. in fact i think it’s MANDATORY at microsoft now!! wtf?)
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
basically TL;DR - it’s nowhere near usable by non-devs for actual live projects that need to be secure and relatively bug-free. and in the case of improving efficiency of experienced devs - there is no overall consensus - but a lot of empirical data that suggests it’s not worth the hassle!!
yeah, honestly for some stuff like this it’s fine, i guess. and i also think generative AI could be a decent tool for learning…
however i would not trust it in a case where i need production-ready code for a serious project.
In the latter part of 2024 I was asked to take a look at GitHub Copilot to see if it would be useful for some projects we were about to start. At that time it struggled even when doing some basic programming tasks, so we opted to not use it. I tested it out earlier this year (again at work) and the difference was pretty shocking. I can understand why there’s so much hype, since the rate of improvement isn’t like anything I’ve seen before.
We’ll have to see if that rate of improvement continues into the next few years.
Fair enough. Good point. I will work on getting my Ansible setup updated with a footnote for how I deploy this micro-site: