Given the recurring vulnerabilities in OpenSSH, it might be wise to consider switching to WolfSSH as a more secure alternative.
How do we know that WolfSSH is any better reviewed than OpenSSH?
What is there to stop WolfSSH from also having undiscovered bugs?
I asked my local AI, OLMo running in Ollama, and it said WolfSSL is better after looking into the codebase.
That’s a terrible argument. AIs don’t “look” at code, nor do they do security reviews.
The Qualys security advisory says:
OpenSSH is one of the most secure software in the
world; this vulnerability is one slip-up in an otherwise near-flawless
implementation. Its defense-in-depth design and code are a model and an
inspiration, and we thank OpenSSH’s developers for their exemplary work.
I’ve never seen security researchers praise the software they have been investigating so thoroughly before.
But the AI said it has a smaller codebase and a more modern design.
Also, Llama 3 said that WolfSSH is backed by a company, so it is well maintained too.
But yes, your point is also valid. If I could run a larger, better model such as DeepSeek 2 or Llama 3 70B (huge RAM required), I could take a better look at the security aspects of the codebase.
Also, musl-based systems are largely unaffected: musl libc: "OpenSSH sshd on musl-based systems is not vulnera…" - Fosstodon
When I first read your post I thought you were referring to something different (maybe some sort of company I wasn’t aware of?), or that I was misunderstanding your statement due to a language barrier.
Now it appears more clear to me you were simply referring to “toy” AI models…
You have an extremely problematic understanding of what today’s AI tools are capable of and what their value is. You are pitting an AI’s “random text generation” against the work of teams of experienced, expert developers.
As general advice, the sooner you start treating AI as a “toy”, the better chance you have of actually coming across useful and accurate information and learning to do actual research on topics that interest you. I say this kindly, with the best of intentions.
What I know is that models are built from a huge amount of code and data, and they manipulate matrices to give answers. Yes, it is kind of a toy, and I did not say it can develop something, but it can give a basic view of a project. Maybe that was wrong.
In 10 years’ time AI tech is very likely to be a valuable source of information, but today it is not even close. Typically LLMs are trained on data from the internet that is of questionable quality.
Even setting aside the misguided use of an LLM as a knowledge base, CVE counting is itself misguided. This is just the same tired argument of “use musl / openrc / runit / libressl / doas / whatever instead of the alternative because it has fewer CVEs and a smaller codebase”. The CVE disclosure itself (not the underlying vulnerability), is a good thing. It means that the right people are scrutinizing the project and uncovering vulnerabilities. A project that is widely used and well scrutinized with a high rate of CVE disclosure is infinitely preferable to a rarely used and unscrutinized project with few CVEs.
How do I know which model has the best-quality data? I found OLMo by AllenAI, but it is still not that much, and I can’t expect to get information about the data used for training Llama, Gemma, Phi, or Aya… I also only recently got a server running Ollama, so that data can’t be collected. Any suggestions?
Open a new topic for this, please.
The more important question is: when will Fedora package and push out the 9.8 version, which isn’t vulnerable?
Or do we just download the latest version and install it ourselves? Seems a painful way of doing it, but I will if needed.
The patched 9.6p1-1.fc40.4 version was pushed to F40 stable three days ago. So, you probably already have it.
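If you want to confirm, you can query rpm for the installed build. Here is a minimal sketch, assuming an RPM-based Fedora 40 system with the openssh-server package installed; the 9.6p1-1.fc40.4 build it compares against is the one mentioned above:

```python
# Minimal sketch: compare the installed openssh-server build against the
# patched 9.6p1-1.fc40.4 release mentioned above. Assumes an RPM-based
# Fedora system; check=True will raise if the package is not installed.
import subprocess

PATCHED = "9.6p1-1.fc40.4"

# rpm prints the version-release string of the installed package,
# e.g. "9.6p1-1.fc40.4".
installed = subprocess.run(
    ["rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}", "openssh-server"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f"installed: {installed} (patched build: {PATCHED})")
if installed == PATCHED:
    print("You already have the fixed build.")
else:
    print("Different build; anything newer than the patched release is also fine.")
```

A plain `rpm -q openssh-server` in a terminal tells you the same thing, of course.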
However, unfortunately on FCOS, where this package is especially relevant, the patched version missed the cutoff, which means probably another two weeks until it sees the patch.
Thanks. Didn’t realise they’d done a backport fix for an earlier version. Was looking for the 9.8 version