• 0 Posts
Joined 2Y ago
Cake day: Oct 28, 2020


No, you are confusing flatpak with sandboxing. Sandboxing is a good thing. You don’t need flatpak to implement sandboxing. Additionally, good sandboxing has to be configured by trusted 3rd parties, like package maintainers, not by upstream developers, because the latter creates a conflict of interest.

Flatpak, snap and docker are the problem.

No thanks - that would harm the fediverse by allowing a lot of targeted trolling.

Walkscore gives very inflated ratings.

Debian, for providing security backports, pioneering reproducible builds, and handling software licensing carefully.

Urgh. No, I was thinking of UIs that are information-dense and allow quickly scanning across long threads and thousands of messages, e.g. https://usenet-abc.de/wiki/uploads/Team/Sylpheed2.7.1_big.jpg

No, the tools are crucial in presenting content in the right way to create the community.

Discussion groups for meaningful conversations, like Usenet/NNTP was.

Nonetheless, the concept of supply chain applies perfectly.

As SoCs constantly increase in both complexity and power, the amount and size of firmware has been increasing as well. It is becoming more difficult to find hardware that runs without any closed-source component.

The majority of closed source software is not innovative at all. It’s usually just a rehash of existing ideas and functions with a new UI.

Cloning it is also not innovative but FOSS is hardly to blame here. If anything, breaking users free from lock-in is the main innovative aspect.

First you release something, wait until it is widely adopted, and then add ways to control users or capture their data: for example, host content on a CDN you control, add paid extras, or switch licenses for later releases. All of these examples have happened in the past. The good old embrace-extend-lock-in.

…but it does not federate with Lemmy and other platforms on the fediverse. Meh :(

I’m surprised the author did not mention NNTP, the protocol that ran the largest federated discussion system since 1986.

ActivityPub reinvented NNTP with less efficiency and very poor documentation.

If anything, there’s nothing more democratic than allowing for infinitely different “views” of the content. Besides, there is nothing lazy in such an implementation; on the contrary.

That would only generate echo chambers. Instead, each user should see a personalized ranking of contents based on what they want and who they trust (and who their “friends” trust and so on).
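A “who their friends trust” ranking can be sketched as transitive trust with decay: directly trusted users count fully, their trusted users count less, and so on. This is only an illustrative sketch; the function and parameter names are invented, and real systems would need to handle cycles, spam, and much larger graphs.

```python
# Hypothetical sketch: propagate trust outward from the viewer with decay,
# so friends-of-friends count, but less. All names are invented.

def propagate_trust(direct_trust, edges, decay=0.5, depth=2):
    """direct_trust: {user: weight} assigned directly by the viewer.
    edges: {user: {other_user: weight}} trust each user assigns to others.
    Returns combined trust scores for every reachable user."""
    scores = dict(direct_trust)
    frontier = direct_trust
    for _ in range(depth):
        nxt = {}
        for user, weight in frontier.items():
            for friend, friend_weight in edges.get(user, {}).items():
                contrib = weight * friend_weight * decay
                if contrib > scores.get(friend, 0.0):
                    nxt[friend] = max(nxt.get(friend, 0.0), contrib)
        # Keep only the strongest trust path found so far for each user.
        for user, weight in nxt.items():
            if weight > scores.get(user, 0.0):
                scores[user] = weight
        frontier = nxt
    return scores

# Example: I trust alice; alice trusts bob; bob trusts carol.
edges = {"alice": {"bob": 1.0}, "bob": {"carol": 0.8}}
scores = propagate_trust({"alice": 1.0}, edges)
# → {"alice": 1.0, "bob": 0.5, "carol": 0.2}
```

Content from users with a higher trust score would then be ranked higher for that particular viewer, so every user sees a different ordering.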

Because it’s stable and reliable. Other protocols come and go every 10 years.

While the article provides a good description of fuzzing, static analysis, etc., it focuses on only a subset of threats and mitigations. There is much more:

  • “How security fixes work”: Linux distributions do a ton of work to implement security fixes for stable releases without input from upstream developers. (And sometimes projects are completely abandoned by upstream developers.) The ability of 3rd parties to produce security patches depends on having access to source code and is absolutely crucial for high-security environments (e.g. banks, payment processors…). Some companies pay a lot of money for such a service. This aspect is a bit understated under “Good counter-arguments”.
  • Software supply chain attacks are a big issue. Open source mitigates the problem by creating transparency on what is used in a build. OS distributions solve the problem by doing reviews and freeze periods.
  • Some Linux distributions go even further and provide reproducible builds. This is not possible with closed source.
  • A transparent development process creates accountability and limits the ability for a malicious developer to insert backdoors/bugdoors. This is quite important.
  • Access to source code, commit history and bug trackers allows end users to quickly gain an understanding of the quality of the development process and the handling of security issues in the past.
  • …it also enables authorship and trust between developers and users.
  • End users and 3rd parties can contribute security-related improvements e.g. sandboxing.
  • Companies can suddenly terminate or slow down development or security support. Community-driven projects and the ability to fork strongly mitigate that risk.
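The reproducible-builds point above can be sketched simply: when a build is bit-for-bit reproducible, independent parties can rebuild the package and compare checksums, so a single compromised build machine cannot slip in a modified binary unnoticed. A minimal illustration (hash comparison only; real rebuilder infrastructure such as Debian’s also pins toolchains, timestamps, and build paths):

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    # In practice this would hash the built package file on disk.
    return hashlib.sha256(data).hexdigest()

def independently_verified(digests) -> bool:
    """True only if every independent rebuilder produced identical bits."""
    return len(set(digests)) == 1

official = artifact_digest(b"package contents")
rebuilder = artifact_digest(b"package contents")          # honest rebuild
tampered = artifact_digest(b"package contents + backdoor")  # compromised builder

assert independently_verified([official, rebuilder])
assert not independently_verified([official, tampered])
```

The key property is that verification requires no trust in any single builder; with closed source there is nothing for a third party to rebuild and compare against.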

I agree that claiming that something is secure just because it’s FLOSS is an oversimplification. Security is a much bigger and broader process than just analyzing a binary or some sources.

Debian, if you flip around the “based on” requirement.

Besides, what uses significant amounts of RAM is not “the distro” but primarily the desktop environment, some daemons, and little more. You can try LXDE as a lightweight desktop environment. Good luck with browsers, though.

It’s incorrect to call BSD/MIT “not political”. It allows proprietization and does not protect users and authors from tivoization, patents and trademarks.

If you just want to use it, Ubuntu. If you want to learn, Debian.

This sounds very much like a smear piece. For a list of projects receiving funding see: https://en.wikipedia.org/wiki/Linux_Foundation

Sorting/scoring of posts and comments based on votes from users that I trust or users with similar voting pattern to mine.
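One way to sketch the “similar voting pattern” part (purely illustrative; the function names and data shapes are invented): compare my past votes with each voter’s past votes using cosine similarity, then weight their votes on a new post by that similarity.

```python
import math

def similarity(mine: dict, theirs: dict) -> float:
    """Cosine similarity between two sparse vote vectors ({post_id: +1/-1})."""
    common = set(mine) & set(theirs)
    dot = sum(mine[p] * theirs[p] for p in common)
    norm = math.sqrt(sum(v * v for v in mine.values())) * \
           math.sqrt(sum(v * v for v in theirs.values()))
    return dot / norm if norm else 0.0

def personalized_score(post_votes: dict, my_votes: dict, all_votes: dict) -> float:
    """Weight each voter's +1/-1 vote on a post by their similarity to me."""
    return sum(vote * similarity(my_votes, all_votes.get(user, {}))
               for user, vote in post_votes.items())

# ann votes like me, bob votes the opposite way:
my_votes = {"p1": 1, "p2": -1}
all_votes = {"ann": {"p1": 1, "p2": -1}, "bob": {"p1": -1, "p2": 1}}
score = personalized_score({"ann": 1, "bob": -1}, my_votes, all_votes)
# ann's upvote counts positively, bob's downvote also raises the score
```

Users I explicitly trust could simply be given a fixed high weight instead of a computed similarity; either way, every reader gets their own ranking rather than a single global one.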

Flatpak and appimage are far from a security improvement. The real solution is for upstream developers to release software that is not incredibly difficult to build and provide security fixes for.

Excellent points. Many “alternatives” to traditional distributions like docker, flatpak and similar harm security and stability in the long term. Users need systems that receive security updates for years without having to break functionality.

The article is indeed one-sided and often makes exaggerated claims.

One example: "This is in contrast to a rolling release model, in which users can update as soon as the software is released, thereby acquiring all security fixes up to that point. "

This ignores the fact that new releases are the only source of new vulnerabilities.

Plus, many new vulnerabilities have yet to be reported. A 0-day in the wild is usually worse than a published vulnerability: at least you can learn about the latter and decide how to handle it.

Such statements can be profoundly misleading when taken out of context.

Security is complex and multi-faceted. It needs to be understood with the proper context:

  • what type of user are we protecting: skilled, unskilled, an entire company? An entire nation?
  • what type of data are we protecting: a database? A user’s email address, browsing activity, connection metadata?
  • what is the threat model, and who is the attacker: a simple email scam? Surveillance from big companies? A targeted attack from a nation state?

The majority of security breaches are surprisingly low-tech (phishing, guessable passwords, stalkerware, built-in telemetry…).

Without context an article that goes “Linux being secure is a common misconception in the security and privacy realm.” can easily fuel FUD.

Hurd or not, we need a new kernel. Linux is showing its limits around security and modularity. Writing drivers is difficult and error-prone, and users need to trust drivers not to introduce vulnerabilities. Vendors often refuse to write drivers, or to write them well enough to be accepted into mainline Linux. Also, Linux and Hurd are not under GPLv3.

I did and it’s misleading. Debian itself is independent.

Anonymous comments would be best, but only if it’s integrated with upvoting/downvoting/flagging to provide personalized ranking of comments.